This is an automated email from the git hooks/post-receive script. It was generated because a ref change was pushed to the repository containing the project "".
The branch, next has been updated discards ba35a19abe1f5d3022975ce9b14a30ff3c4c1ae5 (commit) discards 9a03587efe2571aaf9de561781b6c43d2e6a554f (commit) discards 493bfccaae853707eeb6c08144c7e0a253cba761 (commit) discards f275346606fdf487975cdef6654f81987c2df666 (commit) discards 73d3de15ddcf99ac67fe14a59ba6e2f70907bcfe (commit) discards 23e7745272bd405483da737824af25e2e18c8b21 (commit) discards 4be29b50a3de7fed08d427f0fab38ae61548d3e2 (commit) discards 26e0820a7bc833239a8a66bc15d2eab5fd3edb87 (commit) discards 22c01aac0d2ff868a02a20d394e4c763b1093ec1 (commit) discards cbe195a283c703d84eceaa38e81bc305627074c1 (commit) discards 6f7d32d9fa1ff48f3907f8ae9c9d61d02c1e5eee (commit) discards 66e87bb0ae69813a7d69a0a357347ebe60f99aba (commit) discards 4f87885fe0d7301cc8ff7da6d925c18a09e7e6be (commit) discards ec84962de6f2b392bbde9e8ebc802781979b95dc (commit) discards 6b07d871100f5eb92a14e8b5eec56a4bf014c60a (commit) discards 5ea562d5d0c4f292ede6d366e205641b0db6c454 (commit) discards 9fc7566c08ea131ea24c8f95e5ac92b1b45c1aa4 (commit) discards 88339aef26f169fe01f06e158835b1d2f3617db7 (commit) discards fffeaa22246f15a4b7c6c41d4bcfd7943c2b33cc (commit) discards 02ab1ec8f2c3e9fd6934c296d68f22ba33620706 (commit) discards 3f5429ed2383056b0a085b99db6d180c39afa85e (commit) discards b3ef83fd050912b82dd2dcaa2063ef1e8c12ee4d (commit) discards f0eb70696dd46d7fd35e221dfc86733ae26f38ad (commit) discards f47f91e7e867fb55e5f41df66b0c4ef6fbf3a293 (commit) discards c310ba4fe97a50d419274d709d9f57f2659fa3e6 (commit) discards 0c33f00fa6ee0e7df4ce1276b160c6e51a74e546 (commit) discards 62fdec5f1125eb881081ab5b520e65d71319fb71 (commit) discards 0b7865e710caedbe88719c6ceeb79e94aa8a292d (commit) discards 38e3b3d6ccf37d172fd7cdf691f0f769a64efb75 (commit) discards f31f2fbb7d6cddec6f7f239ddff67cd1b23878a8 (commit) discards 363747697803d1b081a425e62a26dce13a788852 (commit) discards f6a1813955fbbfd3e710dc949d239ab2ec499a45 (commit) discards 0f8ba06581b69c75b0567af14aa86ff7a1d1c20a (commit) discards 57096147b4c4577ee2c3ff0b6ab10a8fbdd339b5 (commit) discards 87e2afd61019341264dcb2dc7d4593c33fd426aa (commit) discards d0771f708492246c8a8d9b10ff550f29196064dc (commit) discards f06971cf35f58b6ed30a46446c185bd6a65ba388 (commit) discards 7412355450f0a78bd7cfe112a8520869e91e9647 (commit) discards 6e57710dc7eb24c3c91a30e6d1f5d9cd3ad9976a (commit) discards 7e93b42f889e544bbf63576e36d2f1b313fd7a4e (commit) discards 2b994eae58d69d84126bd9db613f630884220d77 (commit) discards da5eb63ba1048b0f18e6c436bb655d0649dc1bd7 (commit) discards 3c67b97bd06590235287d06d18b662c93aa64842 (commit) discards 62164116ae9f072ddeaa3206eaf3b24c8dc9c884 (commit) discards fdb1aa2981dbc68c63e11abd2663294338b13764 (commit) discards 023d4a942e89f86e66c6d5226d6e792936c37df3 (commit) discards e9b7c770c39c2409da68619a8ecc9090acda0c12 (commit) discards 3c2cc60893a9b1fe62962df73bfe35b9dbc319c2 (commit) discards af005d182ee0eb332cab3fa8f62cdb91135bcaa2 (commit) discards 942908c478c41038bbf1d360b664d53b42f7f76e (commit) discards 579fac5fead538bfb4f052e5842ebc8176bbf5ea (commit) discards 36c59903f1f898567d2a8eafbf5dc0eee93b2a58 (commit) discards 516c8bdf27bebf42ddc726b6ce6dd65201ee5bef (commit) discards 642c8293c63752aa5f07e2e828cdfcf4ca3c2e62 (commit) discards 804c1667832794c43a1a4867b778b31e7a104e9a (commit) discards 9e4bca41f1df50b1cf1f324118bb669f9bee0369 (commit) discards f27d3d3bdd745a000b58a7ce95d77707b957c903 (commit) discards 55c4f518ecd510d2d812e3fc766d6690f65871c3 (commit) discards 2deb9c98440a06f0b2e6733693f58d081059dc0d (commit) discards 
cbc97ae9237f1a206d1f4e25ae7e98ad99d430cb (commit) discards cb77ea8b33d82fa8a44223e37d07c1656afc78ff (commit) discards d30b8d8e6286ad2e2fe1385c538b92ae09af0539 (commit) discards 1aa2cb2bfe1b5aa5fe807e6de6cc091a3da3c4e1 (commit) discards 64365a76728f27c901e9e07e260a5d72e3ebec4e (commit) discards 91b08bd322279d5e32b936df0a94eeb6f0d32701 (commit) discards 5881b8ce192c6db97f69c3b143d03d662bd72cc0 (commit) discards 69346e799127ddbaa0f2610788ee28e31af86cc8 (commit) discards bd9c24a059564750acbed7063075f3ff6707fdbf (commit) discards 6e3e76bf1f5a29dc8b7d48d12e4a1f96addd16c5 (commit) discards e27cbd208c45c6eb288216d844eb354a7e17eace (commit) discards 37431455c373feeb019519820635d074747e0635 (commit) discards 16e45d1628bb53ce77715302829282eb78ff4b9e (commit) discards 3d94bb0bbbae3799b2d1a7ff929ca57815325f23 (commit) discards 8b84bdceecaf918c5411d373d68738b969fd0036 (commit) discards 0894bfe88936f1fd8fb8b116589df1705dadff72 (commit) discards b16f7a8635e3c67df767750e3ed9a2122d2e0ee4 (commit) discards 4a2a867baa2129b48b4ea799b4de4b78a533861d (commit) discards ead9d135a0387b94f2d9b1c38e10e19e025225bf (commit) discards 68b28492c93e9a2d740dbec95ed4d4fb58d7df3a (commit) discards 2235e332502e6eedee0e12d10d4db47addd258fa (commit) discards 008b29f015ba5102bd172acbe4611ad331e8a47f (commit) discards c7450055c888bc58b51ba3f8997cac2ca34af836 (commit) discards 0971ff44dd6a94d3d8bd9e2411d0a762909113da (commit) discards 5c9d45b42d98e76000a2beb1067f522d8b0a4266 (commit) discards b204792f6ea65546cc4753e3c70f594910f14254 (commit) discards f77ef2709e6ae8ceed3b72fc1cf64349e8c10189 (commit) discards f38ed7bb7f66d0b13ee8c876a0f9060ffcfb3741 (commit) discards 7390ea030a226fe9c56f8c82c25bc061f1aa6ec3 (commit) discards da118e3efd40f09d119ae73401cc3539530f0cce (commit) discards f208ef6db79d0678b0031597790913da4e1cd747 (commit) discards b9810af713c632e00f05dfbdaee2531428893033 (commit) discards 9ef719bac0bbb62947c8a53170084a486e6a10fa (commit) discards b16cfc6a848251a49aed921a16ed5f3cd7ec25e9 (commit) discards ea852650a2e9ee662befe8a3f60578421084688e (commit) discards be72e2e1c8f9cd0a64e83ad0a2cfe3b812a620a3 (commit) discards 7dd92fa53bf2a1cca599992ae1f3092af7740225 (commit) discards ddd6a2da8a950e5c1e42aa00b655774b5b766172 (commit) discards 9db160472e259daf9d9f7eb4a6ca1c526f956b50 (commit) discards 36a0c349b4b611e6d057400c3472483e622fdfd4 (commit) discards 596122e77aec9a00e3ab50bac770616f7ff24c02 (commit) discards f64ed6f67d2bc7677f3cb5f4400d913c4a1158b9 (commit) discards ea4363fc05c3f9533679aaa798d663409cca6b09 (commit) discards 63498a1225b83d30123a43b0a4b09ab48067e32d (commit) discards 2f5a5abd73cef24203d563bc7c7969fc1671eec1 (commit) discards a97664227a945b6156bdda7cb48e4f2d31d065bc (commit) discards ecb9b40f65a753da1ff4f84f88993413ba43385f (commit) discards a3d20d1a6c00367298b386d0b7a7c52afd86f530 (commit) discards d27efe3b23ddba7713e262cc754775f262a81ad1 (commit) discards 0e178219d99fb1ed65ffcc24dfc0e899ef3ffe70 (commit) discards ba009ab4187fdd1781c85ec5092c72f3f44b03b3 (commit) discards 5c06d5e6a2a3550f40311e40649c7e7fc1708614 (commit) discards c74e163eca7161367a7e60fbf42d010913cee051 (commit) discards e9534cef8c84589d179058cfd3757e514f96958c (commit) discards f6184cc74cc6b875a63154191f9126f3b731cb73 (commit) discards da4b959ebbd116e6f427401a9843871335261a4d (commit) discards 623416766d925e837ed4c19fb528dfa654862bf1 (commit) discards 9436be8cbada1263a8b33a4b04c135d4fc217a40 (commit) discards 9a37920864f8eb7b789d850102c5369f9d931e67 (commit) discards 010bfd1604531765bd9117240b052f657512d023 (commit) discards 
c3bb8cace001c7b2faf7db335b01016c6be70fbc (commit) discards fada6435dbc536bdd0a940960a2f84f41261f1f8 (commit) discards 73c931d6e2ceacb59b92d3c0ddf26b27c9e56a3b (commit) discards 585d6d4a8d00f1dbb780e913f6568b6f2cd3d667 (commit) discards 84cc70b3a5bc8eb32930b32ed2959f02d6cdda2f (commit) discards 3bf9c56b4add89fe124d42538aa82dff924601c5 (commit) discards 6f690b85f94a137141e29860f86d67b1b6c82eec (commit) discards e55b15e933b486b85f35c9279d0c5ed646a2fbde (commit) discards 01cc89e7558b865c33674b7c607cad30bb659afa (commit) via 0b1dc8a9a69252ce56d13521284683faff0e3e35 (commit) via 4132497ff67871d67c088c99b782cba10817bd28 (commit) via cd83d5d1114a40abf554c59bd92c8a8199b10c7a (commit) via 112209c3fc672aee4a3074aa784aefe00f32d250 (commit) via 1837825e219b6bd62a3ecda2ed68873a958b8171 (commit) via fda9a9e4887eff3f0526c7bcef5e29ff511cd4d8 (commit) via 4aaa74fceecee7b1538546d0b67347569c1239c6 (commit) via 4d0b7588251ed6a5de781f9220c3ace2831a68b2 (commit) via 41febe9fe6ac762174579e86874d65ec8a2c5485 (commit) via 6e09a7a90079b6789b83d9234306ac02a8f6d8da (commit) via 6388e398998426c4b18cde893131d54630879bdf (commit) via ca3b3950cc901d0e8db1f2bb9961c3bac9491c88 (commit) via dd3031076b80cf6d8b9df6024c44555c77b150fb (commit) via dfeba061509fb1451351ffb168a458e7d9ae4126 (commit) via f20066c7949466430186cce217589910fa75fd61 (commit) via 9fe4043f4cf402a4695e8c0e4887e87da60fcb33 (commit) via 4d9de3d54c49ec69d2366a01ad5b2b987943a5c5 (commit) via 970f7a4ae91932c49d1b9dc00bfa861f7f2a0197 (commit) via 8ae51b0364e25e45eddb4cf2e175b269c0736436 (commit) via 76543549b422a53dde44de5900071554f65aa212 (commit) via 2c060a9d3b9f067f4cca2094be845e54392077ec (commit) via b6e1fea7ef8a443579c8f197c96ca2acc7c0577d (commit) via fe182fb0a97c1989747ae96b401a10d34c878480 (commit) via 82c67c0b1755f4d4e0f5b1df2e6356150cca4166 (commit) via 80a30e1513a9614622e08657b94dca56db9e250f (commit) via 8ab727c5f3a5b5aa556a84e04870f1f3fe3a073b (commit) via 2b36166d647c64aa545e6ecc23a1d464fcd2c3c0 (commit) via d3a7028a5708506b63dc6a06846cd05c7552bbf4 (commit) via 4208ef3e2ce9f6093ae540af7de20759849782b6 (commit) via dba0f42c9e24d090b29df165f610fc0df051b018 (commit) via 46b88467e667e26fed282b234c481a14fcecff62 (commit) via f195caa92ef8457c2c670fd3449ea6521e7ad823 (commit) via 7f97683b1afc4826825f0db0fcac40858892494a (commit) via 52592157f25c9e2e3876dc3624cf91b1b71127ad (commit) via 4a320c0af291c07f33b1a295f72704215169d562 (commit) via f5cde8870a425d51d08df7a1b4b3fb1cf06406f0 (commit) via c3830c3936b76faa423563bbd104e732120f9523 (commit) via cc52f0675d1674a80cf1806dc8c1c4e3887afdd1 (commit) via 4b698023210b7f742c053707ba131097b570276d (commit) via 2eb3e87bc56b2a02cb10637e5ce3a7d1157472cf (commit) via ba203281cfd10b88a5d5b8f143ea34d14d373b58 (commit) via 527ee67cb434e5e7c8015fa8c7d15f2ac25b1d20 (commit) via 101e8188088b91e8d85e0fef0d6674dae05c306e (commit) via 343579dda50fcefd5498cfe146a438b8fdb3c065 (commit) via b4e0a8b91a422cbf28e0406a5076025894103984 (commit) via a611a514682dea61ca142b51a28194a39a286fa7 (commit) via d345d75c975bd98f61bc2e04907b3e232d88083c (commit) via 7e40217271ae17ce19abd873140439c51a525fb1 (commit) via fbc400dc8c35c220cbb41531d12c933f7c4226d1 (commit) via 6cef872afcac78016d095d426fa9f3d9055c3856 (commit) via ffb22c37e4a483cc647c8ba8f4a9329fa83639aa (commit) via 3fd85fb6f45d859e6f19eeadda69992858f06f22 (commit) via e91877df47118468e940a58047d94fe4195e4b1e (commit) via 4194d93ab8095ef850e332a1433d8d810b7418a1 (commit) via f723bcef66945acb0738acc8a40b8ebd5851b84d (commit) via fa4063b4104784bdc1c20fe3b519716e4413c245 (commit) via 
9c4d778148d514adf8586939123acdcdc022e8e5 (commit) via b06ee329f944aeb7f3d03646aac384f88a00a7a5 (commit) via a2676059469f61f1ffd58090b74f4dd975d172ac (commit) via 39acf771084aa4f16b60a6bdf9e5f3bed4f88cd9 (commit) via 0d6d0923b2dd4d3097ea992af76408fd4281d84e (commit) via 2997a78f270cdb34c82f805f8103f660ed1bcdf2 (commit) via bacd73a34768ce859f8136f29bda70bbccbdb45e (commit) via 63ef9b3714c9410dd1b5a55e3bd50de49f23dfcb (commit) via 4ee154864b47712a45cfdb23ea6c22b46bfb1abf (commit) via 552e46339939933ee7ed305f1dda82ead362ece9 (commit) via 1d61093f4ea7a9f62cc69e6fdb6fb82b246af817 (commit) via 6d78d33df6d33ebe1b933383c4858df5e9f7f33b (commit) via 4cf84d158adc7e84bed69ceac34bbbb3dee9587e (commit) via 67abee1a4548878ccc93b57fbd84c3fe68147bf6 (commit) via afb2ecf5e45e10b0e1258c85fc1b80f8ce447646 (commit) via fe9c6cc8e5e88e068ee9f1f4dc29b7f32411f4d7 (commit) via 6b5b78245c2ebbc0c907dba9809e9002c7214959 (commit) via ba23aa731a85709a84ea0137a918b07cf4811fc2 (commit) via 4b7e6b82f14ef2d09b91b1c197e84dbfe0e8a09b (commit) via 683a975963d150e5dae12649cbd1abc003ee2c39 (commit) via dcffa7faecf3bf4a66e6f5ce745bcde3a4f0b7ed (commit) via c247659448bfd45c1bb648d7508f7db0b225b7b8 (commit) via 95918aa496a22794af654c582830e2a2d8b914a7 (commit) via 7bcd45812c020a9e67cf4d848e5bbef1384f58af (commit) via 9e95d6a1a9025bfafb846b4d805b1dc146657a10 (commit) via bd4a9492d862e0636fba99bd76aaa19952de2f44 (commit) via d858987a1dfea90625389ab5a8e14379a23dbb52 (commit) via 01ea9db19e2eb2f978d4fd22b1e341a741bb1e9c (commit) via 0a0e4a684f8e5420295eda3df4fece6361b4d797 (commit) via c6dc829d0c6a54a08756e13e2f3388f0bda61245 (commit) via a296693d3dbbe98a5616406d6535dee85cbd31ba (commit) via ec7a8e0fabe2269cf824ab809ad52a8763739be1 (commit) via 02c46a3a671bca6de5159a59be45663bca516753 (commit) via 123327606c2dd95a6a85c80e74ad172932195631 (commit) via 936ce9f30a85285f70e26038eb5ea8637622fea2 (commit) via ac2d233f2bcfb8d70fa005ae8fa19cce41b4a238 (commit) via 97f2672e48ea08abd00ed7f3b1fd8420a217779c (commit) via d9615289bf6f60bd3a04f8138d5b487efda96a49 (commit) via b9877d54ec4c2259dec17751f6580f110fd447a5 (commit) via ac563d15b95d884764ffa2f48eedce6f5b408fca (commit) via 279ecc54b69ad1621fbba837bee31adcc9fd704a (commit) via ad7f8f4ea11a8e40d853cd9b2b0bc3e6f7876a8b (commit) via 50cfbff244a1fa29314520eb9ca9bdf5df445df6 (commit) via 234dd2f623d73b069e43282baf96b48473d0ef1c (commit) via b93fd7af775a04c50a064a241d82ba3b7bf999f7 (commit) via 1e1312c15e96be77eccc0fb8e3aa35d4c7da72f6 (commit) via eda2ce7c6d9998155edde42617501cbaea5e03f5 (commit) via 9b648c4cd201ff8fbcbaf12512482b0a8f952d8f (commit) via f62c9aa3f3f48080a61a1e71cd649f1d65539ff5 (commit) via 244916315fdcb665f66b4f235782feee26a53c89 (commit) via e5d2edccc685fcde88793f5514d1fdb2654ecfa4 (commit) via 52553fa3972109a47e948321236dcabf29503f61 (commit) via 4b751b4dd7e2f000d2ed0268d51878c5ff982c2b (commit) via 21dad81b51443c49dc51ede63516f4a434e217cc (commit) via 474dac39b8d4ad7f2ac3768887e36610188c16c2 (commit) via b8c6689e4c81277067a2dd5f30598e0ddc7dc2c5 (commit) via ff56734fa58db5043a6bb358611cdc2a1c5de4a3 (commit) via fd86c7fd19820e21c182e2c0e043331e6aab6282 (commit) via 4a58145d0ca4e62ff41b052ed800acee7d0a97e1 (commit) via b2d4275c2df6a7fcd9796fafed74b59292cee26e (commit) via 5ffbdadc12513adb3e8847692d51963cab223084 (commit) via c709c9f1f235551c27f8db14f7c947f334b10d98 (commit) via 5d2595d1b470497c7fa04fe698f79a5f0346e5b8 (commit) via ad6219e3682d94fb8971261936ace46399ed2085 (commit) via 47b343a8f508b1dad511a1498999caa876c12eb5 (commit) via 5d35ffb5d02c6d3f4cabd170969846b56878f8fd 
(commit) via 24c6e10cba9d6efeaf13e4ea2a7a422764ae5eb2 (commit) via b18dc645a68fd0ae50f1a361aa408b3e74e46c0d (commit) via 29dde30264da8106d64328b0d81eb19dad0065ac (commit) via 92ae191876c39a3f5d4fba63e58dd9e8c444239b (commit) via 6104182f253161f17fd3ca5a377ed1bac62e366e (commit) via 0142d42744e606257c083ec0ac0077e0cd554e21 (commit) via 2b2ff3f281e17d19a3a7a00a3c8ca5460751cd6f (commit) via e454f36a4a7c70dafd87eab03b3ba740dc10860a (commit) via 27a05a1d61f3b7e4d58ac0e89db43db069dcc9a3 (commit) via 9c0fc2112137a07b5265646338a4034662d299b3 (commit) via 0e2b3a3597d139df1ff5d3f0338aa5cf26e34a43 (commit) via 320bb2e9a29256de37bfdb10b9dde3fd0f0d4a5d (commit) via 65d66b94d66e6b954f3430b90a67eaeca7bf37c6 (commit) via ba9fedae2040d39f71fa8aee6db9512c5cfe21e4 (commit) via ed2364544979bbb83f68bf61ea15e9fb86e2e994 (commit) via 900c9ca3d4099178a328433d32d84a825264aaa1 (commit) via d660e0e1647d7f1a6c6698b6f1b79b4590b84e1f (commit) via 42d0b99c453f269fbe21cc92652440bdbbd10ba4 (commit) via d25d276c649f57021f2bd3575430735ff146a025 (commit)
This update added new revisions after undoing existing revisions. That is to say, the old revision is not a strict subset of the new revision. This situation occurs when you --force push a change and generate a repository containing something like this:
 * -- * -- B -- O -- O -- O (ba35a19abe1f5d3022975ce9b14a30ff3c4c1ae5)
            \
             N -- N -- N (0b1dc8a9a69252ce56d13521284683faff0e3e35)
When this happens we assume that you've already had alert emails for all of the O revisions, and so we here report only the revisions in the N branch from the common base, B.
Those revisions listed above that are new to this repository have not appeared on any other notification email; so we list those revisions in full, below.
- Log -----------------------------------------------------------------
commit 0b1dc8a9a69252ce56d13521284683faff0e3e35
Author: Petri Savolainen <petri.savolainen@nokia.com>
Date:   Tue Jan 10 11:19:09 2017 +0200
validation: packet: limit number of failed asserts
When a data compare fails on one byte, it will likely fail on many other bytes as well and flood the CUnit output with assertion failures. Limit the number of failed asserts to one.
Signed-off-by: Petri Savolainen <petri.savolainen@nokia.com>
Reviewed-and-tested-by: Bill Fischofer <bill.fischofer@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/test/common_plat/validation/api/packet/packet.c b/test/common_plat/validation/api/packet/packet.c index a48b238..fa5206f 100644 --- a/test/common_plat/validation/api/packet/packet.c +++ b/test/common_plat/validation/api/packet/packet.c @@ -1564,12 +1564,20 @@ void packet_test_extend_small(void) CU_ASSERT(odp_packet_copy_to_mem(pkt, 0, len, buf) == 0);
for (i = 0; i < len; i++) { + int match; + if (tail) { - /* assert needs brackets */ - CU_ASSERT(buf[i] == (i % 256)); + match = (buf[i] == (i % 256)); + CU_ASSERT(match); } else { - CU_ASSERT(buf[len - 1 - i] == (i % 256)); + match = (buf[len - 1 - i] == (i % 256)); + CU_ASSERT(match); } + + /* Limit the number of failed asserts to + one per packet */ + if (!match) + break; }
odp_packet_free(pkt); @@ -1613,13 +1621,6 @@ void packet_test_extend_large(void) ext_len = len / div; cur_len = ext_len;
- div++; - if (div > num_div) { - /* test extend head */ - div = 1; - tail = 0; - } - pkt = odp_packet_alloc(pool, ext_len); CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID);
@@ -1678,15 +1679,30 @@ void packet_test_extend_large(void) CU_ASSERT(odp_packet_copy_to_mem(pkt, 0, len, buf) == 0);
for (i = 0; i < len; i++) { + int match; + if (tail) { - /* assert needs brackets */ - CU_ASSERT(buf[i] == (i % 256)); + match = (buf[i] == (i % 256)); + CU_ASSERT(match); } else { - CU_ASSERT(buf[len - 1 - i] == (i % 256)); + match = (buf[len - 1 - i] == (i % 256)); + CU_ASSERT(match); } + + /* Limit the number of failed asserts to + one per packet */ + if (!match) + break; }
odp_packet_free(pkt); + + div++; + if (div > num_div) { + /* test extend head */ + div = 1; + tail = 0; + } }
CU_ASSERT(odp_pool_destroy(pool) == 0); @@ -1782,12 +1798,20 @@ void packet_test_extend_mix(void) CU_ASSERT(odp_packet_copy_to_mem(pkt, 0, len, buf) == 0);
for (i = 0; i < len; i++) { + int match; + if (tail) { - /* assert needs brackets */ - CU_ASSERT(buf[i] == (i % 256)); + match = (buf[i] == (i % 256)); + CU_ASSERT(match); } else { - CU_ASSERT(buf[len - 1 - i] == (i % 256)); + match = (buf[len - 1 - i] == (i % 256)); + CU_ASSERT(match); } + + /* Limit the number of failed asserts to + one per packet */ + if (!match) + break; }
odp_packet_free(pkt);
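As an aside for readers skimming the log: the pattern this patch introduces can be reduced to the minimal sketch below, in plain C, with printf() standing in for CU_ASSERT; the helper and buffer are illustrative and not part of the patch.

#include <stdio.h>

/* Record the comparison result, report on it, and break after the first
 * mismatch so a corrupted buffer produces one failure instead of one
 * failure per byte. */
static void check_pattern(const unsigned char *buf, int len, int tail)
{
	int i;

	for (i = 0; i < len; i++) {
		int match;

		if (tail)
			match = (buf[i] == (i % 256));
		else
			match = (buf[len - 1 - i] == (i % 256));

		if (!match) {
			printf("mismatch at byte %d\n", i);
			break;	/* limit output to one failure per buffer */
		}
	}
}

int main(void)
{
	unsigned char buf[256];
	int i;

	for (i = 0; i < 256; i++)
		buf[i] = i;

	check_pattern(buf, 256, 1);	/* contents match: no output expected */
	return 0;
}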
commit 4132497ff67871d67c088c99b782cba10817bd28
Author: Petri Savolainen <petri.savolainen@nokia.com>
Date:   Tue Jan 10 11:19:08 2017 +0200
validation: packet: add line number to compare data checks
Added the caller's line number to identify which compare data call fails.
Signed-off-by: Petri Savolainen <petri.savolainen@nokia.com>
Reviewed-and-tested-by: Bill Fischofer <bill.fischofer@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/test/common_plat/validation/api/packet/packet.c b/test/common_plat/validation/api/packet/packet.c index cf11c01..a48b238 100644 --- a/test/common_plat/validation/api/packet/packet.c +++ b/test/common_plat/validation/api/packet/packet.c @@ -32,11 +32,19 @@ static struct udata_struct { "abcdefg", };
-static void _packet_compare_data(odp_packet_t pkt1, odp_packet_t pkt2) +#define packet_compare_offset(pkt1, off1, pkt2, off2, len) \ + _packet_compare_offset((pkt1), (off1), (pkt2), (off2), (len), __LINE__) + +#define packet_compare_data(pkt1, pkt2) \ + _packet_compare_data((pkt1), (pkt2), __LINE__) + +static void _packet_compare_data(odp_packet_t pkt1, odp_packet_t pkt2, + int line) { uint32_t len = odp_packet_len(pkt1); uint32_t offset = 0; uint32_t seglen1, seglen2, cmplen; + int ret;
CU_ASSERT_FATAL(len == odp_packet_len(pkt2));
@@ -47,7 +55,14 @@ static void _packet_compare_data(odp_packet_t pkt1, odp_packet_t pkt2) CU_ASSERT_PTR_NOT_NULL_FATAL(pkt1map); CU_ASSERT_PTR_NOT_NULL_FATAL(pkt2map); cmplen = seglen1 < seglen2 ? seglen1 : seglen2; - CU_ASSERT(!memcmp(pkt1map, pkt2map, cmplen)); + ret = memcmp(pkt1map, pkt2map, cmplen); + + if (ret) { + printf("\ncompare_data failed: line %i, offset %" + PRIu32 "\n", line, offset); + } + + CU_ASSERT(ret == 0);
offset += cmplen; len -= cmplen; @@ -422,7 +437,7 @@ void packet_test_event_conversion(void) tmp_pkt = odp_packet_from_event(ev); CU_ASSERT_FATAL(tmp_pkt != ODP_PACKET_INVALID); CU_ASSERT(tmp_pkt == pkt); - _packet_compare_data(tmp_pkt, pkt); + packet_compare_data(tmp_pkt, pkt); }
void packet_test_basic_metadata(void) @@ -1062,9 +1077,10 @@ static void _packet_compare_udata(odp_packet_t pkt1, odp_packet_t pkt2)
static void _packet_compare_offset(odp_packet_t pkt1, uint32_t off1, odp_packet_t pkt2, uint32_t off2, - uint32_t len) + uint32_t len, int line) { uint32_t seglen1, seglen2, cmplen; + int ret;
if (off1 + len > odp_packet_len(pkt1) || off2 + len > odp_packet_len(pkt2)) @@ -1079,7 +1095,15 @@ static void _packet_compare_offset(odp_packet_t pkt1, uint32_t off1, cmplen = seglen1 < seglen2 ? seglen1 : seglen2; if (len < cmplen) cmplen = len; - CU_ASSERT(!memcmp(pkt1map, pkt2map, cmplen)); + + ret = memcmp(pkt1map, pkt2map, cmplen); + + if (ret) { + printf("\ncompare_offset failed: line %i, off1 %" + PRIu32 ", off2 %" PRIu32 "\n", line, off1, off2); + } + + CU_ASSERT(ret == 0);
off1 += cmplen; off2 += cmplen; @@ -1102,7 +1126,7 @@ void packet_test_copy(void)
pkt = odp_packet_copy(test_packet, odp_packet_pool(test_packet)); CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID); - _packet_compare_data(pkt, test_packet); + packet_compare_data(pkt, test_packet); pool = odp_packet_pool(pkt); CU_ASSERT_FATAL(pool != ODP_POOL_INVALID); pkt_copy = odp_packet_copy(pkt, pool); @@ -1113,7 +1137,7 @@ void packet_test_copy(void) CU_ASSERT(odp_packet_len(pkt) == odp_packet_len(pkt_copy));
_packet_compare_inflags(pkt, pkt_copy); - _packet_compare_data(pkt, pkt_copy); + packet_compare_data(pkt, pkt_copy); CU_ASSERT(odp_packet_user_area_size(pkt) == odp_packet_user_area_size(test_packet)); _packet_compare_udata(pkt, pkt_copy); @@ -1122,7 +1146,7 @@ void packet_test_copy(void)
pkt = odp_packet_copy(test_packet, packet_pool_double_uarea); CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID); - _packet_compare_data(pkt, test_packet); + packet_compare_data(pkt, test_packet); pool = odp_packet_pool(pkt); CU_ASSERT_FATAL(pool != ODP_POOL_INVALID); pkt_copy = odp_packet_copy(pkt, pool); @@ -1133,7 +1157,7 @@ void packet_test_copy(void) CU_ASSERT(odp_packet_len(pkt) == odp_packet_len(pkt_copy));
_packet_compare_inflags(pkt, pkt_copy); - _packet_compare_data(pkt, pkt_copy); + packet_compare_data(pkt, pkt_copy); CU_ASSERT(odp_packet_user_area_size(pkt) == 2 * odp_packet_user_area_size(test_packet)); _packet_compare_udata(pkt, pkt_copy); @@ -1152,7 +1176,7 @@ void packet_test_copy(void) CU_ASSERT(odp_packet_data(pkt) != odp_packet_data(pkt_part)); CU_ASSERT(odp_packet_len(pkt) == odp_packet_len(pkt_part));
- _packet_compare_data(pkt, pkt_part); + packet_compare_data(pkt, pkt_part); odp_packet_free(pkt_part);
plen = odp_packet_len(pkt); @@ -1160,14 +1184,14 @@ void packet_test_copy(void) pkt_part = odp_packet_copy_part(pkt, i, plen / 4, pool); CU_ASSERT_FATAL(pkt_part != ODP_PACKET_INVALID); CU_ASSERT(odp_packet_len(pkt_part) == plen / 4); - _packet_compare_offset(pkt_part, 0, pkt, i, plen / 4); + packet_compare_offset(pkt_part, 0, pkt, i, plen / 4); odp_packet_free(pkt_part); }
/* Test copy and move apis */ CU_ASSERT(odp_packet_copy_data(pkt, 0, plen - plen / 8, plen / 8) == 0); - _packet_compare_offset(pkt, 0, pkt, plen - plen / 8, plen / 8); - _packet_compare_offset(pkt, 0, test_packet, plen - plen / 8, plen / 8); + packet_compare_offset(pkt, 0, pkt, plen - plen / 8, plen / 8); + packet_compare_offset(pkt, 0, test_packet, plen - plen / 8, plen / 8);
/* Test segment crossing if we support segments */ pkt_data = odp_packet_offset(pkt, 0, &seg_len, NULL); @@ -1183,7 +1207,7 @@ void packet_test_copy(void)
pkt_part = odp_packet_copy_part(pkt, src_offset, 20, pool); CU_ASSERT(odp_packet_move_data(pkt, dst_offset, src_offset, 20) == 0); - _packet_compare_offset(pkt, dst_offset, pkt_part, 0, 20); + packet_compare_offset(pkt, dst_offset, pkt_part, 0, 20);
odp_packet_free(pkt_part); odp_packet_free(pkt); @@ -1232,7 +1256,7 @@ void packet_test_copydata(void) 1) == 0); }
- _packet_compare_offset(pkt, 0, test_packet, 0, pkt_len / 2); + packet_compare_offset(pkt, 0, test_packet, 0, pkt_len / 2); odp_packet_free(pkt);
pkt = odp_packet_alloc(odp_packet_pool(segmented_test_packet), @@ -1242,9 +1266,9 @@ void packet_test_copydata(void) CU_ASSERT(odp_packet_copy_from_pkt(pkt, 0, segmented_test_packet, odp_packet_len(pkt) / 4, odp_packet_len(pkt)) == 0); - _packet_compare_offset(pkt, 0, segmented_test_packet, - odp_packet_len(pkt) / 4, - odp_packet_len(pkt)); + packet_compare_offset(pkt, 0, segmented_test_packet, + odp_packet_len(pkt) / 4, + odp_packet_len(pkt)); odp_packet_free(pkt); }
@@ -1266,14 +1290,14 @@ void packet_test_concatsplit(void)
CU_ASSERT(odp_packet_concat(&pkt, pkt2) >= 0); CU_ASSERT(odp_packet_len(pkt) == pkt_len * 2); - _packet_compare_offset(pkt, 0, pkt, pkt_len, pkt_len); + packet_compare_offset(pkt, 0, pkt, pkt_len, pkt_len);
CU_ASSERT(odp_packet_split(&pkt, pkt_len, &pkt2) == 0); CU_ASSERT(pkt != pkt2); CU_ASSERT(odp_packet_data(pkt) != odp_packet_data(pkt2)); CU_ASSERT(odp_packet_len(pkt) == odp_packet_len(pkt2)); - _packet_compare_data(pkt, pkt2); - _packet_compare_data(pkt, test_packet); + packet_compare_data(pkt, pkt2); + packet_compare_data(pkt, test_packet);
odp_packet_free(pkt); odp_packet_free(pkt2); @@ -1283,26 +1307,26 @@ void packet_test_concatsplit(void) CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID); pkt_len = odp_packet_len(pkt);
- _packet_compare_data(pkt, segmented_test_packet); + packet_compare_data(pkt, segmented_test_packet); CU_ASSERT(odp_packet_split(&pkt, pkt_len / 2, &splits[0]) == 0); CU_ASSERT(pkt != splits[0]); CU_ASSERT(odp_packet_data(pkt) != odp_packet_data(splits[0])); CU_ASSERT(odp_packet_len(pkt) == pkt_len / 2); CU_ASSERT(odp_packet_len(pkt) + odp_packet_len(splits[0]) == pkt_len);
- _packet_compare_offset(pkt, 0, segmented_test_packet, 0, pkt_len / 2); - _packet_compare_offset(splits[0], 0, segmented_test_packet, - pkt_len / 2, odp_packet_len(splits[0])); + packet_compare_offset(pkt, 0, segmented_test_packet, 0, pkt_len / 2); + packet_compare_offset(splits[0], 0, segmented_test_packet, + pkt_len / 2, odp_packet_len(splits[0]));
CU_ASSERT(odp_packet_concat(&pkt, splits[0]) >= 0); - _packet_compare_offset(pkt, 0, segmented_test_packet, 0, pkt_len / 2); - _packet_compare_offset(pkt, pkt_len / 2, segmented_test_packet, - pkt_len / 2, pkt_len / 2); - _packet_compare_offset(pkt, 0, segmented_test_packet, 0, - pkt_len); + packet_compare_offset(pkt, 0, segmented_test_packet, 0, pkt_len / 2); + packet_compare_offset(pkt, pkt_len / 2, segmented_test_packet, + pkt_len / 2, pkt_len / 2); + packet_compare_offset(pkt, 0, segmented_test_packet, 0, + pkt_len);
CU_ASSERT(odp_packet_len(pkt) == odp_packet_len(segmented_test_packet)); - _packet_compare_data(pkt, segmented_test_packet); + packet_compare_data(pkt, segmented_test_packet);
CU_ASSERT(odp_packet_split(&pkt, pkt_len / 2, &splits[0]) == 0); CU_ASSERT(odp_packet_split(&pkt, pkt_len / 4, &splits[1]) == 0); @@ -1316,7 +1340,7 @@ void packet_test_concatsplit(void) CU_ASSERT(odp_packet_concat(&pkt, splits[0]) >= 0);
CU_ASSERT(odp_packet_len(pkt) == odp_packet_len(segmented_test_packet)); - _packet_compare_data(pkt, segmented_test_packet); + packet_compare_data(pkt, segmented_test_packet);
odp_packet_free(pkt); } @@ -1803,9 +1827,9 @@ void packet_test_align(void) /* Alignment doesn't change packet length or contents */ CU_ASSERT(odp_packet_len(pkt) == pkt_len); (void)odp_packet_offset(pkt, offset, &aligned_seglen, NULL); - _packet_compare_offset(pkt, offset, - segmented_test_packet, offset, - aligned_seglen); + packet_compare_offset(pkt, offset, + segmented_test_packet, offset, + aligned_seglen);
/* Verify requested contiguous addressabilty */ CU_ASSERT(aligned_seglen >= seg_len + 2); @@ -1825,8 +1849,8 @@ void packet_test_align(void) aligned_data = odp_packet_offset(pkt, offset, &aligned_seglen, NULL);
CU_ASSERT(odp_packet_len(pkt) == pkt_len); - _packet_compare_offset(pkt, offset, segmented_test_packet, offset, - aligned_seglen); + packet_compare_offset(pkt, offset, segmented_test_packet, offset, + aligned_seglen); CU_ASSERT((uintptr_t)aligned_data % max_align == 0);
odp_packet_free(pkt);
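The wrapper-macro technique used here is generic; a minimal sketch outside CUnit and ODP (names are illustrative) looks like this: the macro captures the caller's __LINE__ and forwards it to the helper, so the failure report identifies the call site rather than the helper itself.

#include <stdio.h>
#include <string.h>

/* Macro forwards the caller's line number to the helper. */
#define compare_data(a, b, len) _compare_data((a), (b), (len), __LINE__)

static int _compare_data(const void *a, const void *b, size_t len, int line)
{
	int ret = memcmp(a, b, len);

	if (ret)
		printf("compare_data failed: line %d\n", line);

	return ret;
}

int main(void)
{
	const char x[] = "abc";
	const char y[] = "abd";

	compare_data(x, y, 3);	/* reports the line number of this call */
	return 0;
}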
commit cd83d5d1114a40abf554c59bd92c8a8199b10c7a
Author: Petri Savolainen <petri.savolainen@nokia.com>
Date:   Tue Jan 10 11:19:07 2017 +0200
linux-gen: packet: replace base_len with constant
Only packets used the base_len field of the buffer header. Replace the struct field with a constant. This improves performance, since the constant no longer needs to be read from memory, and it makes the buffer header smaller.
Signed-off-by: Petri Savolainen <petri.savolainen@nokia.com>
Reviewed-and-tested-by: Bill Fischofer <bill.fischofer@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h index 326c025..076abe9 100644 --- a/platform/linux-generic/include/odp_buffer_internal.h +++ b/platform/linux-generic/include/odp_buffer_internal.h @@ -56,7 +56,6 @@ struct odp_buffer_hdr_t { /* Initial buffer data pointer and length */ uint8_t *base_data; uint8_t *buf_end; - uint32_t base_len;
/* Max data size */ uint32_t size; @@ -64,9 +63,6 @@ struct odp_buffer_hdr_t { /* Pool type */ int8_t type;
- /* Event type. Maybe different than pool type (crypto compl event) */ - int8_t event_type; - /* Burst counts */ uint8_t burst_num; uint8_t burst_first; @@ -97,6 +93,9 @@ struct odp_buffer_hdr_t { /* User area size */ uint32_t uarea_size;
+ /* Event type. Maybe different than pool type (crypto compl event) */ + int8_t event_type; + /* Burst table */ struct odp_buffer_hdr_t *burst[BUFFER_BURST_SIZE];
diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index d3f521f..f632a51 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -20,6 +20,9 @@ #include <stdio.h> #include <inttypes.h>
+/* Initial packet segment data length */ +#define BASE_LEN CONFIG_PACKET_MAX_SEG_LEN + static inline odp_packet_t packet_handle(odp_packet_hdr_t *pkt_hdr) { return (odp_packet_t)pkt_hdr->buf_hdr.handle.handle; @@ -260,7 +263,7 @@ static inline void init_segments(odp_packet_hdr_t *pkt_hdr[], int num) hdr = pkt_hdr[0];
hdr->buf_hdr.seg[0].data = hdr->buf_hdr.base_data; - hdr->buf_hdr.seg[0].len = hdr->buf_hdr.base_len; + hdr->buf_hdr.seg[0].len = BASE_LEN;
/* Link segments */ if (CONFIG_PACKET_MAX_SEGS != 1) { @@ -273,7 +276,7 @@ static inline void init_segments(odp_packet_hdr_t *pkt_hdr[], int num) buf_hdr = &pkt_hdr[i]->buf_hdr; hdr->buf_hdr.seg[i].hdr = buf_hdr; hdr->buf_hdr.seg[i].data = buf_hdr->base_data; - hdr->buf_hdr.seg[i].len = buf_hdr->base_len; + hdr->buf_hdr.seg[i].len = BASE_LEN; } } } @@ -709,7 +712,7 @@ static inline uint32_t pack_seg_tail(odp_packet_hdr_t *pkt_hdr, int seg) odp_buffer_hdr_t *hdr = pkt_hdr->buf_hdr.seg[seg].hdr; uint32_t len = pkt_hdr->buf_hdr.seg[seg].len; uint8_t *src = pkt_hdr->buf_hdr.seg[seg].data; - uint8_t *dst = hdr->base_data + hdr->base_len - len; + uint8_t *dst = hdr->base_data + BASE_LEN - len;
if (dst != src) { memmove(dst, src, len); @@ -777,19 +780,17 @@ static inline uint32_t fill_seg_tail(odp_packet_hdr_t *pkt_hdr, int dst_seg, static inline int move_data_to_head(odp_packet_hdr_t *pkt_hdr, int segs) { int dst_seg, src_seg; - uint32_t base_len, len, free_len; + uint32_t len, free_len; uint32_t moved = 0;
- base_len = pkt_hdr->buf_hdr.base_len; - for (dst_seg = 0; dst_seg < segs; dst_seg++) { len = pack_seg_head(pkt_hdr, dst_seg); moved += len;
- if (len == base_len) + if (len == BASE_LEN) continue;
- free_len = base_len - len; + free_len = BASE_LEN - len;
for (src_seg = dst_seg + 1; src_seg < segs; src_seg++) { len = fill_seg_head(pkt_hdr, dst_seg, src_seg, @@ -816,19 +817,17 @@ static inline int move_data_to_head(odp_packet_hdr_t *pkt_hdr, int segs) static inline int move_data_to_tail(odp_packet_hdr_t *pkt_hdr, int segs) { int dst_seg, src_seg; - uint32_t base_len, len, free_len; + uint32_t len, free_len; uint32_t moved = 0;
- base_len = pkt_hdr->buf_hdr.base_len; - for (dst_seg = segs - 1; dst_seg >= 0; dst_seg--) { len = pack_seg_tail(pkt_hdr, dst_seg); moved += len;
- if (len == base_len) + if (len == BASE_LEN) continue;
- free_len = base_len - len; + free_len = BASE_LEN - len;
for (src_seg = dst_seg - 1; src_seg >= 0; src_seg--) { len = fill_seg_tail(pkt_hdr, dst_seg, src_seg, @@ -857,12 +856,11 @@ static inline void reset_seg(odp_packet_hdr_t *pkt_hdr, int first, int num) odp_buffer_hdr_t *hdr; void *base; int i; - uint32_t base_len = pkt_hdr->buf_hdr.base_len;
for (i = first; i < first + num; i++) { hdr = pkt_hdr->buf_hdr.seg[i].hdr; base = hdr->base_data; - pkt_hdr->buf_hdr.seg[i].len = base_len; + pkt_hdr->buf_hdr.seg[i].len = BASE_LEN; pkt_hdr->buf_hdr.seg[i].data = base; } } @@ -891,7 +889,6 @@ int odp_packet_extend_head(odp_packet_t *pkt, uint32_t len, odp_packet_hdr_t *new_hdr; int new_segs = 0; int free_segs = 0; - uint32_t base_len = pkt_hdr->buf_hdr.base_len; uint32_t offset;
num = num_segments(frame_len + len); @@ -932,7 +929,7 @@ int odp_packet_extend_head(odp_packet_t *pkt, uint32_t len, }
frame_len += len; - offset = (segs * base_len) - frame_len; + offset = (segs * BASE_LEN) - frame_len;
pkt_hdr->buf_hdr.seg[0].data += offset; pkt_hdr->buf_hdr.seg[0].len -= offset; @@ -1058,7 +1055,6 @@ int odp_packet_extend_tail(odp_packet_t *pkt, uint32_t len, odp_packet_hdr_t *new_hdr; int new_segs = 0; int free_segs = 0; - uint32_t base_len = pkt_hdr->buf_hdr.base_len; uint32_t offset;
num = num_segments(frame_len + len); @@ -1091,7 +1087,7 @@ int odp_packet_extend_tail(odp_packet_t *pkt, uint32_t len, }
frame_len += len; - offset = (segs * base_len) - frame_len; + offset = (segs * BASE_LEN) - frame_len;
pkt_hdr->buf_hdr.seg[segs - 1].len -= offset;
diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index d288bd6..090a55f 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -272,7 +272,6 @@ static void init_buffers(pool_t *pool)
/* Store base values for fast init */ buf_hdr->base_data = buf_hdr->seg[0].data; - buf_hdr->base_len = buf_hdr->seg[0].len; buf_hdr->buf_end = &data[offset + pool->data_size + pool->tailroom];
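A hypothetical, stripped-down illustration of the same idea, with names that are not ODP's: reading a compile-time constant instead of a per-header field lets the compiler fold the value and shrinks the header.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define BASE_LEN 2048u	/* plays the role of CONFIG_PACKET_MAX_SEG_LEN */

struct seg_hdr {
	uint8_t *base_data;
	/* uint32_t base_len;   removed: always equal to BASE_LEN */
	uint32_t size;
};

static uint32_t free_tail(const struct seg_hdr *hdr, uint32_t used_len)
{
	(void)hdr;			/* header no longer consulted */
	return BASE_LEN - used_len;	/* was: hdr->base_len - used_len */
}

int main(void)
{
	struct seg_hdr hdr = { .base_data = NULL, .size = BASE_LEN };

	printf("free tail room: %" PRIu32 " bytes\n", free_tail(&hdr, 100));
	return 0;
}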
commit 112209c3fc672aee4a3074aa784aefe00f32d250
Author: Petri Savolainen <petri.savolainen@nokia.com>
Date:   Tue Jan 10 11:19:06 2017 +0200
linux-gen: packet: optimize alloc and init
Convert buffer and pool handles to pointers only once and pass pointers between functions (instead of handles).
Signed-off-by: Petri Savolainen <petri.savolainen@nokia.com>
Reviewed-and-tested-by: Bill Fischofer <bill.fischofer@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/platform/linux-generic/include/odp_pool_internal.h b/platform/linux-generic/include/odp_pool_internal.h index 4915bda..b0805ac 100644 --- a/platform/linux-generic/include/odp_pool_internal.h +++ b/platform/linux-generic/include/odp_pool_internal.h @@ -91,17 +91,15 @@ static inline pool_t *pool_entry_from_hdl(odp_pool_t pool_hdl) return &pool_tbl->pool[_odp_typeval(pool_hdl)]; }
-static inline odp_buffer_hdr_t *buf_hdl_to_hdr(odp_buffer_t buf) +static inline odp_buffer_hdr_t *pool_buf_hdl_to_hdr(pool_t *pool, + odp_buffer_t buf) { odp_buffer_bits_t handle; - uint32_t pool_id, index, block_offset; - pool_t *pool; + uint32_t index, block_offset; odp_buffer_hdr_t *buf_hdr;
handle.handle = buf; - pool_id = handle.pool_id; index = handle.index; - pool = pool_entry(pool_id); block_offset = index * pool->block_size;
/* clang requires cast to uintptr_t */ @@ -110,6 +108,19 @@ static inline odp_buffer_hdr_t *buf_hdl_to_hdr(odp_buffer_t buf) return buf_hdr; }
+static inline odp_buffer_hdr_t *buf_hdl_to_hdr(odp_buffer_t buf) +{ + odp_buffer_bits_t handle; + uint32_t pool_id; + pool_t *pool; + + handle.handle = buf; + pool_id = handle.pool_id; + pool = pool_entry(pool_id); + + return pool_buf_hdl_to_hdr(pool, buf); +} + int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], odp_buffer_hdr_t *buf_hdr[], int num); void buffer_free_multi(const odp_buffer_t buf[], int num_free); diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index 4397889..d3f521f 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -251,37 +251,32 @@ static inline void packet_init(odp_packet_hdr_t *pkt_hdr, uint32_t len, pkt_hdr->input = ODP_PKTIO_INVALID; }
-static inline odp_packet_hdr_t *init_segments(odp_buffer_t buf[], int num) +static inline void init_segments(odp_packet_hdr_t *pkt_hdr[], int num) { - odp_packet_hdr_t *pkt_hdr; + odp_packet_hdr_t *hdr; int i;
- /* First buffer is the packet descriptor */ - pkt_hdr = odp_packet_hdr((odp_packet_t)buf[0]); + /* First segment is the packet descriptor */ + hdr = pkt_hdr[0];
- pkt_hdr->buf_hdr.seg[0].data = pkt_hdr->buf_hdr.base_data; - pkt_hdr->buf_hdr.seg[0].len = pkt_hdr->buf_hdr.base_len; + hdr->buf_hdr.seg[0].data = hdr->buf_hdr.base_data; + hdr->buf_hdr.seg[0].len = hdr->buf_hdr.base_len;
/* Link segments */ - if (odp_unlikely(CONFIG_PACKET_MAX_SEGS != 1)) { - pkt_hdr->buf_hdr.segcount = num; + if (CONFIG_PACKET_MAX_SEGS != 1) { + hdr->buf_hdr.segcount = num;
if (odp_unlikely(num > 1)) { for (i = 1; i < num; i++) { - odp_packet_hdr_t *hdr; - odp_buffer_hdr_t *b_hdr; + odp_buffer_hdr_t *buf_hdr;
- hdr = odp_packet_hdr((odp_packet_t)buf[i]); - b_hdr = &hdr->buf_hdr; - - pkt_hdr->buf_hdr.seg[i].hdr = hdr; - pkt_hdr->buf_hdr.seg[i].data = b_hdr->base_data; - pkt_hdr->buf_hdr.seg[i].len = b_hdr->base_len; + buf_hdr = &pkt_hdr[i]->buf_hdr; + hdr->buf_hdr.seg[i].hdr = buf_hdr; + hdr->buf_hdr.seg[i].data = buf_hdr->base_data; + hdr->buf_hdr.seg[i].len = buf_hdr->base_len; } } } - - return pkt_hdr; }
/* Calculate the number of segments */ @@ -338,9 +333,10 @@ static inline void copy_num_segs(odp_packet_hdr_t *to, odp_packet_hdr_t *from, static inline odp_packet_hdr_t *alloc_segments(pool_t *pool, int num) { odp_buffer_t buf[num]; + odp_packet_hdr_t *pkt_hdr[num]; int ret;
- ret = buffer_alloc_multi(pool, buf, NULL, num); + ret = buffer_alloc_multi(pool, buf, (odp_buffer_hdr_t **)pkt_hdr, num); if (odp_unlikely(ret != num)) { if (ret > 0) buffer_free_multi(buf, ret); @@ -348,7 +344,9 @@ static inline odp_packet_hdr_t *alloc_segments(pool_t *pool, int num) return NULL; }
- return init_segments(buf, num); + init_segments(pkt_hdr, num); + + return pkt_hdr[0]; }
static inline odp_packet_hdr_t *add_segments(odp_packet_hdr_t *pkt_hdr, @@ -461,8 +459,10 @@ static inline int packet_alloc(pool_t *pool, uint32_t len, int max_pkt, int num = max_pkt; int max_buf = max_pkt * num_seg; odp_buffer_t buf[max_buf]; + odp_packet_hdr_t *pkt_hdr[max_buf];
- num_buf = buffer_alloc_multi(pool, buf, NULL, max_buf); + num_buf = buffer_alloc_multi(pool, buf, (odp_buffer_hdr_t **)pkt_hdr, + max_buf);
/* Failed to allocate all segments */ if (odp_unlikely(num_buf != max_buf)) { @@ -479,13 +479,14 @@ static inline int packet_alloc(pool_t *pool, uint32_t len, int max_pkt, }
for (i = 0; i < num; i++) { - odp_packet_hdr_t *pkt_hdr; + odp_packet_hdr_t *hdr;
/* First buffer is the packet descriptor */ - pkt[i] = (odp_packet_t)buf[i * num_seg]; - pkt_hdr = init_segments(&buf[i * num_seg], num_seg); + pkt[i] = (odp_packet_t)buf[i * num_seg]; + hdr = pkt_hdr[i * num_seg]; + init_segments(&pkt_hdr[i * num_seg], num_seg);
- packet_init(pkt_hdr, len, parse); + packet_init(hdr, len, parse); }
return num; diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index 626e277..d288bd6 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -606,6 +606,7 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], uint32_t mask, i; pool_cache_t *cache; uint32_t cache_num, num_ch, num_deq, burst; + odp_buffer_hdr_t *hdr;
cache = local.cache[pool->pool_idx];
@@ -624,9 +625,13 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], }
/* Get buffers from the cache */ - for (i = 0; i < num_ch; i++) + for (i = 0; i < num_ch; i++) { buf[i] = cache->buf[cache_num - num_ch + i];
+ if (odp_likely(buf_hdr != NULL)) + buf_hdr[i] = pool_buf_hdl_to_hdr(pool, buf[i]); + } + /* If needed, get more from the global pool */ if (odp_unlikely(num_deq)) { /* Temporary copy needed since odp_buffer_t is uintptr_t @@ -647,13 +652,11 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], uint32_t idx = num_ch + i;
buf[idx] = (odp_buffer_t)(uintptr_t)data[i]; + hdr = pool_buf_hdl_to_hdr(pool, buf[idx]); + odp_prefetch(hdr);
- if (buf_hdr) { - buf_hdr[idx] = buf_hdl_to_hdr(buf[idx]); - /* Prefetch newly allocated and soon to be used - * buffer headers. */ - odp_prefetch(buf_hdr[idx]); - } + if (odp_likely(buf_hdr != NULL)) + buf_hdr[idx] = hdr; }
/* Cache extra buffers. Cache is currently empty. */ @@ -666,11 +669,6 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], cache->num = cache_num - num_ch; }
- if (buf_hdr) { - for (i = 0; i < num_ch; i++) - buf_hdr[i] = buf_hdl_to_hdr(buf[i]); - } - return num_ch + num_deq; }
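A hypothetical sketch of the handle-to-pointer change, outside ODP's internals (types and names are illustrative): the handle is resolved once, and the resulting pointer is what gets passed between internal functions.

#include <stdint.h>
#include <stdio.h>

typedef uintptr_t buf_handle_t;

struct buf_hdr {
	uint32_t len;
};

static struct buf_hdr *handle_to_hdr(buf_handle_t handle)
{
	/* In the real code this is an index/offset computation into the pool;
	 * here the handle simply encodes the address. */
	return (struct buf_hdr *)handle;
}

static void init_hdr(struct buf_hdr *hdr, uint32_t len)
{
	hdr->len = len;		/* works on the already-resolved pointer */
}

int main(void)
{
	struct buf_hdr storage = { 0 };
	buf_handle_t handle = (buf_handle_t)&storage;
	struct buf_hdr *hdr = handle_to_hdr(handle);	/* converted once */

	init_hdr(hdr, 64);				/* pointer passed on */
	printf("len %u\n", (unsigned)storage.len);
	return 0;
}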
commit 1837825e219b6bd62a3ecda2ed68873a958b8171
Author: Petri Savolainen <petri.savolainen@nokia.com>
Date:   Tue Jan 10 11:19:05 2017 +0200
linux-gen: packet: clean and pack packet header struct
Optimized buffer and packet header struct cache usage by:
* removing unused fields
* packing the remaining fields
* arranging fields for more optimal cache line usage
Signed-off-by: Petri Savolainen <petri.savolainen@nokia.com>
Reviewed-and-tested-by: Bill Fischofer <bill.fischofer@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h index 6149290..326c025 100644 --- a/platform/linux-generic/include/odp_buffer_internal.h +++ b/platform/linux-generic/include/odp_buffer_internal.h @@ -50,59 +50,73 @@ typedef union odp_buffer_bits_t {
/* Common buffer header */ struct odp_buffer_hdr_t { - struct odp_buffer_hdr_t *next; /* next buf in a list--keep 1st */ - union { /* Multi-use secondary link */ - struct odp_buffer_hdr_t *prev; - struct odp_buffer_hdr_t *link; - }; - odp_buffer_bits_t handle; /* handle */ + /* Handle union */ + odp_buffer_bits_t handle;
- int burst_num; - int burst_first; - struct odp_buffer_hdr_t *burst[BUFFER_BURST_SIZE]; + /* Initial buffer data pointer and length */ + uint8_t *base_data; + uint8_t *buf_end; + uint32_t base_len; + + /* Max data size */ + uint32_t size;
+ /* Pool type */ + int8_t type; + + /* Event type. Maybe different than pool type (crypto compl event) */ + int8_t event_type; + + /* Burst counts */ + uint8_t burst_num; + uint8_t burst_first; + + /* Segment count */ + uint8_t segcount; + + /* Segments */ struct { void *hdr; uint8_t *data; - /* Used only if _ODP_PKTIO_IPC is set. - * ipc mapped process can not walk over pointers, - * offset has to be used */ - uint64_t ipc_data_offset; uint32_t len; } seg[CONFIG_PACKET_MAX_SEGS];
- /* max data size */ - uint32_t size; - - /* Initial buffer data pointer and length */ - uint8_t *base_data; - uint32_t base_len; - uint8_t *buf_end; - - union { - uint32_t all; - struct { - uint32_t hdrdata:1; /* Data is in buffer hdr */ - }; - } flags; + /* Next buf in a list */ + struct odp_buffer_hdr_t *next;
- int8_t type; /* buffer type */ - odp_event_type_t event_type; /* for reuse as event */ - odp_pool_t pool_hdl; /* buffer pool handle */ + /* User context pointer or u64 */ union { - uint64_t buf_u64; /* user u64 */ - void *buf_ctx; /* user context */ - const void *buf_cctx; /* const alias for ctx */ + uint64_t buf_u64; + void *buf_ctx; + const void *buf_cctx; /* const alias for ctx */ }; - void *uarea_addr; /* user area address */ - uint32_t uarea_size; /* size of user area */ - uint32_t segcount; /* segment count */ - uint32_t segsize; /* segment size */ + + /* User area pointer */ + void *uarea_addr; + + /* User area size */ + uint32_t uarea_size; + + /* Burst table */ + struct odp_buffer_hdr_t *burst[BUFFER_BURST_SIZE]; + + /* Used only if _ODP_PKTIO_IPC is set. + * ipc mapped process can not walk over pointers, + * offset has to be used */ + uint64_t ipc_data_offset; + + /* Pool handle */ + odp_pool_t pool_hdl;
/* Data or next header */ uint8_t data[0]; };
+ODP_STATIC_ASSERT(CONFIG_PACKET_MAX_SEGS < 256, + "CONFIG_PACKET_MAX_SEGS_TOO_LARGE"); + +ODP_STATIC_ASSERT(BUFFER_BURST_SIZE < 256, "BUFFER_BURST_SIZE_TOO_LARGE"); + /* Forward declarations */ int seg_alloc_tail(odp_buffer_hdr_t *buf_hdr, int segcount); void seg_free_tail(odp_buffer_hdr_t *buf_hdr, int segcount); diff --git a/platform/linux-generic/include/odp_packet_internal.h b/platform/linux-generic/include/odp_packet_internal.h index d09231e..e6e9d74 100644 --- a/platform/linux-generic/include/odp_packet_internal.h +++ b/platform/linux-generic/include/odp_packet_internal.h @@ -114,13 +114,14 @@ typedef union { uint32_t all;
struct { + /** adjustment for traffic mgr */ + uint32_t shaper_len_adj:8; + /* Bitfield flags for each output option */ uint32_t l3_chksum_set:1; /**< L3 chksum bit is valid */ uint32_t l3_chksum:1; /**< L3 chksum override */ uint32_t l4_chksum_set:1; /**< L3 chksum bit is valid */ uint32_t l4_chksum:1; /**< L4 chksum override */ - - int8_t shaper_len_adj; /**< adjustment for traffic mgr */ }; } output_flags_t;
@@ -154,9 +155,9 @@ typedef struct { uint32_t l3_len; /**< Layer 3 length */ uint32_t l4_len; /**< Layer 4 length */
- layer_t parsed_layers; /**< Highest parsed protocol stack layer */ uint16_t ethtype; /**< EtherType */ - uint8_t ip_proto; /**< IP protocol */ + uint8_t ip_proto; /**< IP protocol */ + uint8_t parsed_layers; /**< Highest parsed protocol stack layer */
} packet_parser_t;
@@ -171,22 +172,33 @@ typedef struct { /* common buffer header */ odp_buffer_hdr_t buf_hdr;
- /* Following members are initialized by packet_init() */ + /* + * Following members are initialized by packet_init() + */ + packet_parser_t p;
+ odp_pktio_t input; + uint32_t frame_len; uint32_t headroom; uint32_t tailroom;
- odp_pktio_t input; + /* + * Members below are not initialized by packet_init() + */ + + /* Flow hash value */ + uint32_t flow_hash;
- /* Members below are not initialized by packet_init() */ - odp_queue_t dst_queue; /**< Classifier destination queue */ + /* Timestamp value */ + odp_time_t timestamp;
- uint32_t flow_hash; /**< Flow hash value */ - odp_time_t timestamp; /**< Timestamp value */ + /* Classifier destination queue */ + odp_queue_t dst_queue;
- odp_crypto_generic_op_result_t op_result; /**< Result for crypto */ + /* Result for crypto */ + odp_crypto_generic_op_result_t op_result;
/* Packet data storage */ uint8_t data[0]; diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index 58b6f32..4397889 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -1506,7 +1506,7 @@ int odp_packet_align(odp_packet_t *pkt, uint32_t offset, uint32_t len, return 0; shift = align - misalign; } else { - if (len > pkt_hdr->buf_hdr.segsize) + if (len > pkt_hdr->buf_hdr.size) return -1; shift = len - seglen; uaddr -= shift; diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index 932efe3..626e277 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -264,7 +264,6 @@ static void init_buffers(pool_t *pool) /* Show user requested size through API */ buf_hdr->uarea_size = pool->params.pkt.uarea_size; buf_hdr->segcount = 1; - buf_hdr->segsize = seg_size;
/* Pointer to data start (of the first segment) */ buf_hdr->seg[0].hdr = buf_hdr; diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c index d9cb9f3..aafe567 100644 --- a/platform/linux-generic/odp_queue.c +++ b/platform/linux-generic/odp_queue.c @@ -500,9 +500,6 @@ int odp_queue_enq(odp_queue_t handle, odp_event_t ev) queue = queue_to_qentry(handle); buf_hdr = buf_hdl_to_hdr(odp_buffer_from_event(ev));
- /* No chains via this entry */ - buf_hdr->link = NULL; - return queue->s.enqueue(queue, buf_hdr); }
diff --git a/platform/linux-generic/pktio/ipc.c b/platform/linux-generic/pktio/ipc.c index c9df043..377f20e 100644 --- a/platform/linux-generic/pktio/ipc.c +++ b/platform/linux-generic/pktio/ipc.c @@ -459,7 +459,7 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry, if (odp_unlikely(pool == ODP_POOL_INVALID)) ODP_ABORT("invalid pool");
- data_pool_off = phdr->buf_hdr.seg[0].ipc_data_offset; + data_pool_off = phdr->buf_hdr.ipc_data_offset;
pkt = odp_packet_alloc(pool, phdr->frame_len); if (odp_unlikely(pkt == ODP_PACKET_INVALID)) { @@ -586,12 +586,12 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry, (uint8_t *)odp_shm_addr(pool->shm);
/* compile all function code even if ipc disabled with config */ - pkt_hdr->buf_hdr.seg[0].ipc_data_offset = data_pool_off; + pkt_hdr->buf_hdr.ipc_data_offset = data_pool_off; IPC_ODP_DBG("%d/%d send packet %llx, pool %llx," "phdr = %p, offset %x\n", i, len, odp_packet_to_u64(pkt), odp_pool_to_u64(pool_hdl), - pkt_hdr, pkt_hdr->buf_hdr.seg[0].ipc_data_offset); + pkt_hdr, pkt_hdr->buf_hdr.ipc_data_offset); }
/* Put packets to ring to be processed by other process. */
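A minimal sketch of the compile-time guard idea behind the ODP_STATIC_ASSERT additions above, using C11's static_assert and illustrative names: narrowing count fields to uint8_t is only safe while the configured maxima stay below 256, and the build should fail otherwise.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_SEGS   6	/* stands in for CONFIG_PACKET_MAX_SEGS */
#define BURST_SIZE 16	/* stands in for BUFFER_BURST_SIZE */

struct packed_hdr {
	int8_t  type;
	int8_t  event_type;
	uint8_t burst_num;
	uint8_t burst_first;
	uint8_t segcount;	/* was a wider type before packing */
};

/* Tie the configured maxima to the narrowed field widths at compile time. */
static_assert(MAX_SEGS < 256, "MAX_SEGS too large for uint8_t segcount");
static_assert(BURST_SIZE < 256, "BURST_SIZE too large for uint8_t burst counts");

int main(void)
{
	printf("packed header: %zu bytes\n", sizeof(struct packed_hdr));
	return 0;
}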
commit fda9a9e4887eff3f0526c7bcef5e29ff511cd4d8
Author: Bill Fischofer <bill.fischofer@linaro.org>
Date:   Tue Jan 10 09:59:40 2017 -0600
linux-generic: pool: defer ring allocation until pool creation
To avoid excessive memory overhead for pools, defer the allocation of the pool ring until odp_pool_create() is called. This keeps pool memory overhead proportional to the number of pools actually in use rather than the architected maximum number of pools.
This patch addresses Bug https://bugs.linaro.org/show_bug.cgi?id=2765
Signed-off-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Petri Savolainen <petri.savolainen@nokia.com>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/platform/linux-generic/include/odp_pool_internal.h b/platform/linux-generic/include/odp_pool_internal.h index 5d7b817..4915bda 100644 --- a/platform/linux-generic/include/odp_pool_internal.h +++ b/platform/linux-generic/include/odp_pool_internal.h @@ -69,7 +69,8 @@ typedef struct pool_t {
pool_cache_t local_cache[ODP_THREAD_COUNT_MAX];
- pool_ring_t ring; + odp_shm_t ring_shm; + pool_ring_t *ring;
} pool_t;
diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index cae2759..932efe3 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -143,7 +143,7 @@ static void flush_cache(pool_cache_t *cache, pool_t *pool) uint32_t mask; uint32_t cache_num, i, data;
- ring = &pool->ring.hdr; + ring = &pool->ring->hdr; mask = pool->ring_mask; cache_num = cache->num;
@@ -172,6 +172,7 @@ static pool_t *reserve_pool(void) { int i; pool_t *pool; + char ring_name[ODP_POOL_NAME_LEN];
for (i = 0; i < ODP_CONFIG_POOLS; i++) { pool = pool_entry(i); @@ -180,6 +181,19 @@ static pool_t *reserve_pool(void) if (pool->reserved == 0) { pool->reserved = 1; UNLOCK(&pool->lock); + sprintf(ring_name, "pool_ring_%d", i); + pool->ring_shm = + odp_shm_reserve(ring_name, + sizeof(pool_ring_t), + ODP_CACHE_LINE_SIZE, 0); + if (odp_unlikely(pool->ring_shm == ODP_SHM_INVALID)) { + ODP_ERR("Unable to alloc pool ring %d\n", i); + LOCK(&pool->lock); + pool->reserved = 0; + UNLOCK(&pool->lock); + break; + } + pool->ring = odp_shm_addr(pool->ring_shm); return pool; } UNLOCK(&pool->lock); @@ -214,7 +228,7 @@ static void init_buffers(pool_t *pool) int type; uint32_t seg_size;
- ring = &pool->ring.hdr; + ring = &pool->ring->hdr; mask = pool->ring_mask; type = pool->params.type;
@@ -411,7 +425,7 @@ static odp_pool_t pool_create(const char *name, odp_pool_param_t *params, pool->uarea_base_addr = odp_shm_addr(pool->uarea_shm); }
- ring_init(&pool->ring.hdr); + ring_init(&pool->ring->hdr); init_buffers(pool);
return pool->pool_hdl; @@ -536,6 +550,8 @@ int odp_pool_destroy(odp_pool_t pool_hdl) odp_shm_free(pool->uarea_shm);
pool->reserved = 0; + odp_shm_free(pool->ring_shm); + pool->ring = NULL; UNLOCK(&pool->lock);
return 0; @@ -592,8 +608,6 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], pool_cache_t *cache; uint32_t cache_num, num_ch, num_deq, burst;
- ring = &pool->ring.hdr; - mask = pool->ring_mask; cache = local.cache[pool->pool_idx];
cache_num = cache->num; @@ -620,6 +634,8 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], * and not uint32_t. */ uint32_t data[burst];
+ ring = &pool->ring->hdr; + mask = pool->ring_mask; burst = ring_deq_multi(ring, mask, data, burst); cache_num = burst - num_deq;
@@ -671,12 +687,12 @@ static inline void buffer_free_to_pool(uint32_t pool_id,
cache = local.cache[pool_id]; pool = pool_entry(pool_id); - ring = &pool->ring.hdr; - mask = pool->ring_mask;
/* Special case of a very large free. Move directly to * the global pool. */ if (odp_unlikely(num > CONFIG_POOL_CACHE_SIZE)) { + ring = &pool->ring->hdr; + mask = pool->ring_mask; for (i = 0; i < num; i++) ring_enq(ring, mask, (uint32_t)(uintptr_t)buf[i]);
@@ -691,6 +707,9 @@ static inline void buffer_free_to_pool(uint32_t pool_id, uint32_t index; int burst = CACHE_BURST;
+ ring = &pool->ring->hdr; + mask = pool->ring_mask; + if (odp_unlikely(num > CACHE_BURST)) burst = num;
commit 4aaa74fceecee7b1538546d0b67347569c1239c6
Author: Christophe Milard <christophe.milard@linaro.org>
Date:   Wed Dec 21 13:55:56 2016 +0100
linux-gen: _ishm: fixing typos
Fixing a set of irritating typos, just in comments.
Signed-off-by: Christophe Milard <christophe.milard@linaro.org>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/platform/linux-generic/_ishm.c b/platform/linux-generic/_ishm.c index 8b54be2..f889834 100644 --- a/platform/linux-generic/_ishm.c +++ b/platform/linux-generic/_ishm.c @@ -75,7 +75,7 @@ /* * Maximum number of internal shared memory blocks. * - * This the the number of separate ISHM areas that can be reserved concurrently + * This is the number of separate ISHM areas that can be reserved concurrently * (Note that freeing such blocks may take time, or possibly never happen * if some of the block ownwers never procsync() after free). This number * should take that into account) @@ -240,7 +240,7 @@ static void procsync(void); * Take a piece of the preallocated virtual space to fit "size" bytes. * (best fit). Size must be rounded up to an integer number of pages size. * Possibly split the fragment to keep track of remaining space. - * Returns the allocated fragment (best_fragmnt) and the corresponding address. + * Returns the allocated fragment (best_fragment) and the corresponding address. * External caller must ensure mutex before the call! */ static void *alloc_fragment(uintptr_t size, int block_index, intptr_t align, @@ -286,11 +286,11 @@ static void *alloc_fragment(uintptr_t size, int block_index, intptr_t align,
/* * if there is room between previous fragment and new one, (due to - * alignement requirement) then fragment (split) the space between + * alignment requirement) then fragment (split) the space between * the end of the previous fragment and the beginning of the new one: */ if (border - (uintptr_t)(*best_fragmnt)->start > 0) { - /* frangment space, i.e. take a new fragment descriptor... */ + /* fragment space, i.e. take a new fragment descriptor... */ rem_fragmnt = ishm_ftbl->unused_fragmnts; if (!rem_fragmnt) { ODP_ERR("unable to get shmem fragment descriptor!\n."); @@ -320,7 +320,7 @@ static void *alloc_fragment(uintptr_t size, int block_index, intptr_t align, if (remainder == 0) return (*best_fragmnt)->start;
- /* otherwise, frangment space, i.e. take a new fragment descriptor... */ + /* otherwise, fragment space, i.e. take a new fragment descriptor... */ rem_fragmnt = ishm_ftbl->unused_fragmnts; if (!rem_fragmnt) { ODP_ERR("unable to get shmem fragment descriptor!\n."); @@ -515,7 +515,7 @@ static void delete_file(ishm_block_t *block) * performs the mapping, possibly allocating a fragment of the pre-reserved * VA space if the _ODP_ISHM_SINGLE_VA flag was given. * Sets fd, and returns the mapping address. - * This funstion will also set the _ODP_ISHM_SINGLE_VA flag if the alignment + * This function will also set the _ODP_ISHM_SINGLE_VA flag if the alignment * requires it * Mutex must be assured by the caller. */ @@ -736,7 +736,7 @@ static void procsync(void)
last = ishm_proctable->nb_entries; while (i < last) { - /* if the procecess sequence number doesn't match the main + /* if the process sequence number doesn't match the main * table seq number, this entry is obsolete */ block = &ishm_tbl->block[ishm_proctable->entry[i].block_index]; @@ -1065,7 +1065,7 @@ static int block_free(int block_index) }
/* - * Free and unmap internal shared memory, intentified by its block number: + * Free and unmap internal shared memory, identified by its block number: * return -1 on error. 0 if OK. */ int _odp_ishm_free_by_index(int block_index) @@ -1081,7 +1081,7 @@ int _odp_ishm_free_by_index(int block_index) }
/* - * free and unmap internal shared memory, intentified by its block name: + * free and unmap internal shared memory, identified by its block name: * return -1 on error. 0 if OK. */ int _odp_ishm_free_by_name(const char *name) @@ -1492,8 +1492,8 @@ static int do_odp_ishm_term_local(void) * Go through the table of visible blocks for this process, * decreasing the refcnt of each visible blocks, and issuing * warning for those no longer referenced by any process. - * Note that non-referenced blocks are nor freeed: this is - * deliberate as this would imply that the sementic of the + * Note that non-referenced blocks are not freed: this is + * deliberate as this would imply that the semantic of the * freeing function would differ depending on whether we run * with odp_thread as processes or pthreads. With this approach, * the user should always free the blocks manually, which is @@ -1699,7 +1699,7 @@ int _odp_ishm_status(const char *title) fragmnt; fragmnt = fragmnt->next) nb_unused_frgments++;
- ODP_DBG("ishm: %d fragment used. %d fragements unused. (total=%d)\n", + ODP_DBG("ishm: %d fragment used. %d fragments unused. (total=%d)\n", nb_used_frgments, nb_unused_frgments, nb_used_frgments + nb_unused_frgments);
commit 4d0b7588251ed6a5de781f9220c3ace2831a68b2 Author: Christophe Milard christophe.milard@linaro.org Date: Wed Dec 21 12:11:33 2016 +0100
test: shm: checking exported vs imported block length
Checking that the block size returned by odp_shm_info() matches the exported block length.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/test/linux-generic/validation/api/shmem/shmem_odp2.c b/test/linux-generic/validation/api/shmem/shmem_odp2.c index e39dc76..7d8c682 100644 --- a/test/linux-generic/validation/api/shmem/shmem_odp2.c +++ b/test/linux-generic/validation/api/shmem/shmem_odp2.c @@ -28,6 +28,7 @@ int main(int argc, char *argv[]) odp_instance_t odp1; odp_instance_t odp2; odp_shm_t shm; + odp_shm_info_t info; test_shared_data_t *test_shared_data;
/* odp init: */ @@ -59,6 +60,13 @@ int main(int argc, char *argv[]) return 1; }
+ /* check that the read size matches the allocated size (in other ODP):*/ + if ((odp_shm_info(shm, &info)) || + (info.size != sizeof(*test_shared_data))) { + fprintf(stderr, "error: odp_shm_info failed.\n"); + return 1; + } + test_shared_data = odp_shm_addr(shm); if (test_shared_data == NULL) { fprintf(stderr, "error: odp_shm_addr failed.\n");
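As a reusable variant of the check added above, a small helper can verify that an imported block reports the expected size via odp_shm_info(); the helper name check_shm_size() is hypothetical and only sketches the same test.

#include <stdio.h>
#include <odp_api.h>

/* Hypothetical helper: returns 0 when the block's reported size matches
 * the expected payload size, -1 otherwise. */
static int check_shm_size(odp_shm_t shm, uint64_t expected)
{
	odp_shm_info_t info;

	if (odp_shm_info(shm, &info) != 0) {
		fprintf(stderr, "odp_shm_info() failed\n");
		return -1;
	}

	return (info.size == expected) ? 0 : -1;
}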
commit 41febe9fe6ac762174579e86874d65ec8a2c5485 Author: Christophe Milard christophe.milard@linaro.org Date: Wed Dec 21 12:11:32 2016 +0100
linux-gen: _ishm: exporting/importing user len and flags
The size of the shared memory and its user flags, as given at reserve time, are exported and imported so that odp_shm_info() returns proper values on imported blocks.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/_ishm.c b/platform/linux-generic/_ishm.c index 33ef731..8b54be2 100644 --- a/platform/linux-generic/_ishm.c +++ b/platform/linux-generic/_ishm.c @@ -125,7 +125,9 @@ #define EXPORT_FILE_LINE3_FMT "file: %s" #define EXPORT_FILE_LINE4_FMT "length: %" PRIu64 #define EXPORT_FILE_LINE5_FMT "flags: %" PRIu32 -#define EXPORT_FILE_LINE6_FMT "align: %" PRIu32 +#define EXPORT_FILE_LINE6_FMT "user_length: %" PRIu64 +#define EXPORT_FILE_LINE7_FMT "user_flags: %" PRIu32 +#define EXPORT_FILE_LINE8_FMT "align: %" PRIu32 /* * A fragment describes a piece of the shared virtual address space, * and is allocated only when allocation is done with the _ODP_ISHM_SINGLE_VA @@ -481,7 +483,11 @@ static int create_file(int block_index, huge_flag_t huge, uint64_t len, new_block->filename); fprintf(export_file, EXPORT_FILE_LINE4_FMT "\n", len); fprintf(export_file, EXPORT_FILE_LINE5_FMT "\n", flags); - fprintf(export_file, EXPORT_FILE_LINE6_FMT "\n", align); + fprintf(export_file, EXPORT_FILE_LINE6_FMT "\n", + new_block->user_len); + fprintf(export_file, EXPORT_FILE_LINE7_FMT "\n", + new_block->user_flags); + fprintf(export_file, EXPORT_FILE_LINE8_FMT "\n", align);
fclose(export_file); } @@ -806,6 +812,10 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, else new_block->name[0] = 0;
+ /* save user data: */ + new_block->user_flags = user_flags; + new_block->user_len = size; + /* If a file descriptor is provided, get the real size and map: */ if (fd >= 0) { fstat(fd, &statbuf); @@ -921,9 +931,11 @@ int _odp_ishm_find_exported(const char *remote_name, pid_t external_odp_pid, FILE *export_file; uint64_t len; uint32_t flags; + uint64_t user_len; + uint32_t user_flags; uint32_t align; int fd; - int ret; + int block_index;
/* try to read the block description file: */ snprintf(export_filename, ISHM_FILENAME_MAXLEN, @@ -953,7 +965,13 @@ int _odp_ishm_find_exported(const char *remote_name, pid_t external_odp_pid, if (fscanf(export_file, EXPORT_FILE_LINE5_FMT " ", &flags) != 1) goto error_exp_file;
- if (fscanf(export_file, EXPORT_FILE_LINE6_FMT " ", &align) != 1) + if (fscanf(export_file, EXPORT_FILE_LINE6_FMT " ", &user_len) != 1) + goto error_exp_file; + + if (fscanf(export_file, EXPORT_FILE_LINE7_FMT " ", &user_flags) != 1) + goto error_exp_file; + + if (fscanf(export_file, EXPORT_FILE_LINE8_FMT " ", &align) != 1) goto error_exp_file;
fclose(export_file); @@ -970,13 +988,17 @@ int _odp_ishm_find_exported(const char *remote_name, pid_t external_odp_pid, flags &= ~(uint32_t)_ODP_ISHM_EXPORT;
/* reserve the memory, providing the opened file descriptor: */ - ret = _odp_ishm_reserve(local_name, 0, fd, align, flags, 0); - if (ret < 0) { + block_index = _odp_ishm_reserve(local_name, 0, fd, align, flags, 0); + if (block_index < 0) { close(fd); - return ret; + return block_index; }
- return ret; + /* set inherited info: */ + ishm_tbl->block[block_index].user_flags = user_flags; + ishm_tbl->block[block_index].user_len = user_len; + + return block_index;
error_exp_file: fclose(export_file); diff --git a/test/linux-generic/validation/api/shmem/shmem_linux.c b/test/linux-generic/validation/api/shmem/shmem_linux.c index 39473f3..2f4c762 100644 --- a/test/linux-generic/validation/api/shmem/shmem_linux.c +++ b/test/linux-generic/validation/api/shmem/shmem_linux.c @@ -102,7 +102,8 @@ */ static int read_shmem_attribues(uint64_t ext_odp_pid, const char *blockname, char *filename, uint64_t *len, - uint32_t *flags, uint32_t *align) + uint32_t *flags, uint64_t *user_len, + uint32_t *user_flags, uint32_t *align) { char shm_attr_filename[PATH_MAX]; FILE *export_file; @@ -130,6 +131,12 @@ static int read_shmem_attribues(uint64_t ext_odp_pid, const char *blockname, if (fscanf(export_file, "flags: %" PRIu32 " ", flags) != 1) goto export_file_read_err;
+ if (fscanf(export_file, "user_length: %" PRIu64 " ", user_len) != 1) + goto export_file_read_err; + + if (fscanf(export_file, "user_flags: %" PRIu32 " ", user_flags) != 1) + goto export_file_read_err; + if (fscanf(export_file, "align: %" PRIu32 " ", align) != 1) goto export_file_read_err;
@@ -192,6 +199,8 @@ int main(int argc __attribute__((unused)), char *argv[]) char shm_filename[PATH_MAX];/* shared mem device name, under /dev/shm */ uint64_t len; uint32_t flags; + uint64_t user_len; + uint32_t user_flags; uint32_t align; int shm_fd; test_shared_linux_data_t *addr; @@ -231,7 +240,8 @@ int main(int argc __attribute__((unused)), char *argv[])
/* read the shared memory attributes (includes the shm filename): */ if (read_shmem_attribues(odp_app1, ODP_SHM_NAME, - shm_filename, &len, &flags, &align) != 0) + shm_filename, &len, &flags, + &user_len, &user_flags, &align) != 0) test_failure(fifo_name, fifo_fd, odp_app1);
/* open the shm filename (which is either on /tmp or on hugetlbfs)
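Based on the format strings added above, the tail of an exported block description file now carries the application-visible length and flags in addition to the real mapping length. An illustrative excerpt (the path and values are placeholders, and the first lines of the file are omitted):

file: <path to the shm backing file>
length: 2097152
flags: 12
user_length: 1500
user_flags: 0
align: 64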
commit 6e09a7a90079b6789b83d9234306ac02a8f6d8da Author: Matias Elo matias.elo@nokia.com Date: Wed Dec 21 13:27:22 2016 +0200
validation: test creating pool and timer pool with no name
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/test/common_plat/validation/api/pool/pool.c b/test/common_plat/validation/api/pool/pool.c index d48ac2a..8687941 100644 --- a/test/common_plat/validation/api/pool/pool.c +++ b/test/common_plat/validation/api/pool/pool.c @@ -8,19 +8,14 @@ #include "odp_cunit_common.h" #include "pool.h"
-static int pool_name_number = 1; static const int default_buffer_size = 1500; static const int default_buffer_num = 1000;
static void pool_create_destroy(odp_pool_param_t *params) { odp_pool_t pool; - char pool_name[ODP_POOL_NAME_LEN];
- snprintf(pool_name, sizeof(pool_name), - "test_pool-%d", pool_name_number++); - - pool = odp_pool_create(pool_name, params); + pool = odp_pool_create(NULL, params); CU_ASSERT_FATAL(pool != ODP_POOL_INVALID); CU_ASSERT(odp_pool_to_u64(pool) != odp_pool_to_u64(ODP_POOL_INVALID)); diff --git a/test/common_plat/validation/api/timer/timer.c b/test/common_plat/validation/api/timer/timer.c index 0007639..1945afa 100644 --- a/test/common_plat/validation/api/timer/timer.c +++ b/test/common_plat/validation/api/timer/timer.c @@ -156,7 +156,7 @@ void timer_test_odp_timer_cancel(void) tparam.num_timers = 1; tparam.priv = 0; tparam.clk_src = ODP_CLOCK_CPU; - tp = odp_timer_pool_create("timer_pool0", &tparam); + tp = odp_timer_pool_create(NULL, &tparam); if (tp == ODP_TIMER_POOL_INVALID) CU_FAIL_FATAL("Timer pool create failed");
commit 6388e398998426c4b18cde893131d54630879bdf Author: Matias Elo matias.elo@nokia.com Date: Wed Dec 21 13:27:23 2016 +0200
api: move ODP_*_NAME_LEN definitions from API to implementation
Enables implementations to choose the values of these defines.
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/pool.h b/include/odp/api/spec/pool.h index af2829b..c0de195 100644 --- a/include/odp/api/spec/pool.h +++ b/include/odp/api/spec/pool.h @@ -36,8 +36,10 @@ extern "C" { * Invalid pool */
-/** Maximum pool name length in chars including null char */ -#define ODP_POOL_NAME_LEN 32 +/** + * @def ODP_POOL_NAME_LEN + * Maximum pool name length in chars including null char + */
/** * Pool capabilities diff --git a/include/odp/api/spec/shared_memory.h b/include/odp/api/spec/shared_memory.h index 074c883..1a9c129 100644 --- a/include/odp/api/spec/shared_memory.h +++ b/include/odp/api/spec/shared_memory.h @@ -40,8 +40,10 @@ extern "C" { * Synonym for buffer pool use */
-/** Maximum shared memory block name length in chars including null char */ -#define ODP_SHM_NAME_LEN 32 +/** + * @def ODP_SHM_NAME_LEN + * Maximum shared memory block name length in chars including null char + */
/* * Shared memory flags: diff --git a/include/odp/api/spec/timer.h b/include/odp/api/spec/timer.h index 46a4369..75f9db9 100644 --- a/include/odp/api/spec/timer.h +++ b/include/odp/api/spec/timer.h @@ -90,8 +90,10 @@ typedef enum { ODP_TIMER_NOEVENT = -3 } odp_timer_set_t;
-/** Maximum timer pool name length in chars including null char */ -#define ODP_TIMER_POOL_NAME_LEN 32 +/** + * @def ODP_TIMER_POOL_NAME_LEN + * Maximum timer pool name length in chars including null char + */
/** Timer pool parameters * Timer pool parameters are used when creating and querying timer pools. diff --git a/platform/linux-generic/include/odp/api/plat/pool_types.h b/platform/linux-generic/include/odp/api/plat/pool_types.h index 4e39de5..6baff09 100644 --- a/platform/linux-generic/include/odp/api/plat/pool_types.h +++ b/platform/linux-generic/include/odp/api/plat/pool_types.h @@ -30,6 +30,8 @@ typedef ODP_HANDLE_T(odp_pool_t);
#define ODP_POOL_INVALID _odp_cast_scalar(odp_pool_t, 0xffffffff)
+#define ODP_POOL_NAME_LEN 32 + /** * Pool type */ diff --git a/platform/linux-generic/include/odp/api/plat/shared_memory_types.h b/platform/linux-generic/include/odp/api/plat/shared_memory_types.h index 4d8bbcc..afa0bf9 100644 --- a/platform/linux-generic/include/odp/api/plat/shared_memory_types.h +++ b/platform/linux-generic/include/odp/api/plat/shared_memory_types.h @@ -31,6 +31,8 @@ typedef ODP_HANDLE_T(odp_shm_t); #define ODP_SHM_INVALID _odp_cast_scalar(odp_shm_t, 0) #define ODP_SHM_NULL ODP_SHM_INVALID
+#define ODP_SHM_NAME_LEN 32 + /** Get printable format of odp_shm_t */ static inline uint64_t odp_shm_to_u64(odp_shm_t hdl) { diff --git a/platform/linux-generic/include/odp/api/plat/timer_types.h b/platform/linux-generic/include/odp/api/plat/timer_types.h index 68d6f6f..8821bed 100644 --- a/platform/linux-generic/include/odp/api/plat/timer_types.h +++ b/platform/linux-generic/include/odp/api/plat/timer_types.h @@ -30,6 +30,8 @@ typedef struct odp_timer_pool_s *odp_timer_pool_t;
#define ODP_TIMER_POOL_INVALID NULL
+#define ODP_TIMER_POOL_NAME_LEN 32 + typedef ODP_HANDLE_T(odp_timer_t);
#define ODP_TIMER_INVALID _odp_cast_scalar(odp_timer_t, 0xffffffff)
commit ca3b3950cc901d0e8db1f2bb9961c3bac9491c88 Author: Matias Elo matias.elo@nokia.com Date: Wed Dec 21 13:27:21 2016 +0200
api: unify ODP_*_NAME_LEN specifications
Unify name length definitions to always include the terminating null character.
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/classification.h b/include/odp/api/spec/classification.h index 0e442c7..0e1addd 100644 --- a/include/odp/api/spec/classification.h +++ b/include/odp/api/spec/classification.h @@ -44,7 +44,7 @@ extern "C" {
/** * @def ODP_COS_NAME_LEN - * Maximum ClassOfService name length in chars + * Maximum ClassOfService name length in chars including null char */
/** diff --git a/include/odp/api/spec/pool.h b/include/odp/api/spec/pool.h index 041f4af..af2829b 100644 --- a/include/odp/api/spec/pool.h +++ b/include/odp/api/spec/pool.h @@ -36,7 +36,7 @@ extern "C" { * Invalid pool */
-/** Maximum queue name length in chars */ +/** Maximum pool name length in chars including null char */ #define ODP_POOL_NAME_LEN 32
/** diff --git a/include/odp/api/spec/queue.h b/include/odp/api/spec/queue.h index b0c5e31..7972fea 100644 --- a/include/odp/api/spec/queue.h +++ b/include/odp/api/spec/queue.h @@ -44,7 +44,7 @@ extern "C" {
/** * @def ODP_QUEUE_NAME_LEN - * Maximum queue name length in chars + * Maximum queue name length in chars including null char */
/** diff --git a/include/odp/api/spec/schedule.h b/include/odp/api/spec/schedule.h index f976a4c..8244746 100644 --- a/include/odp/api/spec/schedule.h +++ b/include/odp/api/spec/schedule.h @@ -42,7 +42,7 @@ extern "C" {
/** * @def ODP_SCHED_GROUP_NAME_LEN - * Maximum schedule group name length in chars + * Maximum schedule group name length in chars including null char */
/** diff --git a/include/odp/api/spec/shared_memory.h b/include/odp/api/spec/shared_memory.h index 885751d..074c883 100644 --- a/include/odp/api/spec/shared_memory.h +++ b/include/odp/api/spec/shared_memory.h @@ -40,7 +40,7 @@ extern "C" { * Synonym for buffer pool use */
-/** Maximum shared memory block name length in chars */ +/** Maximum shared memory block name length in chars including null char */ #define ODP_SHM_NAME_LEN 32
/* diff --git a/include/odp/api/spec/timer.h b/include/odp/api/spec/timer.h index 49221c4..46a4369 100644 --- a/include/odp/api/spec/timer.h +++ b/include/odp/api/spec/timer.h @@ -90,7 +90,7 @@ typedef enum { ODP_TIMER_NOEVENT = -3 } odp_timer_set_t;
-/** Maximum timer pool name length in chars (including null char) */ +/** Maximum timer pool name length in chars including null char */ #define ODP_TIMER_POOL_NAME_LEN 32
/** Timer pool parameters
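Since every *_NAME_LEN above now counts the terminating null character, a name buffer of exactly that size can be filled safely with snprintf(). A minimal sketch using ODP_POOL_NAME_LEN; the helper and the name pattern are arbitrary examples, not part of the API.

#include <stdio.h>
#include <odp_api.h>

/* ODP_POOL_NAME_LEN includes the terminating null char, so the buffer
 * below is exactly large enough for the longest valid pool name. */
static odp_pool_t create_named_pool(int id, odp_pool_param_t *param)
{
	char name[ODP_POOL_NAME_LEN];

	snprintf(name, sizeof(name), "pkt_pool_%d", id);

	return odp_pool_create(name, param);
}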
commit dd3031076b80cf6d8b9df6024c44555c77b150fb Author: Bill Fischofer bill.fischofer@linaro.org Date: Wed Dec 28 14:33:12 2016 -0600
api: pktio: pktio documentation typo correction
Signed-off-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/packet_io.h b/include/odp/api/spec/packet_io.h index 2dabfcc..85cd6d1 100644 --- a/include/odp/api/spec/packet_io.h +++ b/include/odp/api/spec/packet_io.h @@ -189,7 +189,7 @@ typedef struct odp_pktin_queue_param_t {
/** Number of input queues to be created * - * When classifier is enabled in odp_ipktin_queue_config() this + * When classifier is enabled in odp_pktin_queue_config() this * value is ignored, otherwise at least one queue is required. * More than one input queues require flow hashing configured. * The maximum value is defined by pktio capability 'max_input_queues'.
commit dfeba061509fb1451351ffb168a458e7d9ae4126 Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 22 16:33:08 2016 +0200
linux-gen: config: increase max num of segments
A higher segment count enables optimizations (no data copy) in operations that modify packet length.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/include/odp_config_internal.h b/platform/linux-generic/include/odp_config_internal.h index e89a6a3..e7d84c9 100644 --- a/platform/linux-generic/include/odp_config_internal.h +++ b/platform/linux-generic/include/odp_config_internal.h @@ -75,7 +75,7 @@ extern "C" { /* * Maximum number of segments per packet */ -#define CONFIG_PACKET_MAX_SEGS 2 +#define CONFIG_PACKET_MAX_SEGS 6
/* * Maximum packet segment size including head- and tailrooms
commit f20066c7949466430186cce217589910fa75fd61 Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 22 16:33:07 2016 +0200
validation: packet: add new concat and extend tests
Added new tests for better packet concat and extend test coverage. Both small and large data lengths are needed to create various segmentation scenarios.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/test/common_plat/validation/api/packet/packet.c b/test/common_plat/validation/api/packet/packet.c index 25252a6..cf11c01 100644 --- a/test/common_plat/validation/api/packet/packet.c +++ b/test/common_plat/validation/api/packet/packet.c @@ -54,6 +54,38 @@ static void _packet_compare_data(odp_packet_t pkt1, odp_packet_t pkt2) } }
+static int fill_data_forward(odp_packet_t pkt, uint32_t offset, uint32_t len, + uint32_t *cur_data) +{ + uint8_t buf[len]; + uint32_t i, data; + + data = *cur_data; + + for (i = 0; i < len; i++) + buf[i] = data++; + + *cur_data = data; + + return odp_packet_copy_from_mem(pkt, offset, len, buf); +} + +static int fill_data_backward(odp_packet_t pkt, uint32_t offset, uint32_t len, + uint32_t *cur_data) +{ + uint8_t buf[len]; + uint32_t i, data; + + data = *cur_data; + + for (i = 0; i < len; i++) + buf[len - i - 1] = data++; + + *cur_data = data; + + return odp_packet_copy_from_mem(pkt, offset, len, buf); +} + int packet_suite_init(void) { odp_pool_param_t params; @@ -1289,6 +1321,459 @@ void packet_test_concatsplit(void) odp_packet_free(pkt); }
+void packet_test_concat_small(void) +{ + odp_pool_capability_t capa; + odp_pool_t pool; + odp_pool_param_t param; + odp_packet_t pkt, pkt2; + int ret; + uint8_t *data; + uint32_t i; + uint32_t len = 32000; + uint8_t buf[len]; + + CU_ASSERT_FATAL(odp_pool_capability(&capa) == 0); + + if (capa.pkt.max_len && capa.pkt.max_len < len) + len = capa.pkt.max_len; + + odp_pool_param_init(¶m); + + param.type = ODP_POOL_PACKET; + param.pkt.len = len; + param.pkt.num = 100; + + pool = odp_pool_create("packet_pool_concat", ¶m); + CU_ASSERT(packet_pool != ODP_POOL_INVALID); + + pkt = odp_packet_alloc(pool, 1); + CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID); + + data = odp_packet_data(pkt); + *data = 0; + + for (i = 0; i < len - 1; i++) { + pkt2 = odp_packet_alloc(pool, 1); + CU_ASSERT_FATAL(pkt2 != ODP_PACKET_INVALID); + + data = odp_packet_data(pkt2); + *data = i + 1; + + ret = odp_packet_concat(&pkt, pkt2); + CU_ASSERT(ret >= 0); + + if (ret < 0) { + odp_packet_free(pkt2); + break; + } + } + + CU_ASSERT(odp_packet_len(pkt) == len); + + len = odp_packet_len(pkt); + + memset(buf, 0, len); + CU_ASSERT(odp_packet_copy_to_mem(pkt, 0, len, buf) == 0); + + for (i = 0; i < len; i++) + CU_ASSERT(buf[i] == (i % 256)); + + odp_packet_free(pkt); + + CU_ASSERT(odp_pool_destroy(pool) == 0); +} + +void packet_test_concat_extend_trunc(void) +{ + odp_pool_capability_t capa; + odp_pool_t pool; + odp_pool_param_t param; + odp_packet_t pkt, pkt2; + int i, ret; + uint32_t alloc_len, ext_len, trunc_len, cur_len; + uint32_t len = 1900; + + CU_ASSERT_FATAL(odp_pool_capability(&capa) == 0); + + if (capa.pkt.max_len && capa.pkt.max_len < len) + len = capa.pkt.max_len; + + alloc_len = len / 8; + ext_len = len / 4; + trunc_len = len / 3; + + odp_pool_param_init(¶m); + + param.type = ODP_POOL_PACKET; + param.pkt.len = len; + param.pkt.num = 100; + + pool = odp_pool_create("packet_pool_concat", ¶m); + CU_ASSERT(packet_pool != ODP_POOL_INVALID); + + pkt = odp_packet_alloc(pool, alloc_len); + CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID); + + cur_len = odp_packet_len(pkt); + + for (i = 0; i < 2; i++) { + pkt2 = odp_packet_alloc(pool, alloc_len); + CU_ASSERT_FATAL(pkt2 != ODP_PACKET_INVALID); + + ret = odp_packet_concat(&pkt, pkt2); + CU_ASSERT(ret >= 0); + + if (ret < 0) + odp_packet_free(pkt2); + + CU_ASSERT(odp_packet_len(pkt) == (cur_len + alloc_len)); + cur_len = odp_packet_len(pkt); + } + + ret = odp_packet_extend_tail(&pkt, ext_len, NULL, NULL); + CU_ASSERT(ret >= 0); + + CU_ASSERT(odp_packet_len(pkt) == (cur_len + ext_len)); + cur_len = odp_packet_len(pkt); + + ret = odp_packet_extend_head(&pkt, ext_len, NULL, NULL); + CU_ASSERT(ret >= 0); + + CU_ASSERT(odp_packet_len(pkt) == (cur_len + ext_len)); + cur_len = odp_packet_len(pkt); + + pkt2 = odp_packet_alloc(pool, alloc_len); + CU_ASSERT_FATAL(pkt2 != ODP_PACKET_INVALID); + + ret = odp_packet_concat(&pkt, pkt2); + CU_ASSERT(ret >= 0); + + if (ret < 0) + odp_packet_free(pkt2); + + CU_ASSERT(odp_packet_len(pkt) == (cur_len + alloc_len)); + cur_len = odp_packet_len(pkt); + + ret = odp_packet_trunc_head(&pkt, trunc_len, NULL, NULL); + CU_ASSERT(ret >= 0); + + CU_ASSERT(odp_packet_len(pkt) == (cur_len - trunc_len)); + cur_len = odp_packet_len(pkt); + + ret = odp_packet_trunc_tail(&pkt, trunc_len, NULL, NULL); + CU_ASSERT(ret >= 0); + + CU_ASSERT(odp_packet_len(pkt) == (cur_len - trunc_len)); + cur_len = odp_packet_len(pkt); + + odp_packet_free(pkt); + + CU_ASSERT(odp_pool_destroy(pool) == 0); +} + +void packet_test_extend_small(void) +{ + odp_pool_capability_t capa; + odp_pool_t pool; 
+ odp_pool_param_t param; + odp_packet_t pkt; + int ret, round; + uint8_t *data; + uint32_t i, seg_len; + int tail = 1; + uint32_t len = 32000; + uint8_t buf[len]; + + CU_ASSERT_FATAL(odp_pool_capability(&capa) == 0); + + if (capa.pkt.max_len && capa.pkt.max_len < len) + len = capa.pkt.max_len; + + odp_pool_param_init(¶m); + + param.type = ODP_POOL_PACKET; + param.pkt.len = len; + param.pkt.num = 100; + + pool = odp_pool_create("packet_pool_extend", ¶m); + CU_ASSERT(packet_pool != ODP_POOL_INVALID); + + for (round = 0; round < 2; round++) { + pkt = odp_packet_alloc(pool, 1); + CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID); + + data = odp_packet_data(pkt); + *data = 0; + + for (i = 0; i < len - 1; i++) { + if (tail) { + ret = odp_packet_extend_tail(&pkt, 1, + (void **)&data, + &seg_len); + CU_ASSERT(ret >= 0); + } else { + ret = odp_packet_extend_head(&pkt, 1, + (void **)&data, + &seg_len); + CU_ASSERT(ret >= 0); + } + + if (ret < 0) + break; + + if (tail) { + /* assert needs brackets */ + CU_ASSERT(seg_len == 1); + } else { + CU_ASSERT(seg_len > 0); + } + + *data = i + 1; + } + + CU_ASSERT(odp_packet_len(pkt) == len); + + len = odp_packet_len(pkt); + + memset(buf, 0, len); + CU_ASSERT(odp_packet_copy_to_mem(pkt, 0, len, buf) == 0); + + for (i = 0; i < len; i++) { + if (tail) { + /* assert needs brackets */ + CU_ASSERT(buf[i] == (i % 256)); + } else { + CU_ASSERT(buf[len - 1 - i] == (i % 256)); + } + } + + odp_packet_free(pkt); + + tail = 0; + } + + CU_ASSERT(odp_pool_destroy(pool) == 0); +} + +void packet_test_extend_large(void) +{ + odp_pool_capability_t capa; + odp_pool_t pool; + odp_pool_param_t param; + odp_packet_t pkt; + int ret, round; + uint8_t *data; + uint32_t i, seg_len, ext_len, cur_len, cur_data; + int tail = 1; + int num_div = 16; + int div = 1; + uint32_t len = 32000; + uint8_t buf[len]; + + CU_ASSERT_FATAL(odp_pool_capability(&capa) == 0); + + if (capa.pkt.max_len && capa.pkt.max_len < len) + len = capa.pkt.max_len; + + odp_pool_param_init(¶m); + + param.type = ODP_POOL_PACKET; + param.pkt.len = len; + param.pkt.num = 100; + + pool = odp_pool_create("packet_pool_extend", ¶m); + CU_ASSERT(packet_pool != ODP_POOL_INVALID); + + for (round = 0; round < 2 * num_div; round++) { + ext_len = len / div; + cur_len = ext_len; + + div++; + if (div > num_div) { + /* test extend head */ + div = 1; + tail = 0; + } + + pkt = odp_packet_alloc(pool, ext_len); + CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID); + + cur_data = 0; + + if (tail) { + ret = fill_data_forward(pkt, 0, ext_len, &cur_data); + CU_ASSERT(ret == 0); + } else { + ret = fill_data_backward(pkt, 0, ext_len, &cur_data); + CU_ASSERT(ret == 0); + } + + while (cur_len < len) { + if ((len - cur_len) < ext_len) + ext_len = len - cur_len; + + if (tail) { + ret = odp_packet_extend_tail(&pkt, ext_len, + (void **)&data, + &seg_len); + CU_ASSERT(ret >= 0); + } else { + ret = odp_packet_extend_head(&pkt, ext_len, + (void **)&data, + &seg_len); + CU_ASSERT(ret >= 0); + } + + if (ret < 0) + break; + + if (tail) { + /* assert needs brackets */ + CU_ASSERT((seg_len > 0) && + (seg_len <= ext_len)); + ret = fill_data_forward(pkt, cur_len, ext_len, + &cur_data); + CU_ASSERT(ret == 0); + } else { + CU_ASSERT(seg_len > 0); + CU_ASSERT(data == odp_packet_data(pkt)); + ret = fill_data_backward(pkt, 0, ext_len, + &cur_data); + CU_ASSERT(ret == 0); + } + + cur_len += ext_len; + } + + CU_ASSERT(odp_packet_len(pkt) == len); + + len = odp_packet_len(pkt); + + memset(buf, 0, len); + CU_ASSERT(odp_packet_copy_to_mem(pkt, 0, len, buf) == 0); + + for (i = 0; i < 
len; i++) { + if (tail) { + /* assert needs brackets */ + CU_ASSERT(buf[i] == (i % 256)); + } else { + CU_ASSERT(buf[len - 1 - i] == (i % 256)); + } + } + + odp_packet_free(pkt); + } + + CU_ASSERT(odp_pool_destroy(pool) == 0); +} + +void packet_test_extend_mix(void) +{ + odp_pool_capability_t capa; + odp_pool_t pool; + odp_pool_param_t param; + odp_packet_t pkt; + int ret, round; + uint8_t *data; + uint32_t i, seg_len, ext_len, cur_len, cur_data; + int small_count; + int tail = 1; + uint32_t len = 32000; + uint8_t buf[len]; + + CU_ASSERT_FATAL(odp_pool_capability(&capa) == 0); + + if (capa.pkt.max_len && capa.pkt.max_len < len) + len = capa.pkt.max_len; + + odp_pool_param_init(¶m); + + param.type = ODP_POOL_PACKET; + param.pkt.len = len; + param.pkt.num = 100; + + pool = odp_pool_create("packet_pool_extend", ¶m); + CU_ASSERT(packet_pool != ODP_POOL_INVALID); + + for (round = 0; round < 2; round++) { + small_count = 30; + ext_len = len / 10; + cur_len = ext_len; + + pkt = odp_packet_alloc(pool, ext_len); + CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID); + + cur_data = 0; + + if (tail) { + ret = fill_data_forward(pkt, 0, ext_len, &cur_data); + CU_ASSERT(ret == 0); + } else { + ret = fill_data_backward(pkt, 0, ext_len, &cur_data); + CU_ASSERT(ret == 0); + } + + while (cur_len < len) { + if (small_count) { + small_count--; + ext_len = len / 100; + } else { + ext_len = len / 4; + } + + if ((len - cur_len) < ext_len) + ext_len = len - cur_len; + + if (tail) { + ret = odp_packet_extend_tail(&pkt, ext_len, + (void **)&data, + &seg_len); + CU_ASSERT(ret >= 0); + CU_ASSERT((seg_len > 0) && + (seg_len <= ext_len)); + ret = fill_data_forward(pkt, cur_len, ext_len, + &cur_data); + CU_ASSERT(ret == 0); + } else { + ret = odp_packet_extend_head(&pkt, ext_len, + (void **)&data, + &seg_len); + CU_ASSERT(ret >= 0); + CU_ASSERT(seg_len > 0); + CU_ASSERT(data == odp_packet_data(pkt)); + ret = fill_data_backward(pkt, 0, ext_len, + &cur_data); + CU_ASSERT(ret == 0); + } + + cur_len += ext_len; + } + + CU_ASSERT(odp_packet_len(pkt) == len); + + len = odp_packet_len(pkt); + + memset(buf, 0, len); + CU_ASSERT(odp_packet_copy_to_mem(pkt, 0, len, buf) == 0); + + for (i = 0; i < len; i++) { + if (tail) { + /* assert needs brackets */ + CU_ASSERT(buf[i] == (i % 256)); + } else { + CU_ASSERT(buf[len - 1 - i] == (i % 256)); + } + } + + odp_packet_free(pkt); + + tail = 0; + } + + CU_ASSERT(odp_pool_destroy(pool) == 0); +} + void packet_test_align(void) { odp_packet_t pkt; @@ -1414,6 +1899,11 @@ odp_testinfo_t packet_suite[] = { ODP_TEST_INFO(packet_test_copy), ODP_TEST_INFO(packet_test_copydata), ODP_TEST_INFO(packet_test_concatsplit), + ODP_TEST_INFO(packet_test_concat_small), + ODP_TEST_INFO(packet_test_concat_extend_trunc), + ODP_TEST_INFO(packet_test_extend_small), + ODP_TEST_INFO(packet_test_extend_large), + ODP_TEST_INFO(packet_test_extend_mix), ODP_TEST_INFO(packet_test_align), ODP_TEST_INFO(packet_test_offset), ODP_TEST_INFO_NULL, diff --git a/test/common_plat/validation/api/packet/packet.h b/test/common_plat/validation/api/packet/packet.h index 10a377c..9bc3d63 100644 --- a/test/common_plat/validation/api/packet/packet.h +++ b/test/common_plat/validation/api/packet/packet.h @@ -30,6 +30,11 @@ void packet_test_add_rem_data(void); void packet_test_copy(void); void packet_test_copydata(void); void packet_test_concatsplit(void); +void packet_test_concat_small(void); +void packet_test_concat_extend_trunc(void); +void packet_test_extend_small(void); +void packet_test_extend_large(void); +void packet_test_extend_mix(void); 
void packet_test_align(void); void packet_test_offset(void);
commit 9fe4043f4cf402a4695e8c0e4887e87da60fcb33 Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 22 16:33:06 2016 +0200
linux-gen: packet: optimize concat
Optimized the concat operation to avoid a packet copy when the destination packet has room to link the source packet segments. Since concat uses extend tail, extend tail was also modified to handle variable segment sizes.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h index 4cc51d3..6149290 100644 --- a/platform/linux-generic/include/odp_buffer_internal.h +++ b/platform/linux-generic/include/odp_buffer_internal.h @@ -75,7 +75,7 @@ struct odp_buffer_hdr_t { uint32_t size;
/* Initial buffer data pointer and length */ - void *base_data; + uint8_t *base_data; uint32_t base_len; uint8_t *buf_end;
diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index 2d9e3e6..58b6f32 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -74,6 +74,15 @@ static inline void *packet_tail(odp_packet_hdr_t *pkt_hdr) return pkt_hdr->buf_hdr.seg[last].data + seg_len; }
+static inline uint32_t seg_headroom(odp_packet_hdr_t *pkt_hdr, int seg) +{ + odp_buffer_hdr_t *hdr = pkt_hdr->buf_hdr.seg[seg].hdr; + uint8_t *base = hdr->base_data; + uint8_t *head = pkt_hdr->buf_hdr.seg[seg].data; + + return CONFIG_PACKET_HEADROOM + (head - base); +} + static inline uint32_t seg_tailroom(odp_packet_hdr_t *pkt_hdr, int seg) { uint32_t seg_len = pkt_hdr->buf_hdr.seg[seg].len; @@ -297,7 +306,7 @@ static inline int num_segments(uint32_t len) return num; }
-static inline void copy_all_segs(odp_packet_hdr_t *to, odp_packet_hdr_t *from) +static inline void add_all_segs(odp_packet_hdr_t *to, odp_packet_hdr_t *from) { int i; int n = to->buf_hdr.segcount; @@ -313,52 +322,53 @@ static inline void copy_all_segs(odp_packet_hdr_t *to, odp_packet_hdr_t *from) }
static inline void copy_num_segs(odp_packet_hdr_t *to, odp_packet_hdr_t *from, - int num) + int first, int num) { int i;
for (i = 0; i < num; i++) { - to->buf_hdr.seg[i].hdr = from->buf_hdr.seg[num + i].hdr; - to->buf_hdr.seg[i].data = from->buf_hdr.seg[num + i].data; - to->buf_hdr.seg[i].len = from->buf_hdr.seg[num + i].len; + to->buf_hdr.seg[i].hdr = from->buf_hdr.seg[first + i].hdr; + to->buf_hdr.seg[i].data = from->buf_hdr.seg[first + i].data; + to->buf_hdr.seg[i].len = from->buf_hdr.seg[first + i].len; }
to->buf_hdr.segcount = num; }
-static inline odp_packet_hdr_t *add_segments(odp_packet_hdr_t *pkt_hdr, - uint32_t len, int head) +static inline odp_packet_hdr_t *alloc_segments(pool_t *pool, int num) { - pool_t *pool = pool_entry_from_hdl(pkt_hdr->buf_hdr.pool_hdl); - odp_packet_hdr_t *new_hdr; - int num, ret; - uint32_t seg_len, offset; + odp_buffer_t buf[num]; + int ret;
- num = num_segments(len); + ret = buffer_alloc_multi(pool, buf, NULL, num); + if (odp_unlikely(ret != num)) { + if (ret > 0) + buffer_free_multi(buf, ret);
- if ((pkt_hdr->buf_hdr.segcount + num) > CONFIG_PACKET_MAX_SEGS) return NULL; + }
- { - odp_buffer_t buf[num]; + return init_segments(buf, num); +}
- ret = buffer_alloc_multi(pool, buf, NULL, num); - if (odp_unlikely(ret != num)) { - if (ret > 0) - buffer_free_multi(buf, ret); +static inline odp_packet_hdr_t *add_segments(odp_packet_hdr_t *pkt_hdr, + pool_t *pool, uint32_t len, + int num, int head) +{ + odp_packet_hdr_t *new_hdr; + uint32_t seg_len, offset;
- return NULL; - } + new_hdr = alloc_segments(pool, num);
- new_hdr = init_segments(buf, num); - } + if (new_hdr == NULL) + return NULL;
seg_len = len - ((num - 1) * pool->max_seg_len); offset = pool->max_seg_len - seg_len;
if (head) { /* add into the head*/ - copy_all_segs(new_hdr, pkt_hdr); + add_all_segs(new_hdr, pkt_hdr);
/* adjust first segment length */ new_hdr->buf_hdr.seg[0].data += offset; @@ -374,7 +384,7 @@ static inline odp_packet_hdr_t *add_segments(odp_packet_hdr_t *pkt_hdr, int last;
/* add into the tail */ - copy_all_segs(pkt_hdr, new_hdr); + add_all_segs(pkt_hdr, new_hdr);
/* adjust last segment length */ last = packet_last_seg(pkt_hdr); @@ -387,49 +397,60 @@ static inline odp_packet_hdr_t *add_segments(odp_packet_hdr_t *pkt_hdr, return pkt_hdr; }
+static inline void free_bufs(odp_packet_hdr_t *pkt_hdr, int first, int num) +{ + int i; + odp_buffer_t buf[num]; + + for (i = 0; i < num; i++) + buf[i] = buffer_handle(pkt_hdr->buf_hdr.seg[first + i].hdr); + + buffer_free_multi(buf, num); +} + static inline odp_packet_hdr_t *free_segments(odp_packet_hdr_t *pkt_hdr, int num, uint32_t free_len, uint32_t pull_len, int head) { - int i; - odp_buffer_t buf[num]; - int n = pkt_hdr->buf_hdr.segcount - num; + int num_remain = pkt_hdr->buf_hdr.segcount - num;
if (head) { odp_packet_hdr_t *new_hdr; + int i; + odp_buffer_t buf[num];
for (i = 0; i < num; i++) buf[i] = buffer_handle(pkt_hdr->buf_hdr.seg[i].hdr);
/* First remaining segment is the new packet descriptor */ new_hdr = pkt_hdr->buf_hdr.seg[num].hdr; - copy_num_segs(new_hdr, pkt_hdr, n); + + copy_num_segs(new_hdr, pkt_hdr, num, num_remain); packet_seg_copy_md(new_hdr, pkt_hdr);
/* Tailroom not changed */ new_hdr->tailroom = pkt_hdr->tailroom; - /* No headroom in non-first segments */ - new_hdr->headroom = 0; + new_hdr->headroom = seg_headroom(new_hdr, 0); new_hdr->frame_len = pkt_hdr->frame_len - free_len;
pull_head(new_hdr, pull_len);
pkt_hdr = new_hdr; + + buffer_free_multi(buf, num); } else { - for (i = 0; i < num; i++) - buf[i] = buffer_handle(pkt_hdr->buf_hdr.seg[n + i].hdr); + /* Free last 'num' bufs */ + free_bufs(pkt_hdr, num_remain, num);
/* Head segment remains, no need to copy or update majority * of the metadata. */ - pkt_hdr->buf_hdr.segcount = n; + pkt_hdr->buf_hdr.segcount = num_remain; pkt_hdr->frame_len -= free_len; - pkt_hdr->tailroom = seg_tailroom(pkt_hdr, n - 1); + pkt_hdr->tailroom = seg_tailroom(pkt_hdr, num_remain - 1);
pull_tail(pkt_hdr, pull_len); }
- buffer_free_multi(buf, num); - return pkt_hdr; }
@@ -530,19 +551,10 @@ void odp_packet_free(odp_packet_t pkt) odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); int num_seg = pkt_hdr->buf_hdr.segcount;
- if (odp_likely(CONFIG_PACKET_MAX_SEGS == 1 || num_seg == 1)) { + if (odp_likely(CONFIG_PACKET_MAX_SEGS == 1 || num_seg == 1)) buffer_free_multi((odp_buffer_t *)&pkt, 1); - } else { - odp_buffer_t buf[num_seg]; - int i; - - buf[0] = (odp_buffer_t)pkt; - - for (i = 1; i < num_seg; i++) - buf[i] = buffer_handle(pkt_hdr->buf_hdr.seg[i].hdr); - - buffer_free_multi(buf, num_seg); - } + else + free_bufs(pkt_hdr, 0, num_seg); }
void odp_packet_free_multi(const odp_packet_t pkt[], int num) @@ -676,25 +688,277 @@ void *odp_packet_push_head(odp_packet_t pkt, uint32_t len) return packet_data(pkt_hdr); }
+static inline uint32_t pack_seg_head(odp_packet_hdr_t *pkt_hdr, int seg) +{ + odp_buffer_hdr_t *hdr = pkt_hdr->buf_hdr.seg[seg].hdr; + uint32_t len = pkt_hdr->buf_hdr.seg[seg].len; + uint8_t *src = pkt_hdr->buf_hdr.seg[seg].data; + uint8_t *dst = hdr->base_data; + + if (dst != src) { + memmove(dst, src, len); + pkt_hdr->buf_hdr.seg[seg].data = dst; + } + + return len; +} + +static inline uint32_t pack_seg_tail(odp_packet_hdr_t *pkt_hdr, int seg) +{ + odp_buffer_hdr_t *hdr = pkt_hdr->buf_hdr.seg[seg].hdr; + uint32_t len = pkt_hdr->buf_hdr.seg[seg].len; + uint8_t *src = pkt_hdr->buf_hdr.seg[seg].data; + uint8_t *dst = hdr->base_data + hdr->base_len - len; + + if (dst != src) { + memmove(dst, src, len); + pkt_hdr->buf_hdr.seg[seg].data = dst; + } + + return len; +} + +static inline uint32_t fill_seg_head(odp_packet_hdr_t *pkt_hdr, int dst_seg, + int src_seg, uint32_t max_len) +{ + uint32_t len = pkt_hdr->buf_hdr.seg[src_seg].len; + uint8_t *src = pkt_hdr->buf_hdr.seg[src_seg].data; + uint32_t offset = pkt_hdr->buf_hdr.seg[dst_seg].len; + uint8_t *dst = pkt_hdr->buf_hdr.seg[dst_seg].data + offset; + + if (len > max_len) + len = max_len; + + memmove(dst, src, len); + + pkt_hdr->buf_hdr.seg[dst_seg].len += len; + pkt_hdr->buf_hdr.seg[src_seg].len -= len; + pkt_hdr->buf_hdr.seg[src_seg].data += len; + + if (pkt_hdr->buf_hdr.seg[src_seg].len == 0) { + odp_buffer_hdr_t *hdr = pkt_hdr->buf_hdr.seg[src_seg].hdr; + + pkt_hdr->buf_hdr.seg[src_seg].data = hdr->base_data; + } + + return len; +} + +static inline uint32_t fill_seg_tail(odp_packet_hdr_t *pkt_hdr, int dst_seg, + int src_seg, uint32_t max_len) +{ + uint32_t src_len = pkt_hdr->buf_hdr.seg[src_seg].len; + uint8_t *src = pkt_hdr->buf_hdr.seg[src_seg].data; + uint8_t *dst = pkt_hdr->buf_hdr.seg[dst_seg].data; + uint32_t len = src_len; + + if (len > max_len) + len = max_len; + + src += src_len - len; + dst -= len; + + memmove(dst, src, len); + + pkt_hdr->buf_hdr.seg[dst_seg].data -= len; + pkt_hdr->buf_hdr.seg[dst_seg].len += len; + pkt_hdr->buf_hdr.seg[src_seg].len -= len; + + if (pkt_hdr->buf_hdr.seg[src_seg].len == 0) { + odp_buffer_hdr_t *hdr = pkt_hdr->buf_hdr.seg[src_seg].hdr; + + pkt_hdr->buf_hdr.seg[src_seg].data = hdr->base_data; + } + + return len; +} + +static inline int move_data_to_head(odp_packet_hdr_t *pkt_hdr, int segs) +{ + int dst_seg, src_seg; + uint32_t base_len, len, free_len; + uint32_t moved = 0; + + base_len = pkt_hdr->buf_hdr.base_len; + + for (dst_seg = 0; dst_seg < segs; dst_seg++) { + len = pack_seg_head(pkt_hdr, dst_seg); + moved += len; + + if (len == base_len) + continue; + + free_len = base_len - len; + + for (src_seg = dst_seg + 1; src_seg < segs; src_seg++) { + len = fill_seg_head(pkt_hdr, dst_seg, src_seg, + free_len); + moved += len; + + if (len == free_len) { + /* dst seg is full */ + break; + } + + /* src seg is empty */ + free_len -= len; + } + + if (moved == pkt_hdr->frame_len) + break; + } + + /* last segment which have data */ + return dst_seg; +} + +static inline int move_data_to_tail(odp_packet_hdr_t *pkt_hdr, int segs) +{ + int dst_seg, src_seg; + uint32_t base_len, len, free_len; + uint32_t moved = 0; + + base_len = pkt_hdr->buf_hdr.base_len; + + for (dst_seg = segs - 1; dst_seg >= 0; dst_seg--) { + len = pack_seg_tail(pkt_hdr, dst_seg); + moved += len; + + if (len == base_len) + continue; + + free_len = base_len - len; + + for (src_seg = dst_seg - 1; src_seg >= 0; src_seg--) { + len = fill_seg_tail(pkt_hdr, dst_seg, src_seg, + free_len); + moved += len; + + if (len == free_len) { + /* dst seg is full 
*/ + break; + } + + /* src seg is empty */ + free_len -= len; + } + + if (moved == pkt_hdr->frame_len) + break; + } + + /* first segment which have data */ + return dst_seg; +} + +static inline void reset_seg(odp_packet_hdr_t *pkt_hdr, int first, int num) +{ + odp_buffer_hdr_t *hdr; + void *base; + int i; + uint32_t base_len = pkt_hdr->buf_hdr.base_len; + + for (i = first; i < first + num; i++) { + hdr = pkt_hdr->buf_hdr.seg[i].hdr; + base = hdr->base_data; + pkt_hdr->buf_hdr.seg[i].len = base_len; + pkt_hdr->buf_hdr.seg[i].data = base; + } +} + int odp_packet_extend_head(odp_packet_t *pkt, uint32_t len, void **data_ptr, uint32_t *seg_len) { odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(*pkt); - odp_packet_hdr_t *new_hdr; - uint32_t headroom = pkt_hdr->headroom; + uint32_t frame_len = pkt_hdr->frame_len; + uint32_t headroom = pkt_hdr->headroom; + int ret = 0;
if (len > headroom) { - push_head(pkt_hdr, headroom); - new_hdr = add_segments(pkt_hdr, len - headroom, 1); + pool_t *pool = pool_entry_from_hdl(pkt_hdr->buf_hdr.pool_hdl); + int num; + int segs;
- if (new_hdr == NULL) { - /* segment alloc failed, rollback changes */ - pull_head(pkt_hdr, headroom); + if (odp_unlikely((frame_len + len) > pool->max_len)) return -1; - }
- *pkt = packet_handle(new_hdr); - pkt_hdr = new_hdr; + num = num_segments(len - headroom); + segs = pkt_hdr->buf_hdr.segcount; + + if (odp_unlikely((segs + num) > CONFIG_PACKET_MAX_SEGS)) { + /* Cannot directly add new segments */ + odp_packet_hdr_t *new_hdr; + int new_segs = 0; + int free_segs = 0; + uint32_t base_len = pkt_hdr->buf_hdr.base_len; + uint32_t offset; + + num = num_segments(frame_len + len); + + if (num > segs) { + /* Allocate additional segments */ + new_segs = num - segs; + new_hdr = alloc_segments(pool, new_segs); + + if (new_hdr == NULL) + return -1; + + } else if (num < segs) { + free_segs = segs - num; + } + + /* Pack all data to packet tail */ + move_data_to_tail(pkt_hdr, segs); + reset_seg(pkt_hdr, 0, segs); + + if (new_segs) { + add_all_segs(new_hdr, pkt_hdr); + packet_seg_copy_md(new_hdr, pkt_hdr); + segs += new_segs; + + pkt_hdr = new_hdr; + *pkt = packet_handle(pkt_hdr); + } else if (free_segs) { + new_hdr = pkt_hdr->buf_hdr.seg[free_segs].hdr; + packet_seg_copy_md(new_hdr, pkt_hdr); + + /* Free extra segs */ + free_bufs(pkt_hdr, 0, free_segs); + + segs -= free_segs; + pkt_hdr = new_hdr; + *pkt = packet_handle(pkt_hdr); + } + + frame_len += len; + offset = (segs * base_len) - frame_len; + + pkt_hdr->buf_hdr.seg[0].data += offset; + pkt_hdr->buf_hdr.seg[0].len -= offset; + + pkt_hdr->buf_hdr.segcount = segs; + pkt_hdr->frame_len = frame_len; + pkt_hdr->headroom = offset + pool->headroom; + pkt_hdr->tailroom = pool->tailroom; + + /* Data was moved */ + ret = 1; + } else { + void *ptr; + + push_head(pkt_hdr, headroom); + ptr = add_segments(pkt_hdr, pool, len - headroom, + num, 1); + + if (ptr == NULL) { + /* segment alloc failed, rollback changes */ + pull_head(pkt_hdr, headroom); + return -1; + } + + *pkt = packet_handle(ptr); + pkt_hdr = ptr; + } } else { push_head(pkt_hdr, len); } @@ -705,7 +969,7 @@ int odp_packet_extend_head(odp_packet_t *pkt, uint32_t len, if (seg_len) *seg_len = packet_first_seg_len(pkt_hdr);
- return 0; + return ret; }
void *odp_packet_pull_head(odp_packet_t pkt, uint32_t len) @@ -769,30 +1033,96 @@ void *odp_packet_push_tail(odp_packet_t pkt, uint32_t len) }
int odp_packet_extend_tail(odp_packet_t *pkt, uint32_t len, - void **data_ptr, uint32_t *seg_len) + void **data_ptr, uint32_t *seg_len_out) { odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(*pkt); - void *ret; - uint32_t tailroom = pkt_hdr->tailroom; - uint32_t tail_off = pkt_hdr->frame_len; + uint32_t frame_len = pkt_hdr->frame_len; + uint32_t tailroom = pkt_hdr->tailroom; + uint32_t tail_off = frame_len; + int ret = 0;
if (len > tailroom) { - push_tail(pkt_hdr, tailroom); - ret = add_segments(pkt_hdr, len - tailroom, 0); + pool_t *pool = pool_entry_from_hdl(pkt_hdr->buf_hdr.pool_hdl); + int num; + int segs;
- if (ret == NULL) { - /* segment alloc failed, rollback changes */ - pull_tail(pkt_hdr, tailroom); + if (odp_unlikely((frame_len + len) > pool->max_len)) return -1; + + num = num_segments(len - tailroom); + segs = pkt_hdr->buf_hdr.segcount; + + if (odp_unlikely((segs + num) > CONFIG_PACKET_MAX_SEGS)) { + /* Cannot directly add new segments */ + odp_packet_hdr_t *new_hdr; + int new_segs = 0; + int free_segs = 0; + uint32_t base_len = pkt_hdr->buf_hdr.base_len; + uint32_t offset; + + num = num_segments(frame_len + len); + + if (num > segs) { + /* Allocate additional segments */ + new_segs = num - segs; + new_hdr = alloc_segments(pool, new_segs); + + if (new_hdr == NULL) + return -1; + + } else if (num < segs) { + free_segs = segs - num; + } + + /* Pack all data to packet head */ + move_data_to_head(pkt_hdr, segs); + reset_seg(pkt_hdr, 0, segs); + + if (new_segs) { + /* Add new segs */ + add_all_segs(pkt_hdr, new_hdr); + segs += new_segs; + } else if (free_segs) { + /* Free extra segs */ + free_bufs(pkt_hdr, segs - free_segs, free_segs); + + segs -= free_segs; + } + + frame_len += len; + offset = (segs * base_len) - frame_len; + + pkt_hdr->buf_hdr.seg[segs - 1].len -= offset; + + pkt_hdr->buf_hdr.segcount = segs; + pkt_hdr->frame_len = frame_len; + pkt_hdr->headroom = pool->headroom; + pkt_hdr->tailroom = offset + pool->tailroom; + + /* Data was moved */ + ret = 1; + } else { + void *ptr; + + push_tail(pkt_hdr, tailroom); + + ptr = add_segments(pkt_hdr, pool, len - tailroom, + num, 0); + + if (ptr == NULL) { + /* segment alloc failed, rollback changes */ + pull_tail(pkt_hdr, tailroom); + return -1; + } } } else { push_tail(pkt_hdr, len); }
if (data_ptr) - *data_ptr = packet_map(pkt_hdr, tail_off, seg_len, NULL); + *data_ptr = packet_map(pkt_hdr, tail_off, seg_len_out, NULL);
- return 0; + return ret; }
void *odp_packet_pull_tail(odp_packet_t pkt, uint32_t len) @@ -1199,19 +1529,38 @@ int odp_packet_align(odp_packet_t *pkt, uint32_t offset, uint32_t len,
int odp_packet_concat(odp_packet_t *dst, odp_packet_t src) { - uint32_t dst_len = odp_packet_len(*dst); - uint32_t src_len = odp_packet_len(src); - - if (odp_packet_extend_tail(dst, src_len, NULL, NULL) >= 0) { - (void)odp_packet_copy_from_pkt(*dst, dst_len, - src, 0, src_len); - if (src != *dst) + odp_packet_hdr_t *dst_hdr = odp_packet_hdr(*dst); + odp_packet_hdr_t *src_hdr = odp_packet_hdr(src); + int dst_segs = dst_hdr->buf_hdr.segcount; + int src_segs = src_hdr->buf_hdr.segcount; + odp_pool_t dst_pool = dst_hdr->buf_hdr.pool_hdl; + odp_pool_t src_pool = src_hdr->buf_hdr.pool_hdl; + uint32_t dst_len = dst_hdr->frame_len; + uint32_t src_len = src_hdr->frame_len; + + /* Do a copy if resulting packet would be out of segments or packets + * are from different pools. */ + if (odp_unlikely((dst_segs + src_segs) > CONFIG_PACKET_MAX_SEGS) || + odp_unlikely(dst_pool != src_pool)) { + if (odp_packet_extend_tail(dst, src_len, NULL, NULL) >= 0) { + (void)odp_packet_copy_from_pkt(*dst, dst_len, + src, 0, src_len); odp_packet_free(src);
- return 0; + /* Data was moved in memory */ + return 1; + } + + return -1; }
- return -1; + add_all_segs(dst_hdr, src_hdr); + + dst_hdr->frame_len = dst_len + src_len; + dst_hdr->tailroom = src_hdr->tailroom; + + /* Data was not moved in memory */ + return 0; }
int odp_packet_split(odp_packet_t *pkt, uint32_t len, odp_packet_t *tail)
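With the optimization above, odp_packet_concat() has three outcomes: 0 when the source segments were linked in place, >0 when data had to be moved in memory, and <0 on failure (the copy fallback above only frees 'src' on success). A hedged usage sketch; the helper name is hypothetical and the packets are assumed to be valid.

#include <odp_api.h>

/* Sketch of handling the three concat outcomes; 'dst' and 'src' are
 * assumed to be valid, distinct packets. */
static int append_packet(odp_packet_t *dst, odp_packet_t src)
{
	int ret = odp_packet_concat(dst, src);

	if (ret < 0) {
		/* Failure: 'src' was not consumed, the caller still owns it */
		return -1;
	}

	/* ret == 0: segments were linked, no data copy.
	 * ret  > 0: data was moved in memory.
	 * In both cases '*dst' is the valid handle and 'src' must no
	 * longer be used. */
	return 0;
}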
commit 4d9de3d54c49ec69d2366a01ad5b2b987943a5c5 Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 22 16:33:05 2016 +0200
validation: packet: concat-split test bug fix
Successful concat calls return either 0 or a value greater than 0.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/test/common_plat/validation/api/packet/packet.c b/test/common_plat/validation/api/packet/packet.c index 3ad00ed..25252a6 100644 --- a/test/common_plat/validation/api/packet/packet.c +++ b/test/common_plat/validation/api/packet/packet.c @@ -1232,7 +1232,7 @@ void packet_test_concatsplit(void) CU_ASSERT(pkt_len == odp_packet_len(pkt)); CU_ASSERT(pkt_len == odp_packet_len(pkt2));
- CU_ASSERT(odp_packet_concat(&pkt, pkt2) == 0); + CU_ASSERT(odp_packet_concat(&pkt, pkt2) >= 0); CU_ASSERT(odp_packet_len(pkt) == pkt_len * 2); _packet_compare_offset(pkt, 0, pkt, pkt_len, pkt_len);
@@ -1262,7 +1262,7 @@ void packet_test_concatsplit(void) _packet_compare_offset(splits[0], 0, segmented_test_packet, pkt_len / 2, odp_packet_len(splits[0]));
- CU_ASSERT(odp_packet_concat(&pkt, splits[0]) == 0); + CU_ASSERT(odp_packet_concat(&pkt, splits[0]) >= 0); _packet_compare_offset(pkt, 0, segmented_test_packet, 0, pkt_len / 2); _packet_compare_offset(pkt, pkt_len / 2, segmented_test_packet, pkt_len / 2, pkt_len / 2); @@ -1279,9 +1279,9 @@ void packet_test_concatsplit(void) CU_ASSERT(odp_packet_len(splits[0]) + odp_packet_len(splits[1]) + odp_packet_len(splits[2]) + odp_packet_len(pkt) == pkt_len);
- CU_ASSERT(odp_packet_concat(&pkt, splits[2]) == 0); - CU_ASSERT(odp_packet_concat(&pkt, splits[1]) == 0); - CU_ASSERT(odp_packet_concat(&pkt, splits[0]) == 0); + CU_ASSERT(odp_packet_concat(&pkt, splits[2]) >= 0); + CU_ASSERT(odp_packet_concat(&pkt, splits[1]) >= 0); + CU_ASSERT(odp_packet_concat(&pkt, splits[0]) >= 0);
CU_ASSERT(odp_packet_len(pkt) == odp_packet_len(segmented_test_packet)); _packet_compare_data(pkt, segmented_test_packet);
commit 970f7a4ae91932c49d1b9dc00bfa861f7f2a0197 Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 22 16:33:04 2016 +0200
linux-gen: packet: improve packet print
Added segmentation and head-/tailroom information to the packet print output.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index 10fbded..2d9e3e6 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -1395,6 +1395,7 @@ int odp_packet_move_data(odp_packet_t pkt, uint32_t dst_offset,
void odp_packet_print(odp_packet_t pkt) { + odp_packet_seg_t seg; int max_len = 512; char str[max_len]; int len = 0; @@ -1421,6 +1422,25 @@ void odp_packet_print(odp_packet_t pkt) len += snprintf(&str[len], n - len, " input %" PRIu64 "\n", odp_pktio_to_u64(hdr->input)); + len += snprintf(&str[len], n - len, + " headroom %" PRIu32 "\n", + odp_packet_headroom(pkt)); + len += snprintf(&str[len], n - len, + " tailroom %" PRIu32 "\n", + odp_packet_tailroom(pkt)); + len += snprintf(&str[len], n - len, + " num_segs %i\n", odp_packet_num_segs(pkt)); + + seg = odp_packet_first_seg(pkt); + + while (seg != ODP_PACKET_SEG_INVALID) { + len += snprintf(&str[len], n - len, + " seg_len %" PRIu32 "\n", + odp_packet_seg_data_len(pkt, seg)); + + seg = odp_packet_next_seg(pkt, seg); + } + str[len] = '\0';
ODP_PRINT("\n%s\n", str);
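The same segment iteration used by the print function above can be used by applications to inspect packet layout. A minimal sketch that sums the per-segment data lengths; the total should equal odp_packet_len().

#include <stdint.h>
#include <odp_api.h>

/* Walks all segments of 'pkt' and returns the sum of their data lengths. */
static uint32_t sum_seg_len(odp_packet_t pkt)
{
	uint32_t total = 0;
	odp_packet_seg_t seg = odp_packet_first_seg(pkt);

	while (seg != ODP_PACKET_SEG_INVALID) {
		total += odp_packet_seg_data_len(pkt, seg);
		seg = odp_packet_next_seg(pkt, seg);
	}

	return total;
}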
commit 8ae51b0364e25e45eddb4cf2e175b269c0736436 Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 22 16:33:03 2016 +0200
linux-gen: packet: fix bug in tailroom calculation
Tailroom is calculated from the end of the last segment, not from the first.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index 0d3fd05..10fbded 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -74,6 +74,15 @@ static inline void *packet_tail(odp_packet_hdr_t *pkt_hdr) return pkt_hdr->buf_hdr.seg[last].data + seg_len; }
+static inline uint32_t seg_tailroom(odp_packet_hdr_t *pkt_hdr, int seg) +{ + uint32_t seg_len = pkt_hdr->buf_hdr.seg[seg].len; + odp_buffer_hdr_t *hdr = pkt_hdr->buf_hdr.seg[seg].hdr; + uint8_t *tail = pkt_hdr->buf_hdr.seg[seg].data + seg_len; + + return hdr->buf_end - tail; +} + static inline void push_head(odp_packet_hdr_t *pkt_hdr, uint32_t len) { pkt_hdr->headroom -= len; @@ -383,11 +392,12 @@ static inline odp_packet_hdr_t *free_segments(odp_packet_hdr_t *pkt_hdr, uint32_t pull_len, int head) { int i; - odp_packet_hdr_t *new_hdr; odp_buffer_t buf[num]; int n = pkt_hdr->buf_hdr.segcount - num;
if (head) { + odp_packet_hdr_t *new_hdr; + for (i = 0; i < num; i++) buf[i] = buffer_handle(pkt_hdr->buf_hdr.seg[i].hdr);
@@ -413,8 +423,7 @@ static inline odp_packet_hdr_t *free_segments(odp_packet_hdr_t *pkt_hdr, * of the metadata. */ pkt_hdr->buf_hdr.segcount = n; pkt_hdr->frame_len -= free_len; - pkt_hdr->tailroom = pkt_hdr->buf_hdr.buf_end - - (uint8_t *)packet_tail(pkt_hdr); + pkt_hdr->tailroom = seg_tailroom(pkt_hdr, n - 1);
pull_tail(pkt_hdr, pull_len); }
commit 76543549b422a53dde44de5900071554f65aa212 Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 22 16:33:02 2016 +0200
api: packet: src and dst packet must not be the same
Concat and copy_from_pkt operations must not be called with src and dst packet handles that refer to the same packet.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/packet.h b/include/odp/api/spec/packet.h index faf62e2..4a86eba 100644 --- a/include/odp/api/spec/packet.h +++ b/include/odp/api/spec/packet.h @@ -781,7 +781,8 @@ uint32_t odp_packet_seg_data_len(odp_packet_t pkt, odp_packet_seg_t seg); * Concatenate all packet data from 'src' packet into tail of 'dst' packet. * Operation preserves 'dst' packet metadata in the resulting packet, * while 'src' packet handle, metadata and old segment handles for both packets - * become invalid. + * become invalid. Source and destination packet handles must not refer to + * the same packet. * * A successful operation overwrites 'dst' packet handle with a new handle, * which application must use as the reference to the resulting packet @@ -928,6 +929,9 @@ int odp_packet_copy_from_mem(odp_packet_t pkt, uint32_t offset, * Copy 'len' bytes of data from 'src' packet to 'dst' packet. Copy starts from * the specified source and destination packet offsets. Copied areas * (offset ... offset + len) must not exceed their packet data lengths. + * Source and destination packet handles must not refer to the same packet (use + * odp_packet_copy_data() or odp_packet_move_data() for a single packet). + * * Packet is not modified on an error. * * @param dst Destination packet handle
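Per the clarified API text above, copying within a single packet must use odp_packet_copy_data() or odp_packet_move_data() rather than odp_packet_copy_from_pkt(). A small sketch under the assumption that the two regions do not overlap and stay within the packet data length; the wrapper name is hypothetical.

#include <stdint.h>
#include <odp_api.h>

/* Duplicate 'len' bytes inside one packet. copy_from_pkt() would be
 * invalid here because src and dst would be the same packet; copy_data()
 * additionally requires the regions not to overlap. */
static int duplicate_region(odp_packet_t pkt, uint32_t dst_offset,
			    uint32_t src_offset, uint32_t len)
{
	return odp_packet_copy_data(pkt, dst_offset, src_offset, len);
}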
commit 2c060a9d3b9f067f4cca2094be845e54392077ec Author: Balasubramanian Manoharan bala.manoharan@linaro.org Date: Fri Nov 4 23:04:30 2016 +0530
api: pktio: adds further definition for classification configuration
Updates classification configuration documentation.
Signed-off-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/packet_io.h b/include/odp/api/spec/packet_io.h index d46e405..2dabfcc 100644 --- a/include/odp/api/spec/packet_io.h +++ b/include/odp/api/spec/packet_io.h @@ -189,12 +189,11 @@ typedef struct odp_pktin_queue_param_t {
/** Number of input queues to be created * - * When classifier is enabled the number of queues may be zero - * (in odp_pktin_queue_config() step), otherwise at least one - * queue is required. More than one input queues require either flow - * hashing or classifier enabled. The maximum value is defined by - * pktio capability 'max_input_queues'. Queue type is defined by the - * input mode. The default value is 1. */ + * When classifier is enabled in odp_ipktin_queue_config() this + * value is ignored, otherwise at least one queue is required. + * More than one input queues require flow hashing configured. + * The maximum value is defined by pktio capability 'max_input_queues'. + * Queue type is defined by the input mode. The default value is 1. */ unsigned num_queues;
/** Queue parameters @@ -202,7 +201,9 @@ typedef struct odp_pktin_queue_param_t { * These are used for input queue creation in ODP_PKTIN_MODE_QUEUE * or ODP_PKTIN_MODE_SCHED modes. Scheduler parameters are considered * only in ODP_PKTIN_MODE_SCHED mode. Default values are defined in - * odp_queue_param_t documentation. */ + * odp_queue_param_t documentation. + * When classifier is enabled in odp_pktin_queue_config() this + * value is ignored. */ odp_queue_param_t queue_param;
} odp_pktin_queue_param_t; @@ -887,6 +888,8 @@ int odp_pktio_mac_addr(odp_pktio_t pktio, void *mac_addr, int size); * * @retval 0 on success * @retval <0 on failure + * + * @note The default_cos has to be unique per odp_pktio_t instance. */ int odp_pktio_default_cos_set(odp_pktio_t pktio, odp_cos_t default_cos);
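A hedged sketch of the configuration rule documented above: when the classifier is enabled, num_queues and queue_param are ignored by odp_pktin_queue_config(); otherwise multiple input queues require flow hashing. The pktio handle, queue count and hash protocol choice are illustrative only.

#include <odp.h>

static int config_pktin(odp_pktio_t pktio, odp_bool_t use_classifier)
{
	odp_pktin_queue_param_t param;

	odp_pktin_queue_param_init(&param);

	if (use_classifier) {
		param.classifier_enable = 1;
		/* num_queues and queue_param are ignored in this case */
	} else {
		param.hash_enable = 1;               /* needed for num_queues > 1 */
		param.hash_proto.proto.ipv4_udp = 1;
		param.num_queues = 4;                /* <= capability max_input_queues */
	}

	return odp_pktin_queue_config(pktio, &param);
}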
commit b6e1fea7ef8a443579c8f197c96ca2acc7c0577d Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 8 13:04:23 2016 +0200
linux-gen: schedule_sp: use ring as priority queue
Improve scalability by replacing the lock-protected linked list with a ring. Schedule group support was also updated, since the ring does not support peeking at the head item.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Yi He yi.he@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c index 76d1357..5150d28 100644 --- a/platform/linux-generic/odp_schedule_sp.c +++ b/platform/linux-generic/odp_schedule_sp.c @@ -13,9 +13,12 @@ #include <odp_debug_internal.h> #include <odp_align_internal.h> #include <odp_config_internal.h> +#include <odp_ring_internal.h>
+#define NUM_THREAD ODP_THREAD_COUNT_MAX #define NUM_QUEUE ODP_CONFIG_QUEUES #define NUM_PKTIO ODP_CONFIG_PKTIO_ENTRIES +#define NUM_ORDERED_LOCKS 1 #define NUM_PRIO 3 #define NUM_STATIC_GROUP 3 #define NUM_GROUP (NUM_STATIC_GROUP + 9) @@ -28,9 +31,17 @@ #define GROUP_ALL ODP_SCHED_GROUP_ALL #define GROUP_WORKER ODP_SCHED_GROUP_WORKER #define GROUP_CONTROL ODP_SCHED_GROUP_CONTROL -#define MAX_ORDERED_LOCKS_PER_QUEUE 1 +#define GROUP_PKTIN GROUP_ALL
-ODP_STATIC_ASSERT(MAX_ORDERED_LOCKS_PER_QUEUE <= CONFIG_QUEUE_MAX_ORD_LOCKS, +/* Maximum number of commands: one priority/group for all queues and pktios */ +#define RING_SIZE (ODP_ROUNDUP_POWER_2(NUM_QUEUE + NUM_PKTIO)) +#define RING_MASK (RING_SIZE - 1) + +/* Ring size must be power of two */ +ODP_STATIC_ASSERT(ODP_VAL_IS_POWER_2(RING_SIZE), + "Ring_size_is_not_power_of_two"); + +ODP_STATIC_ASSERT(NUM_ORDERED_LOCKS <= CONFIG_QUEUE_MAX_ORD_LOCKS, "Too_many_ordered_locks");
struct sched_cmd_t; @@ -38,6 +49,7 @@ struct sched_cmd_t; struct sched_cmd_s { struct sched_cmd_t *next; uint32_t index; + uint32_t ring_idx; int type; int prio; int group; @@ -52,38 +64,49 @@ typedef struct sched_cmd_t { sizeof(struct sched_cmd_s)]; } sched_cmd_t ODP_ALIGNED_CACHE;
-struct prio_queue_s { - odp_ticketlock_t lock; - sched_cmd_t *head; - sched_cmd_t *tail; -}; +typedef struct { + /* Ring header */ + ring_t ring; + + /* Ring data: queue indexes */ + uint32_t ring_idx[RING_SIZE];
-typedef struct prio_queue_t { - struct prio_queue_s s; - uint8_t pad[ROUNDUP_CACHE(sizeof(struct prio_queue_s)) - - sizeof(struct prio_queue_s)]; } prio_queue_t ODP_ALIGNED_CACHE;
-struct sched_group_s { - odp_ticketlock_t lock; +typedef struct thr_group_t { + /* A generation counter for fast comparison if groups have changed */ + odp_atomic_u32_t gen_cnt;
- struct { - char name[ODP_SCHED_GROUP_NAME_LEN + 1]; - odp_thrmask_t mask; - int allocated; - } group[NUM_GROUP]; -}; + /* Number of groups the thread belongs to */ + int num_group; + + /* The groups the thread belongs to */ + int group[NUM_GROUP]; + +} thr_group_t;
typedef struct sched_group_t { - struct sched_group_s s; - uint8_t pad[ROUNDUP_CACHE(sizeof(struct sched_group_s)) - - sizeof(struct sched_group_s)]; + struct { + odp_ticketlock_t lock; + + /* All groups */ + struct { + char name[ODP_SCHED_GROUP_NAME_LEN + 1]; + odp_thrmask_t mask; + int allocated; + } group[NUM_GROUP]; + + /* Per thread group information */ + thr_group_t thr[NUM_THREAD]; + + } s; + } sched_group_t ODP_ALIGNED_CACHE;
typedef struct { sched_cmd_t queue_cmd[NUM_QUEUE]; sched_cmd_t pktio_cmd[NUM_PKTIO]; - prio_queue_t prio_queue[NUM_PRIO]; + prio_queue_t prio_queue[NUM_GROUP][NUM_PRIO]; sched_group_t sched_group; } sched_global_t;
@@ -91,14 +114,37 @@ typedef struct { sched_cmd_t *cmd; int pause; int thr_id; + uint32_t gen_cnt; + int num_group; + int group[NUM_GROUP]; } sched_local_t;
static sched_global_t sched_global; static __thread sched_local_t sched_local;
+static inline uint32_t index_to_ring_idx(int pktio, uint32_t index) +{ + if (pktio) + return (0x80000000 | index); + + return index; +} + +static inline uint32_t index_from_ring_idx(uint32_t *index, uint32_t ring_idx) +{ + uint32_t pktio = ring_idx & 0x80000000; + + if (pktio) + *index = ring_idx & (~0x80000000); + else + *index = ring_idx; + + return pktio; +} + static int init_global(void) { - int i; + int i, j; sched_group_t *sched_group = &sched_global.sched_group;
ODP_DBG("Using SP scheduler\n"); @@ -106,21 +152,28 @@ static int init_global(void) memset(&sched_global, 0, sizeof(sched_global_t));
for (i = 0; i < NUM_QUEUE; i++) { - sched_global.queue_cmd[i].s.type = CMD_QUEUE; - sched_global.queue_cmd[i].s.index = i; + sched_global.queue_cmd[i].s.type = CMD_QUEUE; + sched_global.queue_cmd[i].s.index = i; + sched_global.queue_cmd[i].s.ring_idx = index_to_ring_idx(0, i); }
for (i = 0; i < NUM_PKTIO; i++) { - sched_global.pktio_cmd[i].s.type = CMD_PKTIO; - sched_global.pktio_cmd[i].s.index = i; - sched_global.pktio_cmd[i].s.prio = PKTIN_PRIO; + sched_global.pktio_cmd[i].s.type = CMD_PKTIO; + sched_global.pktio_cmd[i].s.index = i; + sched_global.pktio_cmd[i].s.ring_idx = index_to_ring_idx(1, i); + sched_global.pktio_cmd[i].s.prio = PKTIN_PRIO; + sched_global.pktio_cmd[i].s.group = GROUP_PKTIN; }
- for (i = 0; i < NUM_PRIO; i++) - odp_ticketlock_init(&sched_global.prio_queue[i].s.lock); + for (i = 0; i < NUM_GROUP; i++) + for (j = 0; j < NUM_PRIO; j++) + ring_init(&sched_global.prio_queue[i][j].ring);
odp_ticketlock_init(&sched_group->s.lock);
+ for (i = 0; i < NUM_THREAD; i++) + odp_atomic_init_u32(&sched_group->s.thr[i].gen_cnt, 0); + strncpy(sched_group->s.group[GROUP_ALL].name, "__group_all", ODP_SCHED_GROUP_NAME_LEN); odp_thrmask_zero(&sched_group->s.group[GROUP_ALL].mask); @@ -168,7 +221,48 @@ static int term_local(void)
static unsigned max_ordered_locks(void) { - return MAX_ORDERED_LOCKS_PER_QUEUE; + return NUM_ORDERED_LOCKS; +} + +static void add_group(sched_group_t *sched_group, int thr, int group) +{ + int num; + uint32_t gen_cnt; + thr_group_t *thr_group = &sched_group->s.thr[thr]; + + num = thr_group->num_group; + thr_group->group[num] = group; + thr_group->num_group = num + 1; + gen_cnt = odp_atomic_load_u32(&thr_group->gen_cnt); + odp_atomic_store_u32(&thr_group->gen_cnt, gen_cnt + 1); +} + +static void remove_group(sched_group_t *sched_group, int thr, int group) +{ + int i, num; + int found = 0; + thr_group_t *thr_group = &sched_group->s.thr[thr]; + + num = thr_group->num_group; + + for (i = 0; i < num; i++) { + if (thr_group->group[i] == group) { + found = 1; + + for (; i < num - 1; i++) + thr_group->group[i] = thr_group->group[i + 1]; + + break; + } + } + + if (found) { + uint32_t gen_cnt; + + thr_group->num_group = num - 1; + gen_cnt = odp_atomic_load_u32(&thr_group->gen_cnt); + odp_atomic_store_u32(&thr_group->gen_cnt, gen_cnt + 1); + } }
static int thr_add(odp_schedule_group_t group, int thr) @@ -178,6 +272,9 @@ static int thr_add(odp_schedule_group_t group, int thr) if (group < 0 || group >= NUM_GROUP) return -1;
+ if (thr < 0 || thr >= NUM_THREAD) + return -1; + odp_ticketlock_lock(&sched_group->s.lock);
if (!sched_group->s.group[group].allocated) { @@ -186,6 +283,7 @@ static int thr_add(odp_schedule_group_t group, int thr) }
odp_thrmask_set(&sched_group->s.group[group].mask, thr); + add_group(sched_group, thr, group);
odp_ticketlock_unlock(&sched_group->s.lock);
@@ -208,6 +306,8 @@ static int thr_rem(odp_schedule_group_t group, int thr)
odp_thrmask_clr(&sched_group->s.group[group].mask, thr);
+ remove_group(sched_group, thr, group); + odp_ticketlock_unlock(&sched_group->s.lock);
return 0; @@ -250,51 +350,34 @@ static void destroy_queue(uint32_t qi) static inline void add_tail(sched_cmd_t *cmd) { prio_queue_t *prio_queue; + int group = cmd->s.group; + int prio = cmd->s.prio; + uint32_t idx = cmd->s.ring_idx;
- prio_queue = &sched_global.prio_queue[cmd->s.prio]; - cmd->s.next = NULL; - - odp_ticketlock_lock(&prio_queue->s.lock); - - if (prio_queue->s.head == NULL) - prio_queue->s.head = cmd; - else - prio_queue->s.tail->s.next = cmd; - - prio_queue->s.tail = cmd; + prio_queue = &sched_global.prio_queue[group][prio];
- odp_ticketlock_unlock(&prio_queue->s.lock); + ring_enq(&prio_queue->ring, RING_MASK, idx); }
-static inline sched_cmd_t *rem_head(int prio) +static inline sched_cmd_t *rem_head(int group, int prio) { prio_queue_t *prio_queue; - sched_cmd_t *cmd; - - prio_queue = &sched_global.prio_queue[prio]; + uint32_t ring_idx, index; + int pktio;
- odp_ticketlock_lock(&prio_queue->s.lock); + prio_queue = &sched_global.prio_queue[group][prio];
- if (prio_queue->s.head == NULL) { - cmd = NULL; - } else { - sched_group_t *sched_group = &sched_global.sched_group; + ring_idx = ring_deq(&prio_queue->ring, RING_MASK);
- cmd = prio_queue->s.head; + if (ring_idx == RING_EMPTY) + return NULL;
- /* Remove head cmd only if thread belongs to the - * scheduler group. Otherwise continue to the next priority - * queue. */ - if (odp_thrmask_isset(&sched_group->s.group[cmd->s.group].mask, - sched_local.thr_id)) - prio_queue->s.head = cmd->s.next; - else - cmd = NULL; - } + pktio = index_from_ring_idx(&index, ring_idx);
- odp_ticketlock_unlock(&prio_queue->s.lock); + if (pktio) + return &sched_global.pktio_cmd[index];
- return cmd; + return &sched_global.queue_cmd[index]; }
static int sched_queue(uint32_t qi) @@ -341,15 +424,43 @@ static void pktio_start(int pktio_index, int num, int pktin_idx[]) add_tail(cmd); }
-static inline sched_cmd_t *sched_cmd(int num_prio) +static inline sched_cmd_t *sched_cmd(void) { - int prio; + int prio, i; + int thr = sched_local.thr_id; + sched_group_t *sched_group = &sched_global.sched_group; + thr_group_t *thr_group = &sched_group->s.thr[thr]; + uint32_t gen_cnt; + + /* There's no matching store_rel since the value is updated while + * holding a lock */ + gen_cnt = odp_atomic_load_acq_u32(&thr_group->gen_cnt); + + /* Check if groups have changed and need to be read again */ + if (odp_unlikely(gen_cnt != sched_local.gen_cnt)) { + int num_grp; + + odp_ticketlock_lock(&sched_group->s.lock); + + num_grp = thr_group->num_group; + gen_cnt = odp_atomic_load_u32(&thr_group->gen_cnt);
- for (prio = 0; prio < num_prio; prio++) { - sched_cmd_t *cmd = rem_head(prio); + for (i = 0; i < num_grp; i++) + sched_local.group[i] = thr_group->group[i];
- if (cmd) - return cmd; + odp_ticketlock_unlock(&sched_group->s.lock); + + sched_local.num_group = num_grp; + sched_local.gen_cnt = gen_cnt; + } + + for (i = 0; i < sched_local.num_group; i++) { + for (prio = 0; prio < NUM_PRIO; prio++) { + sched_cmd_t *cmd = rem_head(sched_local.group[i], prio); + + if (cmd) + return cmd; + } }
return NULL; @@ -382,7 +493,7 @@ static int schedule_multi(odp_queue_t *from, uint64_t wait, uint32_t qi; int num;
- cmd = sched_cmd(NUM_PRIO); + cmd = sched_cmd();
if (cmd && cmd->s.type == CMD_PKTIO) { if (sched_cb_pktin_poll(cmd->s.index, cmd->s.num_pktin, @@ -565,11 +676,14 @@ static odp_schedule_group_t schedule_group_lookup(const char *name) static int schedule_group_join(odp_schedule_group_t group, const odp_thrmask_t *thrmask) { + int thr; sched_group_t *sched_group = &sched_global.sched_group;
if (group < 0 || group >= NUM_GROUP) return -1;
+ thr = odp_thrmask_first(thrmask); + odp_ticketlock_lock(&sched_group->s.lock);
if (!sched_group->s.group[group].allocated) { @@ -581,6 +695,11 @@ static int schedule_group_join(odp_schedule_group_t group, &sched_group->s.group[group].mask, thrmask);
+ while (thr >= 0) { + add_group(sched_group, thr, group); + thr = odp_thrmask_next(thrmask, thr); + } + odp_ticketlock_unlock(&sched_group->s.lock);
return 0; @@ -589,6 +708,7 @@ static int schedule_group_join(odp_schedule_group_t group, static int schedule_group_leave(odp_schedule_group_t group, const odp_thrmask_t *thrmask) { + int thr; sched_group_t *sched_group = &sched_global.sched_group; odp_thrmask_t *all = &sched_group->s.group[GROUP_ALL].mask; odp_thrmask_t not; @@ -596,6 +716,8 @@ static int schedule_group_leave(odp_schedule_group_t group, if (group < 0 || group >= NUM_GROUP) return -1;
+ thr = odp_thrmask_first(thrmask); + odp_ticketlock_lock(&sched_group->s.lock);
if (!sched_group->s.group[group].allocated) { @@ -608,6 +730,11 @@ static int schedule_group_leave(odp_schedule_group_t group, &sched_group->s.group[group].mask, ¬);
+ while (thr >= 0) { + remove_group(sched_group, thr, group); + thr = odp_thrmask_next(thrmask, thr); + } + odp_ticketlock_unlock(&sched_group->s.lock);
return 0;
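The patch above stores a single uint32_t per command in the ring and uses bit 31 to distinguish pktio commands from queue commands. The standalone sketch below illustrates that encoding round trip; it is not the scheduler code itself, just a demonstration of the technique.

#include <assert.h>
#include <stdint.h>

#define PKTIO_BIT 0x80000000u

static uint32_t to_ring_idx(int pktio, uint32_t index)
{
	/* pktio commands get bit 31 set, queue commands use the raw index */
	return pktio ? (PKTIO_BIT | index) : index;
}

static int from_ring_idx(uint32_t *index, uint32_t ring_idx)
{
	*index = ring_idx & ~PKTIO_BIT;
	return (ring_idx & PKTIO_BIT) != 0;
}

int main(void)
{
	uint32_t index;

	/* queue command 7 and pktio command 7 land in distinct encodings */
	assert(to_ring_idx(0, 7) != to_ring_idx(1, 7));
	assert(from_ring_idx(&index, to_ring_idx(1, 7)) && index == 7);
	return 0;
}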
commit fe182fb0a97c1989747ae96b401a10d34c878480 Author: Christophe Milard christophe.milard@linaro.org Date: Tue Dec 6 18:25:32 2016 +0100
linux-gen: _ishm: unlinking files asap for cleaner termination
_ishm now unlinks the created files as soon as possible, reducing the chance of seeing left-overs if ODP terminates abnormally. This does not provide a 100% guarantee: if we are unlucky enough, ODP may be killed between open() and unlink(). This method also excludes exported files (flag _ODP_ISHM_EXPORT), whose names must remain visible in the file system.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/_ishm.c b/platform/linux-generic/_ishm.c index a0188ad..33ef731 100644 --- a/platform/linux-generic/_ishm.c +++ b/platform/linux-generic/_ishm.c @@ -70,6 +70,7 @@ #include <sys/types.h> #include <inttypes.h> #include <sys/wait.h> +#include <libgen.h>
/* * Maximum number of internal shared memory blocks. @@ -159,6 +160,7 @@ typedef struct ishm_block { char exptname[ISHM_FILENAME_MAXLEN]; /* name of the export file */ uint32_t user_flags; /* any flags the user want to remember. */ uint32_t flags; /* block creation flags. */ + uint32_t external_fd:1; /* block FD was externally provided */ uint64_t user_len; /* length, as requested at reserve time. */ void *start; /* only valid if _ODP_ISHM_SINGLE_VA is set*/ uint64_t len; /* length. multiple of page size. 0 if free*/ @@ -452,15 +454,17 @@ static int create_file(int block_index, huge_flag_t huge, uint64_t len, ODP_ERR("ftruncate failed: fd=%d, err=%s.\n", fd, strerror(errno)); close(fd); + unlink(filename); return -1; }
- strncpy(new_block->filename, filename, ISHM_FILENAME_MAXLEN - 1);
/* if _ODP_ISHM_EXPORT is set, create a description file for * external ref: */ if (flags & _ODP_ISHM_EXPORT) { + strncpy(new_block->filename, filename, + ISHM_FILENAME_MAXLEN - 1); snprintf(new_block->exptname, ISHM_FILENAME_MAXLEN, ISHM_EXPTNAME_FORMAT, odp_global_data.main_pid, @@ -483,6 +487,8 @@ static int create_file(int block_index, huge_flag_t huge, uint64_t len, } } else { new_block->exptname[0] = 0; + /* remove the file from the filesystem, keeping its fd open */ + unlink(filename); }
return fd; @@ -814,6 +820,9 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, return -1; } new_block->huge = EXTERNAL; + new_block->external_fd = 1; + } else { + new_block->external_fd = 0; }
/* Otherwise, Try first huge pages when possible and needed: */ @@ -865,8 +874,9 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd,
/* if neither huge pages or normal pages works, we cannot proceed: */ if ((fd < 0) || (addr == NULL) || (len == 0)) { - if ((new_block->filename[0]) && (fd >= 0)) + if ((!new_block->external_fd) && (fd >= 0)) close(fd); + delete_file(new_block); odp_spinlock_unlock(&ishm_tbl->lock); ODP_ERR("_ishm_reserve failed.\n"); return -1;
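The change above relies on the standard POSIX open-then-unlink pattern: once the descriptor is held, unlink() removes the name from the filesystem while the backing memory stays usable until the fd and any mappings are released. A purely illustrative sketch, with a hypothetical path and length:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

static void *anon_backed_map(const char *path, size_t len)
{
	void *addr = MAP_FAILED;
	int fd = open(path, O_RDWR | O_CREAT, 0600);

	if (fd < 0)
		return NULL;

	if (ftruncate(fd, len) == 0)
		addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	unlink(path);   /* no leftover file even if the process dies later */
	close(fd);      /* the mapping remains valid until munmap() */

	return (addr == MAP_FAILED) ? NULL : addr;
}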
commit 82c67c0b1755f4d4e0f5b1df2e6356150cca4166 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:30 2016 +0100
test: linux-gen: api: shmem: test sharing memory between ODP instances
The platform tests in odp/test/linux-generic/validation/api/shmem are updated to test both ODP<->linux process memory sharing and ODP-to-ODP (different instances) memory sharing. shmem_linux is the main test process, and shmem_linux.c contains (at the top of the file) a flow chart of the test procedure.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/test/linux-generic/validation/api/shmem/.gitignore b/test/linux-generic/validation/api/shmem/.gitignore index 7627079..74195f5 100644 --- a/test/linux-generic/validation/api/shmem/.gitignore +++ b/test/linux-generic/validation/api/shmem/.gitignore @@ -1,2 +1,3 @@ shmem_linux -shmem_odp +shmem_odp1 +shmem_odp2 diff --git a/test/linux-generic/validation/api/shmem/Makefile.am b/test/linux-generic/validation/api/shmem/Makefile.am index 341747f..b0ae627 100644 --- a/test/linux-generic/validation/api/shmem/Makefile.am +++ b/test/linux-generic/validation/api/shmem/Makefile.am @@ -2,19 +2,27 @@ include ../Makefile.inc
#the main test program is shmem_linux, which, in turn, starts a shmem_odp: test_PROGRAMS = shmem_linux$(EXEEXT) -test_extra_PROGRAMS = shmem_odp$(EXEEXT) +test_extra_PROGRAMS = shmem_odp1$(EXEEXT) shmem_odp2$(EXEEXT) test_extradir = $(testdir)
#shmem_linux is stand alone, pure linux (no ODP): dist_shmem_linux_SOURCES = shmem_linux.c shmem_linux_LDFLAGS = $(AM_LDFLAGS) -lrt
-#shmem_odp is the odp part: -dist_shmem_odp_SOURCES = shmem_odp.c -shmem_odp_CFLAGS = $(AM_CFLAGS) \ +#shmem_odp1 and shmem_odp2 are the 2 ODP processes: +dist_shmem_odp1_SOURCES = shmem_odp1.c +shmem_odp1_CFLAGS = $(AM_CFLAGS) \ $(INCCUNIT_COMMON) \ $(INCODP) -shmem_odp_LDFLAGS = $(AM_LDFLAGS) -shmem_odp_LDADD = $(LIBCUNIT_COMMON) $(LIBODP) +shmem_odp1_LDFLAGS = $(AM_LDFLAGS) +shmem_odp1_LDADD = $(LIBCUNIT_COMMON) $(LIBODP)
-noinst_HEADERS = shmem_common.h shmem_linux.h shmem_odp.h +dist_shmem_odp2_SOURCES = shmem_odp2.c +shmem_odp2_CFLAGS = $(AM_CFLAGS) \ + $(INCCUNIT_COMMON) \ + $(INCODP) +shmem_odp2_LDFLAGS = $(AM_LDFLAGS) +shmem_odp2_LDADD = $(LIBCUNIT_COMMON) $(LIBODP) + + +noinst_HEADERS = shmem_common.h shmem_linux.h shmem_odp1.h shmem_odp2.h diff --git a/test/linux-generic/validation/api/shmem/shmem_linux.c b/test/linux-generic/validation/api/shmem/shmem_linux.c index 9ab0e2b..39473f3 100644 --- a/test/linux-generic/validation/api/shmem/shmem_linux.c +++ b/test/linux-generic/validation/api/shmem/shmem_linux.c @@ -5,8 +5,10 @@ */
/* this test makes sure that odp shared memory created with the ODP_SHM_PROC - * flag is visible under linux. It therefore checks both that the device - * name under /dev/shm is correct, and also checks that the memory contents + * flag is visible under linux, and checks that memory created with the + * ODP_SHM_EXPORT flag is visible by other ODP instances. + * It therefore checks both that the link + * name under /tmp is correct, and also checks that the memory contents * is indeed shared. * we want: * -the odp test to run using C UNIT @@ -15,18 +17,47 @@ * * To achieve this, the flow of operations is as follows: * - * linux process (main, non odp) | ODP process - * (shmem_linux.c) | (shmem_odp.c) + * linux process (main, non odp) | + * (shmem_linux.c) | + * | + * | * | * main() | - * forks odp process | allocate shmem - * wait for named pipe creation | populate shmem + * forks odp_app1 process | + * wait for named pipe creation | + * | + * | ODP_APP1 process + * | (shmem_odp1.c) + * | + * | allocate shmem + * | populate shmem * | create named pipe - * read shared memory | wait for test report in fifo + * | wait for test report in fifo... + * read shared memory | * check if memory contents is OK | - * if OK, write "S" in fifo, else "F" | report success or failure to C-Unit - * wait for child terminaison & status| terminate with usual F/S status + * If not OK, write "F" in fifo and | + * exit with failure code. | ------------------- + * | + * forks odp app2 process | ODP APP2 process + * wait for child terminaison & status| (shmem_odp2.c) + * | lookup ODP_APP1 shared memory, + * | check if memory contents is OK + * | Exit(0) on success, exit(1) on fail + * If child failed, write "F" in fifo | + * exit with failure code. | ------------------- + * | + * OK, write "S" in fifo, | + * wait for child terminaison & status| * terminate with same status as child| + * | ODP APP1 process + * | (shmem_odp1.c) + * | + * | ...(continued) + * | read S(success) or F(fail) from fifo + * | report success or failure to C-Unit + * | Exit(0) on success, exit(1) on fail + * wait for child terminaison & status | + * terminate with same status as child | * | * |/ * time @@ -49,7 +80,8 @@ #include "shmem_linux.h" #include "shmem_common.h"
-#define ODP_APP_NAME "shmem_odp" /* name of the odp program, in this dir */ +#define ODP_APP1_NAME "shmem_odp1" /* name of the odp1 program, in this dir */ +#define ODP_APP2_NAME "shmem_odp2" /* name of the odp2 program, in this dir */ #define DEVNAME_FMT "/tmp/odp-%" PRIu64 "-shm-%s" /* odp-<pid>-shm-<name> */ #define MAX_FIFO_WAIT 30 /* Max time waiting for the fifo (sec) */
@@ -117,7 +149,7 @@ void test_success(char *fifo_name, int fd, pid_t odp_app) /* write "Success" to the FIFO */ nb_char = write(fd, &result, sizeof(char)); close(fd); - /* wait for the odp app to terminate */ + /* wait for the odp app1 to terminate */ waitpid(odp_app, &status, 0); /* if the write failed, report an error anyway */ if (nb_char != 1) @@ -134,10 +166,10 @@ void test_failure(char *fifo_name, int fd, pid_t odp_app) int nb_char __attribute__((unused)); /*ignored: we fail anyway */
result = TEST_FAILURE; - /* write "Success" to the FIFO */ + /* write "Failure" to the FIFO */ nb_char = write(fd, &result, sizeof(char)); close(fd); - /* wait for the odp app to terminate */ + /* wait for the odp app1 to terminate */ waitpid(odp_app, &status, 0); unlink(fifo_name); exit(1); /* error */ @@ -146,36 +178,43 @@ void test_failure(char *fifo_name, int fd, pid_t odp_app) int main(int argc __attribute__((unused)), char *argv[]) { char prg_name[PATH_MAX]; - char odp_name[PATH_MAX]; + char odp_name1[PATH_MAX]; + char odp_name2[PATH_MAX]; int nb_sec; - uint64_t size; - pid_t odp_app; - char *odp_params = NULL; + int size; + pid_t odp_app1; + pid_t odp_app2; + char *odp_params1 = NULL; + char *odp_params2[3]; + char pid1[10]; char fifo_name[PATH_MAX]; /* fifo for linux->odp feedback */ int fifo_fd = -1; - char shm_devname[PATH_MAX];/* shared mem device name.*/ + char shm_filename[PATH_MAX];/* shared mem device name, under /dev/shm */ uint64_t len; uint32_t flags; uint32_t align; int shm_fd; test_shared_linux_data_t *addr; + int app2_status;
- /* odp app is in the same directory as this file: */ + /* odp_app1 is in the same directory as this file: */ strncpy(prg_name, argv[0], PATH_MAX - 1); - sprintf(odp_name, "%s/%s", dirname(prg_name), ODP_APP_NAME); + sprintf(odp_name1, "%s/%s", dirname(prg_name), ODP_APP1_NAME);
/* start the ODP application: */ - odp_app = fork(); - if (odp_app < 0) /* error */ + odp_app1 = fork(); + if (odp_app1 < 0) /* error */ exit(1);
- if (odp_app == 0) /* child */ - execv(odp_name, &odp_params); + if (odp_app1 == 0) { /* child */ + execv(odp_name1, &odp_params1); /* no return unless error */ + fprintf(stderr, "execv failed: %s\n", strerror(errno)); + }
/* wait max 30 sec for the fifo to be created by the ODP side. * Just die if time expire as there is no fifo to communicate * through... */ - sprintf(fifo_name, FIFO_NAME_FMT, odp_app); + sprintf(fifo_name, FIFO_NAME_FMT, odp_app1); for (nb_sec = 0; nb_sec < MAX_FIFO_WAIT; nb_sec++) { fifo_fd = open(fifo_name, O_WRONLY); if (fifo_fd >= 0) @@ -191,17 +230,17 @@ int main(int argc __attribute__((unused)), char *argv[]) * check to see if linux can see the created shared memory: */
/* read the shared memory attributes (includes the shm filename): */ - if (read_shmem_attribues(odp_app, ODP_SHM_NAME, - shm_devname, &len, &flags, &align) != 0) - test_failure(fifo_name, fifo_fd, odp_app); + if (read_shmem_attribues(odp_app1, ODP_SHM_NAME, + shm_filename, &len, &flags, &align) != 0) + test_failure(fifo_name, fifo_fd, odp_app1);
/* open the shm filename (which is either on /tmp or on hugetlbfs) * O_CREAT flag not given => failure if shm_devname does not already * exist */ - shm_fd = open(shm_devname, O_RDONLY, + shm_fd = open(shm_filename, O_RDONLY, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); if (shm_fd == -1) - test_failure(fifo_name, fifo_fd, odp_app); + test_failure(fifo_name, fifo_fd, odp_app1); /* no return */
/* linux ODP guarantees page size alignement. Larger alignment may * fail as 2 different processes will have fully unrelated @@ -210,12 +249,41 @@ int main(int argc __attribute__((unused)), char *argv[]) size = sizeof(test_shared_linux_data_t);
addr = mmap(NULL, size, PROT_READ, MAP_SHARED, shm_fd, 0); - if (addr == MAP_FAILED) - test_failure(fifo_name, fifo_fd, odp_app); + if (addr == MAP_FAILED) { + printf("shmem_linux: map failed!\n"); + test_failure(fifo_name, fifo_fd, odp_app1); + }
/* check that we see what the ODP application wrote in the memory */ - if ((addr->foo == TEST_SHARE_FOO) && (addr->bar == TEST_SHARE_BAR)) - test_success(fifo_name, fifo_fd, odp_app); - else - test_failure(fifo_name, fifo_fd, odp_app); + if ((addr->foo != TEST_SHARE_FOO) || (addr->bar != TEST_SHARE_BAR)) + test_failure(fifo_name, fifo_fd, odp_app1); /* no return */ + + /* odp_app2 is in the same directory as this file: */ + strncpy(prg_name, argv[0], PATH_MAX - 1); + sprintf(odp_name2, "%s/%s", dirname(prg_name), ODP_APP2_NAME); + + /* start the second ODP application with pid of ODP_APP1 as parameter:*/ + sprintf(pid1, "%d", odp_app1); + odp_params2[0] = odp_name2; + odp_params2[1] = pid1; + odp_params2[2] = NULL; + odp_app2 = fork(); + if (odp_app2 < 0) /* error */ + exit(1); + + if (odp_app2 == 0) { /* child */ + execv(odp_name2, odp_params2); /* no return unless error */ + fprintf(stderr, "execv failed: %s\n", strerror(errno)); + } + + /* wait for the second ODP application to terminate: + * status is OK if that second ODP application could see the + * memory shared by the first one. */ + waitpid(odp_app2, &app2_status, 0); + + if (app2_status) + test_failure(fifo_name, fifo_fd, odp_app1); /* no return */ + + /* everything looked good: */ + test_success(fifo_name, fifo_fd, odp_app1); } diff --git a/test/linux-generic/validation/api/shmem/shmem_odp.c b/test/linux-generic/validation/api/shmem/shmem_odp1.c similarity index 81% rename from test/linux-generic/validation/api/shmem/shmem_odp.c rename to test/linux-generic/validation/api/shmem/shmem_odp1.c index a1f750f..3869c2e 100644 --- a/test/linux-generic/validation/api/shmem/shmem_odp.c +++ b/test/linux-generic/validation/api/shmem/shmem_odp1.c @@ -13,7 +13,7 @@ #include <fcntl.h>
#include <odp_cunit_common.h> -#include "shmem_odp.h" +#include "shmem_odp1.h" #include "shmem_common.h"
#define TEST_SHARE_FOO (0xf0f0f0f0) @@ -27,9 +27,10 @@ void shmem_test_odp_shm_proc(void) test_shared_data_t *test_shared_data; char test_result;
+ /* reminder: ODP_SHM_PROC => export to linux, ODP_SHM_EXPORT=>to odp */ shm = odp_shm_reserve(ODP_SHM_NAME, sizeof(test_shared_data_t), - ALIGN_SIZE, ODP_SHM_PROC); + ALIGN_SIZE, ODP_SHM_PROC | ODP_SHM_EXPORT); CU_ASSERT_FATAL(ODP_SHM_INVALID != shm); test_shared_data = odp_shm_addr(shm); CU_ASSERT_FATAL(NULL != test_shared_data); @@ -39,15 +40,18 @@ void shmem_test_odp_shm_proc(void) odp_mb_full();
/* open the fifo: this will indicate to linux process that it can - * start the shmem lookup and check if it sees the data */ + * start the shmem lookups and check if it sees the data */ sprintf(fifo_name, FIFO_NAME_FMT, getpid()); CU_ASSERT_FATAL(mkfifo(fifo_name, 0666) == 0);
/* read from the fifo: the linux process result: */ + printf("shmem_odp1: opening fifo: %s\n", fifo_name); fd = open(fifo_name, O_RDONLY); CU_ASSERT_FATAL(fd >= 0);
+ printf("shmem_odp1: reading fifo: %s\n", fifo_name); CU_ASSERT(read(fd, &test_result, sizeof(char)) == 1); + printf("shmem_odp1: closing fifo: %s\n", fifo_name); close(fd); CU_ASSERT_FATAL(test_result == TEST_SUCCESS);
diff --git a/test/linux-generic/validation/api/shmem/shmem_odp.h b/test/linux-generic/validation/api/shmem/shmem_odp1.h similarity index 100% copy from test/linux-generic/validation/api/shmem/shmem_odp.h copy to test/linux-generic/validation/api/shmem/shmem_odp1.h diff --git a/test/linux-generic/validation/api/shmem/shmem_odp2.c b/test/linux-generic/validation/api/shmem/shmem_odp2.c new file mode 100644 index 0000000..e39dc76 --- /dev/null +++ b/test/linux-generic/validation/api/shmem/shmem_odp2.c @@ -0,0 +1,95 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#include <odp.h> +#include <linux/limits.h> +#include <sys/types.h> +#include <unistd.h> +#include <stdio.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <stdlib.h> + +#include <odp_cunit_common.h> +#include "shmem_odp2.h" +#include "shmem_common.h" + +#define TEST_SHARE_FOO (0xf0f0f0f0) +#define TEST_SHARE_BAR (0xf0f0f0f) + +/* The C unit test harness is run by ODP1 app which will be told the return + * staus of this process. See top of shmem_linux.c for chart flow of events + */ +int main(int argc, char *argv[]) +{ + odp_instance_t odp1; + odp_instance_t odp2; + odp_shm_t shm; + test_shared_data_t *test_shared_data; + + /* odp init: */ + if (0 != odp_init_global(&odp2, NULL, NULL)) { + fprintf(stderr, "error: odp_init_global() failed.\n"); + return 1; + } + if (0 != odp_init_local(odp2, ODP_THREAD_CONTROL)) { + fprintf(stderr, "error: odp_init_local() failed.\n"); + return 1; + } + + /* test: map ODP1 memory and check its contents: + * The pid of the ODP instantiation process sharing its memory + * is given as first arg. In linux-generic ODP, this pid is actually + * the ODP instance */ + if (argc != 2) { + fprintf(stderr, "One single parameter expected, %d found.\n", + argc); + return 1; + } + odp1 = (odp_instance_t)atoi(argv[1]); + + printf("shmem_odp2: trying to grab %s from pid %d\n", + ODP_SHM_NAME, (int)odp1); + shm = odp_shm_import(ODP_SHM_NAME, odp1, ODP_SHM_NAME); + if (shm == ODP_SHM_INVALID) { + fprintf(stderr, "error: odp_shm_lookup_external failed.\n"); + return 1; + } + + test_shared_data = odp_shm_addr(shm); + if (test_shared_data == NULL) { + fprintf(stderr, "error: odp_shm_addr failed.\n"); + return 1; + } + + if (test_shared_data->foo != TEST_SHARE_FOO) { + fprintf(stderr, "error: Invalid data TEST_SHARE_FOO.\n"); + return 1; + } + + if (test_shared_data->bar != TEST_SHARE_BAR) { + fprintf(stderr, "error: Invalid data TEST_SHARE_BAR.\n"); + return 1; + } + + if (odp_shm_free(shm) != 0) { + fprintf(stderr, "error: odp_shm_free() failed.\n"); + return 1; + } + + /* odp term: */ + if (0 != odp_term_local()) { + fprintf(stderr, "error: odp_term_local() failed.\n"); + return 1; + } + + if (0 != odp_term_global(odp2)) { + fprintf(stderr, "error: odp_term_global() failed.\n"); + return 1; + } + + return 0; +} diff --git a/test/linux-generic/validation/api/shmem/shmem_odp.h b/test/linux-generic/validation/api/shmem/shmem_odp2.h similarity index 76% rename from test/linux-generic/validation/api/shmem/shmem_odp.h rename to test/linux-generic/validation/api/shmem/shmem_odp2.h index 614bbf8..a8db909 100644 --- a/test/linux-generic/validation/api/shmem/shmem_odp.h +++ b/test/linux-generic/validation/api/shmem/shmem_odp2.h @@ -4,4 +4,4 @@ * SPDX-License-Identifier: BSD-3-Clause */
-void shmem_test_odp_shm_proc(void); +int main(int argc, char *argv[]);
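A short sketch of the exporter/importer pair exercised by the test above: one instance reserves memory with ODP_SHM_EXPORT, and another maps it via odp_shm_import() given the exporter's instance handle (which, in linux-generic, is its pid). The block name, size and local name are illustrative only.

#include <odp.h>

/* In the exporting instance: */
static odp_shm_t export_block(void)
{
	return odp_shm_reserve("shared_block", 4096, ODP_CACHE_LINE_SIZE,
			       ODP_SHM_PROC | ODP_SHM_EXPORT);
}

/* In the importing instance, 'exporter' identifies the first instance: */
static void *import_block(odp_instance_t exporter)
{
	odp_shm_t shm = odp_shm_import("shared_block", exporter, "local_view");

	return (shm == ODP_SHM_INVALID) ? NULL : odp_shm_addr(shm);
}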
commit 80a30e1513a9614622e08657b94dca56db9e250f Author: Bill Fischofer bill.fischofer@linaro.org Date: Mon Dec 12 09:06:17 2016 -0600
doc: userguide: expand crypto documentation to cover random apis
Clean up the crypto section of the User Guide and expand on the ODP random data APIs.
Signed-off-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/doc/users-guide/users-guide-crypto.adoc b/doc/users-guide/users-guide-crypto.adoc index 04b3e87..c18e369 100644 --- a/doc/users-guide/users-guide-crypto.adoc +++ b/doc/users-guide/users-guide-crypto.adoc @@ -1,7 +1,8 @@ == Cryptographic services
ODP provides APIs to perform cryptographic operations required by various -communication protocols (e.g. IPSec). ODP cryptographic APIs are session based. +communication protocols (_e.g.,_ IPsec). ODP cryptographic APIs are session +based.
ODP provides APIs for following cryptographic services:
@@ -19,24 +20,26 @@ ODP supports synchronous and asynchronous crypto sessions. For asynchronous sessions, the output of crypto operation is posted in a queue defined as the completion queue in its session parameters.
-ODP crypto APIs support chained operation sessions in which hashing and ciphering -both can be achieved using a single session and operation call. The order of -cipher and hashing can be controlled by the `auth_cipher_text` session parameter. +ODP crypto APIs support chained operation sessions in which hashing and +ciphering both can be achieved using a single session and operation call. The +order of cipher and hashing can be controlled by the `auth_cipher_text` +session parameter.
Other Session parameters include algorithms, keys, initialization vector -(optional), encode or decode, output queue for async mode and output packet pool -for allocation of an output packet if required. +(optional), encode or decode, output queue for async mode and output packet +pool for allocation of an output packet if required.
=== Crypto operations
After session creation, a cryptographic operation can be applied to a packet using the `odp_crypto_operation()` API. Applications may indicate a preference -for synchronous or asynchronous processing in the session's `pref_mode` parameter. -However crypto operations may complete synchronously even if an asynchronous -preference is indicated, and applications must examine the `posted` output -parameter from `odp_crypto_operation()` to determine whether the operation has -completed or if an `ODP_EVENT_CRYPTO_COMPL` notification is expected. In the case -of an async operation, the `posted` output parameter will be set to true. +for synchronous or asynchronous processing in the session's `pref_mode` +parameter. However crypto operations may complete synchronously even if an +asynchronous preference is indicated, and applications must examine the +`posted` output parameter from `odp_crypto_operation()` to determine whether +the operation has completed or if an `ODP_EVENT_CRYPTO_COMPL` notification is +expected. In the case of an async operation, the `posted` output parameter +will be set to true.
The operation arguments specify for each packet the areas that are to be @@ -49,9 +52,9 @@ In case of out-of-place mode output packet is different from input packet as specified by the application, while in new buffer mode implementation allocates a new output buffer from the session’s output pool.
-The application can also specify a context associated with a given operation that -will be retained during async operation and can be retrieved via the completion -event. +The application can also specify a context associated with a given operation +that will be retained during async operation and can be retrieved via the +completion event.
Results of an asynchronous session will be posted as completion events to the session’s completion queue, which can be accessed directly or via the ODP @@ -60,12 +63,60 @@ result. The application has the responsibility to free the completion event.
=== Random number Generation
-ODP provides an API `odp_random_data()` to generate random data bytes. It has -an argument to specify whether to use system entropy source for random number -generation or not. +ODP provides two APIs to generate various kinds of random data bytes. Random +data is characterized by _kind_, which specifies the "quality" of the +randomness required. ODP support three kinds of random data: + +ODP_RANDOM_BASIC:: No specific requirement other than the data appear to be +uniformly distributed. Suitable for load-balancing or other non-cryptographic +use. + +ODP_RANDOM_CRYPTO:: Data suitable for cryptographic use. This is a more +stringent requirement that the data pass tests for statistical randomness. + +ODP_RANDOM_TRUE:: Data generated from a hardware entropy source rather than +any software generated pseudo-random data. May not be available on all +platforms. + +These form a hierarchy with BASIC being the lowest kind of random and TRUE +behing the highest. The main API for accessing random data is: + +[source,c] +----- +int32_t odp_random_data(uint8_t buf, uint32_t len, odp_random_kind_t kind); +----- + +The expectation is that lesser-quality random is easier and faster to generate +while higher-quality random may take more time. Implementations are always free +to substitute a higher kind of random than the one requested if they are able +to do so more efficiently, however calls must return a failure indicator +(rc < 0) if a higher kind of data is requested than the implementation can +provide. This is most likely the case for ODP_RANDOM_TRUE since not all +platforms have access to a true hardware random number generator. + +The `odp_random_max_kind()` API returns the highest kind of random data +available on this implementation. + +For testing purposes it is often desirable to generate repeatable sequences +of "random" data. To address this need ODP provides the additional API: + +[source,c] +----- +int32_t odp_random_test_data(uint8_t buf, uint32_t len, uint64_t *seed); +----- + +This operates the same as `odp_random_data()` except that it always returns +data of kind `ODP_RANDOM_BASIC` and an additional thread-local `seed` +parameter is provide that specifies a seed value to use in generating the +data. This value is updated on each call, so repeated calls with the same +variable will generate a sequence of random data starting from the initial +specified seed. If another sequence of calls is made starting with the same +initial seed value, then `odp_random_test_data()` will return the same +sequence of data bytes.
=== Capability inquiries
-ODP provides an API interface `odp_crypto_capability()` to inquire implementation’s -crypto capabilities. This interface returns a bitmask for supported algorithms -and hardware backed algorithms. +ODP provides the API `odp_crypto_capability()` to inquire the implementation’s +crypto capabilities. This interface returns a the maximum number of crypto +sessions supported as well as bitmasks for supported algorithms and hardware +backed algorithms. \ No newline at end of file
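A hedged example of the usage pattern the updated guide describes: query the strongest kind available with odp_random_max_kind(), then request data of that kind. The helper name is hypothetical.

#include <odp.h>
#include <stdio.h>

static int fill_random(uint8_t *buf, uint32_t len)
{
	odp_random_kind_t kind = odp_random_max_kind();
	int32_t n = odp_random_data(buf, len, kind);

	if (n < 0) {
		/* should not normally happen when requesting the max kind;
		 * a request for a stronger kind than supported would fail */
		printf("random data of kind %d not available\n", kind);
		return -1;
	}
	return ((uint32_t)n == len) ? 0 : -1;
}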
commit 8ab727c5f3a5b5aa556a84e04870f1f3fe3a073b Author: Bill Fischofer bill.fischofer@linaro.org Date: Mon Dec 12 09:06:16 2016 -0600
doc: userguide: move crypto documentation to its own sub-document
Signed-off-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/doc/users-guide/Makefile.am b/doc/users-guide/Makefile.am index a01c717..01b4df3 100644 --- a/doc/users-guide/Makefile.am +++ b/doc/users-guide/Makefile.am @@ -2,6 +2,7 @@ include ../Makefile.inc
SRC = $(top_srcdir)/doc/users-guide/users-guide.adoc \ $(top_srcdir)/doc/users-guide/users-guide-cls.adoc \ + $(top_srcdir)/doc/users-guide/users-guide-crypto.adoc \ $(top_srcdir)/doc/users-guide/users-guide-packet.adoc \ $(top_srcdir)/doc/users-guide/users-guide-pktio.adoc \ $(top_srcdir)/doc/users-guide/users-guide-timer.adoc \ diff --git a/doc/users-guide/users-guide-crypto.adoc b/doc/users-guide/users-guide-crypto.adoc new file mode 100644 index 0000000..04b3e87 --- /dev/null +++ b/doc/users-guide/users-guide-crypto.adoc @@ -0,0 +1,71 @@ +== Cryptographic services + +ODP provides APIs to perform cryptographic operations required by various +communication protocols (e.g. IPSec). ODP cryptographic APIs are session based. + +ODP provides APIs for following cryptographic services: + +* Ciphering +* Authentication/data integrity via Keyed-Hashing (HMAC) +* Random number generation +* Crypto capability inquiries + +=== Crypto Sessions + +To apply a cryptographic operation to a packet a session must be created. All +packets processed by a session share the parameters that define the session. + +ODP supports synchronous and asynchronous crypto sessions. For asynchronous +sessions, the output of crypto operation is posted in a queue defined as +the completion queue in its session parameters. + +ODP crypto APIs support chained operation sessions in which hashing and ciphering +both can be achieved using a single session and operation call. The order of +cipher and hashing can be controlled by the `auth_cipher_text` session parameter. + +Other Session parameters include algorithms, keys, initialization vector +(optional), encode or decode, output queue for async mode and output packet pool +for allocation of an output packet if required. + +=== Crypto operations + +After session creation, a cryptographic operation can be applied to a packet +using the `odp_crypto_operation()` API. Applications may indicate a preference +for synchronous or asynchronous processing in the session's `pref_mode` parameter. +However crypto operations may complete synchronously even if an asynchronous +preference is indicated, and applications must examine the `posted` output +parameter from `odp_crypto_operation()` to determine whether the operation has +completed or if an `ODP_EVENT_CRYPTO_COMPL` notification is expected. In the case +of an async operation, the `posted` output parameter will be set to true. + + +The operation arguments specify for each packet the areas that are to be +encrypted or decrypted and authenticated. Also, there is an option of overriding +the initialization vector specified in session parameters. + +An operation can be executed in in-place, out-of-place or new buffer mode. +In in-place mode output packet is same as the input packet. +In case of out-of-place mode output packet is different from input packet as +specified by the application, while in new buffer mode implementation allocates +a new output buffer from the session’s output pool. + +The application can also specify a context associated with a given operation that +will be retained during async operation and can be retrieved via the completion +event. + +Results of an asynchronous session will be posted as completion events to the +session’s completion queue, which can be accessed directly or via the ODP +scheduler. The completion event contains the status of the operation and the +result. The application has the responsibility to free the completion event. 
+ +=== Random number Generation + +ODP provides an API `odp_random_data()` to generate random data bytes. It has +an argument to specify whether to use system entropy source for random number +generation or not. + +=== Capability inquiries + +ODP provides an API interface `odp_crypto_capability()` to inquire implementation’s +crypto capabilities. This interface returns a bitmask for supported algorithms +and hardware backed algorithms. diff --git a/doc/users-guide/users-guide.adoc b/doc/users-guide/users-guide.adoc index 9a427fa..41c57d1 100755 --- a/doc/users-guide/users-guide.adoc +++ b/doc/users-guide/users-guide.adoc @@ -1018,77 +1018,7 @@ include::users-guide-pktio.adoc[]
include::users-guide-timer.adoc[]
-== Cryptographic services - -ODP provides APIs to perform cryptographic operations required by various -communication protocols (e.g. IPSec). ODP cryptographic APIs are session based. - -ODP provides APIs for following cryptographic services: - -* Ciphering -* Authentication/data integrity via Keyed-Hashing (HMAC) -* Random number generation -* Crypto capability inquiries - -=== Crypto Sessions - -To apply a cryptographic operation to a packet a session must be created. All -packets processed by a session share the parameters that define the session. - -ODP supports synchronous and asynchronous crypto sessions. For asynchronous -sessions, the output of crypto operation is posted in a queue defined as -the completion queue in its session parameters. - -ODP crypto APIs support chained operation sessions in which hashing and ciphering -both can be achieved using a single session and operation call. The order of -cipher and hashing can be controlled by the `auth_cipher_text` session parameter. - -Other Session parameters include algorithms, keys, initialization vector -(optional), encode or decode, output queue for async mode and output packet pool -for allocation of an output packet if required. - -=== Crypto operations - -After session creation, a cryptographic operation can be applied to a packet -using the `odp_crypto_operation()` API. Applications may indicate a preference -for synchronous or asynchronous processing in the session's `pref_mode` parameter. -However crypto operations may complete synchronously even if an asynchronous -preference is indicated, and applications must examine the `posted` output -parameter from `odp_crypto_operation()` to determine whether the operation has -completed or if an `ODP_EVENT_CRYPTO_COMPL` notification is expected. In the case -of an async operation, the `posted` output parameter will be set to true. - - -The operation arguments specify for each packet the areas that are to be -encrypted or decrypted and authenticated. Also, there is an option of overriding -the initialization vector specified in session parameters. - -An operation can be executed in in-place, out-of-place or new buffer mode. -In in-place mode output packet is same as the input packet. -In case of out-of-place mode output packet is different from input packet as -specified by the application, while in new buffer mode implementation allocates -a new output buffer from the session’s output pool. - -The application can also specify a context associated with a given operation that -will be retained during async operation and can be retrieved via the completion -event. - -Results of an asynchronous session will be posted as completion events to the -session’s completion queue, which can be accessed directly or via the ODP -scheduler. The completion event contains the status of the operation and the -result. The application has the responsibility to free the completion event. - -=== Random number Generation - -ODP provides an API `odp_random_data()` to generate random data bytes. It has -an argument to specify whether to use system entropy source for random number -generation or not. - -=== Capability inquiries - -ODP provides an API interface `odp_crypto_capability()` to inquire implementation’s -crypto capabilities. This interface returns a bitmask for supported algorithms -and hardware backed algorithms. +include::users-guide-crypto.adoc[]
include::users-guide-tm.adoc[]
commit 2b36166d647c64aa545e6ecc23a1d464fcd2c3c0 Author: Bill Fischofer bill.fischofer@linaro.org Date: Mon Dec 12 09:06:15 2016 -0600
api: random: add explicit controls over random data
Rework the odp_random_data() API to replace the use_entropy argument with an explicit odp_random_kind parameter that controls the type of random data desired. Two new APIs are also introduced:
- odp_random_max_kind() returns the maximum kind of random data available
- odp_random_test_data() permits applications to generate repeatable random sequences for testing purposes
Signed-off-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/random.h b/include/odp/api/spec/random.h index 00fa15b..4765475 100644 --- a/include/odp/api/spec/random.h +++ b/include/odp/api/spec/random.h @@ -24,18 +24,82 @@ extern "C" { */
/** + * Random kind selector + * + * The kind of random denotes the statistical quality of the random data + * returned. Basic random simply appears uniformly distributed, Cryptographic + * random is statistically random and suitable for use by cryptographic + * functions. True random is generated from a hardware entropy source rather + * than an algorithm and is thus completely unpredictable. These form a + * hierarchy where higher quality data is presumably more costly to generate + * than lower quality data. + */ +typedef enum { + /** Basic random, presumably pseudo-random generated by SW. This + * is the lowest kind of random */ + ODP_RANDOM_BASIC, + /** Cryptographic quality random */ + ODP_RANDOM_CRYPTO, + /** True random, generated from a HW entropy source. This is the + * highest kind of random */ + ODP_RANDOM_TRUE, +} odp_random_kind_t; + +/** + * Query random max kind + * + * Implementations support the returned max kind and all kinds weaker than it. + * + * @return kind The maximum odp_random_kind_t supported by this implementation + */ +odp_random_kind_t odp_random_max_kind(void); + +/** * Generate random byte data * + * The intent in supporting different kinds of random data is to allow + * tradeoffs between performance and the quality of random data needed. The + * assumption is that basic random is cheap while true random is relatively + * expensive in terms of time to generate, with cryptographic random being + * something in between. Implementations that support highly efficient true + * random are free to use this for all requested kinds. So it is always + * permissible to "upgrade" a random data request, but never to "downgrade" + * such requests. + * * @param[out] buf Output buffer - * @param size Size of output buffer - * @param use_entropy Use entropy + * @param len Length of output buffer in bytes + * @param kind Specifies the type of random data required. Request + * is expected to fail if the implementation is unable to + * provide the requested type. + * + * @return Number of bytes written + * @retval <0 on failure + */ +int32_t odp_random_data(uint8_t *buf, uint32_t len, odp_random_kind_t kind); + +/** + * Generate repeatable random data for testing purposes + * + * For testing purposes it is often useful to generate "random" sequences that + * are repeatable. This is accomplished by supplying a seed value that is used + * for pseudo-random data generation. The caller-provided seed value is + * updated for each call to continue the sequence. Restarting a series of + * calls with the same initial seed value will generate the same sequence of + * random test data. + * + * This function returns data of ODP_RANDOM_BASIC quality and should be used + * only for testing purposes. Use odp_random_data() for production. * - * @todo Define the implication of the use_entropy parameter + * @param[out] buf Output buffer + * @param len Length of output buffer in bytes + * @param[in,out] seed Seed value to use. This must be a thread-local + * variable. Results are undefined if multiple threads + * call this routine with the same seed variable. * * @return Number of bytes written * @retval <0 on failure */ -int32_t odp_random_data(uint8_t *buf, int32_t size, odp_bool_t use_entropy); +int32_t odp_random_test_data(uint8_t *buf, uint32_t len, uint64_t *seed);
/** * @} diff --git a/platform/linux-generic/odp_crypto.c b/platform/linux-generic/odp_crypto.c index 6b7d60e..5808d16 100644 --- a/platform/linux-generic/odp_crypto.c +++ b/platform/linux-generic/odp_crypto.c @@ -4,6 +4,7 @@ * SPDX-License-Identifier: BSD-3-Clause */
+#include <odp_posix_extensions.h> #include <odp/api/crypto.h> #include <odp_internal.h> #include <odp/api/atomic.h> @@ -19,6 +20,7 @@ #include <odp_packet_internal.h>
#include <string.h> +#include <stdlib.h>
#include <openssl/des.h> #include <openssl/rand.h> @@ -999,12 +1001,48 @@ int odp_crypto_term_global(void) return rc; }
-int32_t -odp_random_data(uint8_t *buf, int32_t len, odp_bool_t use_entropy ODP_UNUSED) +odp_random_kind_t odp_random_max_kind(void) { - int32_t rc; - rc = RAND_bytes(buf, len); - return (1 == rc) ? len /*success*/: -1 /*failure*/; + return ODP_RANDOM_CRYPTO; +} + +int32_t odp_random_data(uint8_t *buf, uint32_t len, odp_random_kind_t kind) +{ + int rc; + + switch (kind) { + case ODP_RANDOM_BASIC: + RAND_pseudo_bytes(buf, len); + return len; + + case ODP_RANDOM_CRYPTO: + rc = RAND_bytes(buf, len); + return (1 == rc) ? (int)len /*success*/: -1 /*failure*/; + + case ODP_RANDOM_TRUE: + default: + return -1; + } +} + +int32_t odp_random_test_data(uint8_t *buf, uint32_t len, uint64_t *seed) +{ + union { + uint32_t rand_word; + uint8_t rand_byte[4]; + } u; + uint32_t i = 0, j; + uint32_t seed32 = (*seed) & 0xffffffff; + + while (i < len) { + u.rand_word = rand_r(&seed32); + + for (j = 0; j < 4 && i < len; j++, i++) + *buf++ = u.rand_byte[j]; + } + + *seed = seed32; + return len; }
odp_crypto_compl_t odp_crypto_compl_from_event(odp_event_t ev) diff --git a/test/common_plat/validation/api/random/random.c b/test/common_plat/validation/api/random/random.c index 7572366..a0e2ef7 100644 --- a/test/common_plat/validation/api/random/random.c +++ b/test/common_plat/validation/api/random/random.c @@ -13,12 +13,58 @@ void random_test_get_size(void) int32_t ret; uint8_t buf[32];
- ret = odp_random_data(buf, sizeof(buf), false); + ret = odp_random_data(buf, sizeof(buf), ODP_RANDOM_BASIC); CU_ASSERT(ret == sizeof(buf)); }
+void random_test_kind(void) +{ + int32_t rc; + uint8_t buf[4096]; + uint32_t buf_size = sizeof(buf); + odp_random_kind_t max_kind = odp_random_max_kind(); + + rc = odp_random_data(buf, buf_size, max_kind); + CU_ASSERT(rc > 0); + + switch (max_kind) { + case ODP_RANDOM_BASIC: + rc = odp_random_data(buf, 4, ODP_RANDOM_CRYPTO); + CU_ASSERT(rc < 0); + /* Fall through */ + + case ODP_RANDOM_CRYPTO: + rc = odp_random_data(buf, 4, ODP_RANDOM_TRUE); + CU_ASSERT(rc < 0); + break; + + default: + break; + } +} + +void random_test_repeat(void) +{ + uint8_t buf1[1024]; + uint8_t buf2[1024]; + int32_t rc; + uint64_t seed1 = 12345897; + uint64_t seed2 = seed1; + + rc = odp_random_test_data(buf1, sizeof(buf1), &seed1); + CU_ASSERT(rc == sizeof(buf1)); + + rc = odp_random_test_data(buf2, sizeof(buf2), &seed2); + CU_ASSERT(rc == sizeof(buf2)); + + CU_ASSERT(seed1 == seed2); + CU_ASSERT(memcmp(buf1, buf2, sizeof(buf1)) == 0); +} + odp_testinfo_t random_suite[] = { ODP_TEST_INFO(random_test_get_size), + ODP_TEST_INFO(random_test_kind), + ODP_TEST_INFO(random_test_repeat), ODP_TEST_INFO_NULL, };
diff --git a/test/common_plat/validation/api/random/random.h b/test/common_plat/validation/api/random/random.h index 26202cc..c4bca78 100644 --- a/test/common_plat/validation/api/random/random.h +++ b/test/common_plat/validation/api/random/random.h @@ -11,6 +11,8 @@
/* test functions: */ void random_test_get_size(void); +void random_test_kind(void); +void random_test_repeat(void);
/* test arrays: */ extern odp_testinfo_t random_suite[];
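A small application-side sketch of the repeatability guarantee that odp_random_test_data() provides (mirroring the random_test_repeat validation case above): two sequences started from the same seed yield identical bytes and leave the seed variables equal. Buffer sizes and the seed value are arbitrary.

#include <odp.h>
#include <assert.h>
#include <string.h>

static void check_repeatable(void)
{
	uint8_t a[256], b[256];
	uint64_t seed_a = 0xdeadbeef, seed_b = 0xdeadbeef;

	assert(odp_random_test_data(a, sizeof(a), &seed_a) == (int32_t)sizeof(a));
	assert(odp_random_test_data(b, sizeof(b), &seed_b) == (int32_t)sizeof(b));

	/* same initial seed => same data and same updated seed */
	assert(memcmp(a, b, sizeof(a)) == 0);
	assert(seed_a == seed_b);
}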
commit d3a7028a5708506b63dc6a06846cd05c7552bbf4 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:31 2016 +0100
linux-gen: _ishm: cleaning remaining block at odp_term_global
Remaining (forgotten, never freed) blocks are gathered and their related files cleaned up when odp_term_global() is called. An error message is also issued so that application writers are made aware of these blocks.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/_ishm.c b/platform/linux-generic/_ishm.c index 92575bc..a0188ad 100644 --- a/platform/linux-generic/_ishm.c +++ b/platform/linux-generic/_ishm.c @@ -1507,12 +1507,25 @@ int _odp_ishm_term_local(void) int _odp_ishm_term_global(void) { int ret = 0; + int index; + ishm_block_t *block;
if ((getpid() != odp_global_data.main_pid) || (syscall(SYS_gettid) != getpid())) ODP_ERR("odp_term_global() must be performed by the main " "ODP process!\n.");
+ /* cleanup possibly non freed memory (and complain a bit): */ + for (index = 0; index < ISHM_MAX_NB_BLOCKS; index++) { + block = &ishm_tbl->block[index]; + if (block->len != 0) { + ODP_ERR("block '%s' (file %s) was never freed " + "(cleaning up...).\n", + block->name, block->filename); + delete_file(block); + } + } + /* perform the last thread terminate which was postponed: */ ret = do_odp_ishm_term_local();
commit 4208ef3e2ce9f6093ae540af7de20759849782b6 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:22 2016 +0100
linux-gen: _ishm: allow memory alloc/free at global init/term
_ishm.c assumed that both _ishm_init_global() and _ishm_init_local() had been run to work properly. This assumption turns out to be a problem if _ishm is to be used as the main memory allocator, as many modules' init_global() functions assume the availability of the odp_shm_reserve() function before any init_local() function is called. Likewise, many term_global() functions assume the availability of the odp_shm_free() function after all odp_term_local() calls have run. This patch runs _ishm_init_local() in advance for the main ODP thread and postpones the execution of _ishm_term_local() to term_global() time for the main process, hence making the ishm_reserve() and ishm_free() functions available at init_global/term_global time.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/_ishm.c b/platform/linux-generic/_ishm.c index 33229e8..92575bc 100644 --- a/platform/linux-generic/_ishm.c +++ b/platform/linux-generic/_ishm.c @@ -157,7 +157,6 @@ typedef struct ishm_block { char name[ISHM_NAME_MAXLEN]; /* name for the ishm block (if any) */ char filename[ISHM_FILENAME_MAXLEN]; /* name of the .../odp-* file */ char exptname[ISHM_FILENAME_MAXLEN]; /* name of the export file */ - int main_odpthread; /* The thread which did the initial reserve*/ uint32_t user_flags; /* any flags the user want to remember. */ uint32_t flags; /* block creation flags. */ uint64_t user_len; /* length, as requested at reserve time. */ @@ -179,6 +178,7 @@ typedef struct ishm_block { typedef struct { odp_spinlock_t lock; uint64_t dev_seq; /* used when creating device names */ + uint32_t odpthread_cnt; /* number of running ODP threads */ ishm_block_t block[ISHM_MAX_NB_BLOCKS]; } ishm_table_t; static ishm_table_t *ishm_tbl; @@ -879,7 +879,6 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, new_block->user_flags = user_flags; new_block->seq++; new_block->refcnt = 1; - new_block->main_odpthread = odp_thread_id(); new_block->start = addr; /* only for SINGLE_VA*/
/* the allocation succeeded: update the process local view */ @@ -999,10 +998,8 @@ static int block_free(int block_index)
proc_index = procfind_block(block_index); if (proc_index >= 0) { - /* close the fd, unless if it was externaly provided */ - if ((block->filename[0] != 0) || - (odp_thread_id() != block->main_odpthread)) - close(ishm_proctable->entry[proc_index].fd); + /* close the related fd */ + close(ishm_proctable->entry[proc_index].fd);
/* remove the mapping and possible fragment */ do_unmap(ishm_proctable->entry[proc_index].start, @@ -1293,12 +1290,62 @@ int _odp_ishm_info(int block_index, _odp_ishm_info_t *info) return 0; }
+static int do_odp_ishm_init_local(void) +{ + int i; + int block_index; + + /* + * the ishm_process table is local to each linux process + * Check that no other linux threads (of same or ancestor processes) + * have already created the table, and create it if needed. + * We protect this with the general ishm lock to avoid + * init race condition of different running threads. + */ + odp_spinlock_lock(&ishm_tbl->lock); + ishm_tbl->odpthread_cnt++; /* count ODPthread (pthread or process) */ + if (!ishm_proctable) { + ishm_proctable = malloc(sizeof(ishm_proctable_t)); + if (!ishm_proctable) { + odp_spinlock_unlock(&ishm_tbl->lock); + return -1; + } + memset(ishm_proctable, 0, sizeof(ishm_proctable_t)); + } + if (syscall(SYS_gettid) != getpid()) + ishm_proctable->thrd_refcnt++; /* new linux thread */ + else + ishm_proctable->thrd_refcnt = 1;/* new linux process */ + + /* + * if this ODP thread is actually a new linux process, (as opposed + * to a pthread), i.e, we just forked, then all shmem blocks + * of the parent process are mapped into this child by inheritance. + * (The process local table is inherited as well). We hence have to + * increase the process refcount for each of the inherited mappings: + */ + if (syscall(SYS_gettid) == getpid()) { + for (i = 0; i < ishm_proctable->nb_entries; i++) { + block_index = ishm_proctable->entry[i].block_index; + ishm_tbl->block[block_index].refcnt++; + } + } + + odp_spinlock_unlock(&ishm_tbl->lock); + return 0; +} + int _odp_ishm_init_global(void) { void *addr; void *spce_addr; int i;
+ if ((getpid() != odp_global_data.main_pid) || + (syscall(SYS_gettid) != getpid())) + ODP_ERR("odp_init_global() must be performed by the main " + "ODP process!\n."); + if (!odp_global_data.hugepage_info.default_huge_page_dir) ODP_DBG("NOTE: No support for huge pages\n"); else @@ -1315,6 +1362,7 @@ int _odp_ishm_init_global(void) ishm_tbl = addr; memset(ishm_tbl, 0, sizeof(ishm_table_t)); ishm_tbl->dev_seq = 0; + ishm_tbl->odpthread_cnt = 0; odp_spinlock_init(&ishm_tbl->lock);
/* allocate space for the internal shared mem fragment table: */ @@ -1355,7 +1403,13 @@ int _odp_ishm_init_global(void) ishm_ftbl->fragment[ISHM_NB_FRAGMNTS - 1].next = NULL; ishm_ftbl->unused_fragmnts = &ishm_ftbl->fragment[1];
- return 0; + /* + * We run _odp_ishm_init_local() directely here to give the + * possibility to run shm_reserve() before the odp_init_local() + * is performed for the main thread... Many init_global() functions + * indeed assume the availability of odp_shm_reserve()...: + */ + return do_odp_ishm_init_local();
init_glob_err3: if (munmap(ishm_ftbl, sizeof(ishm_ftable_t)) < 0) @@ -1369,80 +1423,28 @@ init_glob_err1:
int _odp_ishm_init_local(void) { - int i; - int block_index; - /* - * the ishm_process table is local to each linux process - * Check that no other linux threads (of same or ancestor processes) - * have already created the table, and create it if needed. - * We protect this with the general ishm lock to avoid - * init race condition of different running threads. + * Do not re-run this for the main ODP process, as it has already + * been done in advance at _odp_ishm_init_global() time: */ - odp_spinlock_lock(&ishm_tbl->lock); - if (!ishm_proctable) { - ishm_proctable = malloc(sizeof(ishm_proctable_t)); - if (!ishm_proctable) { - odp_spinlock_unlock(&ishm_tbl->lock); - return -1; - } - memset(ishm_proctable, 0, sizeof(ishm_proctable_t)); - } - if (syscall(SYS_gettid) != getpid()) - ishm_proctable->thrd_refcnt++; /* new linux thread */ - else - ishm_proctable->thrd_refcnt = 1;/* new linux process */ + if ((getpid() == odp_global_data.main_pid) && + (syscall(SYS_gettid) == getpid())) + return 0;
- /* - * if this ODP thread is actually a new linux process, (as opposed - * to a pthread), i.e, we just forked, then all shmem blocks - * of the parent process are mapped into this child by inheritance. - * (The process local table is inherited as well). We hence have to - * increase the process refcount for each of the inherited mappings: - */ - if (syscall(SYS_gettid) == getpid()) { - for (i = 0; i < ishm_proctable->nb_entries; i++) { - block_index = ishm_proctable->entry[i].block_index; - ishm_tbl->block[block_index].refcnt++; - } - } - - odp_spinlock_unlock(&ishm_tbl->lock); - return 0; + return do_odp_ishm_init_local(); }
-int _odp_ishm_term_global(void) -{ - int ret = 0; - - /* free the fragment table */ - if (munmap(ishm_ftbl, sizeof(ishm_ftable_t)) < 0) { - ret = -1; - ODP_ERR("unable to munmap fragment table\n."); - } - /* free the block table */ - if (munmap(ishm_tbl, sizeof(ishm_table_t)) < 0) { - ret = -1; - ODP_ERR("unable to munmap main table\n."); - } - - /* free the reserved VA space */ - if (_odp_ishmphy_unbook_va()) - ret = -1; - - return ret; -} - -int _odp_ishm_term_local(void) +static int do_odp_ishm_term_local(void) { int i; int proc_table_refcnt = 0; int block_index; ishm_block_t *block;
- odp_spinlock_lock(&ishm_tbl->lock); procsync();
+ ishm_tbl->odpthread_cnt--; /* decount ODPthread (pthread or process) */ + /* * The ishm_process table is local to each linux process * Check that no other linux threads (of this linux process) @@ -1482,10 +1484,56 @@ int _odp_ishm_term_local(void) ishm_proctable = NULL; }
- odp_spinlock_unlock(&ishm_tbl->lock); return 0; }
+int _odp_ishm_term_local(void) +{ + int ret; + + odp_spinlock_lock(&ishm_tbl->lock); + + /* postpone last thread term to allow free() by global term functions:*/ + if (ishm_tbl->odpthread_cnt == 1) { + odp_spinlock_unlock(&ishm_tbl->lock); + return 0; + } + + ret = do_odp_ishm_term_local(); + odp_spinlock_unlock(&ishm_tbl->lock); + return ret; +} + +int _odp_ishm_term_global(void) +{ + int ret = 0; + + if ((getpid() != odp_global_data.main_pid) || + (syscall(SYS_gettid) != getpid())) + ODP_ERR("odp_term_global() must be performed by the main " + "ODP process!\n."); + + /* perform the last thread terminate which was postponed: */ + ret = do_odp_ishm_term_local(); + + /* free the fragment table */ + if (munmap(ishm_ftbl, sizeof(ishm_ftable_t)) < 0) { + ret |= -1; + ODP_ERR("unable to munmap fragment table\n."); + } + /* free the block table */ + if (munmap(ishm_tbl, sizeof(ishm_table_t)) < 0) { + ret |= -1; + ODP_ERR("unable to munmap main table\n."); + } + + /* free the reserved VA space */ + if (_odp_ishmphy_unbook_va()) + ret |= -1; + + return ret; +} + /* * Print the current ishm status (allocated blocks and VA space map) * Return the number of allocated blocks (including those not mapped @@ -1541,13 +1589,12 @@ int _odp_ishm_status(const char *title) huge = '?'; } proc_index = procfind_block(i); - ODP_DBG("%-3d: name:%-.24s file:%-.24s tid:%-3d" + ODP_DBG("%-3d: name:%-.24s file:%-.24s" " flags:%s,%c len:0x%-08lx" " user_len:%-8ld seq:%-3ld refcnt:%-4d\n", i, ishm_tbl->block[i].name, ishm_tbl->block[i].filename, - ishm_tbl->block[i].main_odpthread, flags, huge, ishm_tbl->block[i].len, ishm_tbl->block[i].user_len,
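With this reordering, a module's init_global() hook can reserve shared memory before any odp_init_local() has run, along these lines (a sketch only; the module name, structure and size are invented):

#include <odp/api/shared_memory.h>
#include <odp/api/align.h>
#include <stdint.h>

/* Hypothetical module global data, reserved at init_global() time. */
typedef struct {
        uint64_t pkt_count;
} my_module_global_t;

static my_module_global_t *my_module_glb;

int my_module_init_global(void)
{
        odp_shm_t shm;

        /* Works even though odp_init_local() has not yet run for the main
         * thread: _odp_ishm_init_local() was already executed inside
         * _odp_ishm_init_global(). */
        shm = odp_shm_reserve("my_module_global", sizeof(my_module_global_t),
                              ODP_CACHE_LINE_SIZE, 0);
        if (shm == ODP_SHM_INVALID)
                return -1;

        my_module_glb = odp_shm_addr(shm);
        my_module_glb->pkt_count = 0;
        return 0;
}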
commit dba0f42c9e24d090b29df165f610fc0df051b018 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:19 2016 +0100
linux-gen: _ishm: fix for alignment request matching page size
There is no reason to toggle the _ODP_ISHM_SINGLE_VA flag when the requested alignment exactly matches the page size. Doing so just wastes the common pre-reserved ODP virtual address space.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/_ishm.c b/platform/linux-generic/_ishm.c index 40bfb96..33229e8 100644 --- a/platform/linux-generic/_ishm.c +++ b/platform/linux-generic/_ishm.c @@ -824,7 +824,7 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, * the same address every where, otherwise alignment may be * be wrong for some process */ hp_align = align; - if (hp_align < odp_sys_huge_page_size()) + if (hp_align <= odp_sys_huge_page_size()) hp_align = odp_sys_huge_page_size(); else flags |= _ODP_ISHM_SINGLE_VA; @@ -852,7 +852,7 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, * size then we have to make sure the block will be mapped at * the same address every where, otherwise alignment may be * be wrong for some process */ - if (align < odp_sys_page_size()) + if (align <= odp_sys_page_size()) align = odp_sys_page_size(); else flags |= _ODP_ISHM_SINGLE_VA;
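The effect can be illustrated with the internal reserve call (a sketch against the internal _ishm API; block name and size are arbitrary): an alignment equal to the page size no longer forces the block into the pre-reserved single-VA area, only a strictly larger alignment does.

#include <_ishm_internal.h>
#include <odp/api/system_info.h>
#include <stdint.h>

/* Sketch: a page-aligned internal reservation. With this fix the
 * _ODP_ISHM_SINGLE_VA flag is no longer added internally for this case,
 * so no fragment of the common VA space is consumed. */
static int reserve_page_aligned(void)
{
        return _odp_ishm_reserve("normal_block", 256 * 1024, -1,
                                 (uint32_t)odp_sys_page_size(), 0, 0);
}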
commit 46b88467e667e26fed282b234c481a14fcecff62 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:32 2016 +0100
linux_gen: _ishm: decreasing the number of error messages when no huge pages
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/_ishm.c b/platform/linux-generic/_ishm.c index 82478e5..40bfb96 100644 --- a/platform/linux-generic/_ishm.c +++ b/platform/linux-generic/_ishm.c @@ -439,8 +439,12 @@ static int create_file(int block_index, huge_flag_t huge, uint64_t len,
fd = open(filename, oflag, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); if (fd < 0) { - ODP_ERR("open failed for %s: %s.\n", - filename, strerror(errno)); + if (huge == HUGE) + ODP_DBG("open failed for %s: %s.\n", + filename, strerror(errno)); + else + ODP_ERR("open failed for %s: %s.\n", + filename, strerror(errno)); return -1; }
@@ -762,6 +766,7 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, void *addr = NULL; /* mapping address */ int new_proc_entry; struct stat statbuf; + static int huge_error_printed; /* to avoid millions of error...*/
odp_spinlock_lock(&ishm_tbl->lock);
@@ -828,11 +833,16 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, len = (size + (page_hp_size - 1)) & (-page_hp_size); addr = do_map(new_index, len, hp_align, flags, HUGE, &fd);
- if (addr == NULL) - ODP_DBG("No huge pages, fall back to normal pages, " - "check: /proc/sys/vm/nr_hugepages.\n"); - else + if (addr == NULL) { + if (!huge_error_printed) { + ODP_ERR("No huge pages, fall back to normal " + "pages. " + "check: /proc/sys/vm/nr_hugepages.\n"); + huge_error_printed = 1; + } + } else { new_block->huge = HUGE; + } }
/* Try normal pages if huge pages failed */
commit f195caa92ef8457c2c670fd3449ea6521e7ad823 Author: Christophe Milard christophe.milard@linaro.org Date: Tue Nov 8 10:49:28 2016 +0100
linux-gen: _ishm: accept multiple usage of same block name
This follows the request that using the same name for multiple memory blocks should be allowed on the north API. The change made here affects all _ishm users (i.e. both the north and south APIs), which is probably better for consistency.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/_ishm.c b/platform/linux-generic/_ishm.c index 88282ae..82478e5 100644 --- a/platform/linux-generic/_ishm.c +++ b/platform/linux-generic/_ishm.c @@ -772,14 +772,6 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, page_sz = odp_sys_page_size(); page_hp_size = odp_sys_huge_page_size();
- /* check if name already exists */ - if (name && (find_block_by_name(name) >= 0)) { - /* Found a block with the same name */ - odp_spinlock_unlock(&ishm_tbl->lock); - ODP_ERR("name "%s" already used.\n", name); - return -1; - } - /* grab a new entry: */ for (new_index = 0; new_index < ISHM_MAX_NB_BLOCKS; new_index++) { if (ishm_tbl->block[new_index].len == 0) {
commit 7f97683b1afc4826825f0db0fcac40858892494a Author: Christophe Milard christophe.milard@linaro.org Date: Mon Dec 5 19:26:55 2016 +0100
linux-gen: shared_memory: remove flag forcing mlock
The _ishm flag _ODP_ISHM_LOCK is no longer set when doing odp_shm_reserve(), hence enabling non-root users to exceed the 64 MB mlock memory limit (ulimit).
Signed-off-by: Christophe Milard christophe.milard@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_shared_memory.c b/platform/linux-generic/odp_shared_memory.c index d2bb74c..ba32dee 100644 --- a/platform/linux-generic/odp_shared_memory.c +++ b/platform/linux-generic/odp_shared_memory.c @@ -58,9 +58,6 @@ odp_shm_t odp_shm_reserve(const char *name, uint64_t size, uint64_t align,
flgs = get_ishm_flags(flags);
- /* all mem reserved through this interface is requested to be locked: */ - flgs |= (flags & _ODP_ISHM_LOCK); - block_index = _odp_ishm_reserve(name, size, -1, align, flgs, flags); if (block_index >= 0) return to_handle(block_index);
commit 52592157f25c9e2e3876dc3624cf91b1b71127ad Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:29 2016 +0100
linux-gen: shm: add flag and function to share memory between ODP instances
Implemented by calling the related functions from _ishm.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_shared_memory.c b/platform/linux-generic/odp_shared_memory.c index 2377f16..d2bb74c 100644 --- a/platform/linux-generic/odp_shared_memory.c +++ b/platform/linux-generic/odp_shared_memory.c @@ -24,6 +24,21 @@ static inline odp_shm_t to_handle(uint32_t index) return _odp_cast_scalar(odp_shm_t, index + 1); }
+static uint32_t get_ishm_flags(uint32_t flags) +{ + uint32_t f = 0; /* internal ishm flags */ + + /* set internal ishm flags according to API flags: + * note that both ODP_SHM_PROC and ODP_SHM_EXPORT maps to + * _ODP_ISHM_LINK as in the linux-gen implementation there is + * no difference between exporting to another ODP instance or + * another linux process */ + f |= (flags & (ODP_SHM_PROC | ODP_SHM_EXPORT)) ? _ODP_ISHM_EXPORT : 0; + f |= (flags & ODP_SHM_SINGLE_VA) ? _ODP_ISHM_SINGLE_VA : 0; + + return f; +} + int odp_shm_capability(odp_shm_capability_t *capa) { memset(capa, 0, sizeof(odp_shm_capability_t)); @@ -41,9 +56,7 @@ odp_shm_t odp_shm_reserve(const char *name, uint64_t size, uint64_t align, int block_index; int flgs = 0; /* internal ishm flags */
- /* set internal ishm flags according to API flags: */ - flgs |= (flags & ODP_SHM_PROC) ? _ODP_ISHM_EXPORT : 0; - flgs |= (flags & ODP_SHM_SINGLE_VA) ? _ODP_ISHM_SINGLE_VA : 0; + flgs = get_ishm_flags(flags);
/* all mem reserved through this interface is requested to be locked: */ flgs |= (flags & _ODP_ISHM_LOCK); @@ -55,6 +68,18 @@ odp_shm_t odp_shm_reserve(const char *name, uint64_t size, uint64_t align, return ODP_SHM_INVALID; }
+odp_shm_t odp_shm_import(const char *remote_name, + odp_instance_t odp_inst, + const char *local_name) +{ + int ret; + + ret = _odp_ishm_find_exported(remote_name, (pid_t)odp_inst, + local_name); + + return to_handle(ret); +} + int odp_shm_free(odp_shm_t shm) { return _odp_ishm_free_by_index(from_handle(shm));
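Typical usage of the new flag and function looks roughly like the sketch below (block names are hypothetical; how instance B learns the identity of instance A is left out, and in linux-generic the odp_instance_t value corresponds to the remote main process pid):

#include <odp_api.h>

/* Instance A: reserve a block and export it to other ODP instances. */
static odp_shm_t export_block(void)
{
        return odp_shm_reserve("shared_cfg", 4096, 0, ODP_SHM_EXPORT);
}

/* Instance B: import the block exported by instance A under a local name. */
static void *import_block(odp_instance_t remote_instance)
{
        odp_shm_t shm = odp_shm_import("shared_cfg", remote_instance,
                                       "shared_cfg_local");

        if (shm == ODP_SHM_INVALID)
                return NULL;

        return odp_shm_addr(shm);
}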
commit 4a320c0af291c07f33b1a295f72704215169d562 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:26 2016 +0100
linux-gen: shm: new ODP_SHM_SINGLE_VA flag implementation
This flag guarantees the uniqueness of the block address across all ODP threads. The patch just exposes the _ODP_ISHM_SINGLE_VA flag of the internal memory allocator, ishm.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_shared_memory.c b/platform/linux-generic/odp_shared_memory.c index 9e916e9..2377f16 100644 --- a/platform/linux-generic/odp_shared_memory.c +++ b/platform/linux-generic/odp_shared_memory.c @@ -43,6 +43,7 @@ odp_shm_t odp_shm_reserve(const char *name, uint64_t size, uint64_t align,
/* set internal ishm flags according to API flags: */ flgs |= (flags & ODP_SHM_PROC) ? _ODP_ISHM_EXPORT : 0; + flgs |= (flags & ODP_SHM_SINGLE_VA) ? _ODP_ISHM_SINGLE_VA : 0;
/* all mem reserved through this interface is requested to be locked: */ flgs |= (flags & _ODP_ISHM_LOCK);
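A sketch of application usage (name and size invented): with the new flag the returned address is identical in every ODP thread, including ODP threads running as separate processes.

#include <odp/api/shared_memory.h>

/* Sketch: reserve a block whose start address is the same in all ODP
 * threads, so raw pointers into it can be exchanged directly. */
static void *reserve_single_va(void)
{
        odp_shm_t shm = odp_shm_reserve("shared_ring", 64 * 1024, 0,
                                        ODP_SHM_SINGLE_VA);

        if (shm == ODP_SHM_INVALID)
                return NULL;

        /* odp_shm_addr() returns the same value in every ODP thread, even
         * in processes forked after this reservation. */
        return odp_shm_addr(shm);
}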
commit f5cde8870a425d51d08df7a1b4b3fb1cf06406f0 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:28 2016 +0100
linux-gen: _ishm: adding function to map memory from other ODP
Functionality to export and map memory between ODP instances is added. This includes: - a bit of simplification in _odp_ishm_reserve() for externally provided file descriptors. - a new function, _odp_ishm_find_exported(), to map memory from other ODP instances (on the same OS).
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/_ishm.c b/platform/linux-generic/_ishm.c index 37e56d4..88282ae 100644 --- a/platform/linux-generic/_ishm.c +++ b/platform/linux-generic/_ishm.c @@ -152,6 +152,7 @@ typedef struct ishm_fragment { * will allocate both a block and a fragment. * Blocks contain only global data common to all processes. */ +typedef enum {UNKNOWN, HUGE, NORMAL, EXTERNAL} huge_flag_t; typedef struct ishm_block { char name[ISHM_NAME_MAXLEN]; /* name for the ishm block (if any) */ char filename[ISHM_FILENAME_MAXLEN]; /* name of the .../odp-* file */ @@ -163,7 +164,7 @@ typedef struct ishm_block { void *start; /* only valid if _ODP_ISHM_SINGLE_VA is set*/ uint64_t len; /* length. multiple of page size. 0 if free*/ ishm_fragment_t *fragment; /* used when _ODP_ISHM_SINGLE_VA is used */ - int huge; /* true if this segment is mapped using huge pages */ + huge_flag_t huge; /* page type: external means unknown here. */ uint64_t seq; /* sequence number, incremented on alloc and free */ uint64_t refcnt;/* number of linux processes mapping this block */ } ishm_block_t; @@ -400,7 +401,7 @@ static void free_fragment(ishm_fragment_t *fragmnt) * or /mnt/huge/odp-<pid>-<sequence_or_name> (for huge pages) * Return the new file descriptor, or -1 on error. */ -static int create_file(int block_index, int huge, uint64_t len, +static int create_file(int block_index, huge_flag_t huge, uint64_t len, uint32_t flags, uint32_t align) { char *name; @@ -419,10 +420,11 @@ static int create_file(int block_index, int huge, uint64_t len, ishm_tbl->dev_seq++);
/* huge dir must be known to create files there!: */ - if (huge && !odp_global_data.hugepage_info.default_huge_page_dir) + if ((huge == HUGE) && + (!odp_global_data.hugepage_info.default_huge_page_dir)) return -1;
- if (huge) + if (huge == HUGE) snprintf(filename, ISHM_FILENAME_MAXLEN, ISHM_FILENAME_FORMAT, odp_global_data.hugepage_info.default_huge_page_dir, @@ -502,7 +504,7 @@ static void delete_file(ishm_block_t *block) * Mutex must be assured by the caller. */ static void *do_map(int block_index, uint64_t len, uint32_t align, - uint32_t flags, int huge, int *fd) + uint32_t flags, huge_flag_t huge, int *fd) { ishm_block_t *new_block; /* entry in the main block table */ void *addr = NULL; @@ -552,8 +554,6 @@ static void *do_map(int block_index, uint64_t len, uint32_t align, return NULL; }
- new_block->huge = huge; - return mapped_addr; }
@@ -756,27 +756,21 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, int new_index; /* index in the main block table*/ ishm_block_t *new_block; /* entry in the main block table*/ uint64_t page_sz; /* normal page size. usually 4K*/ - uint64_t alloc_size; /* includes extra for alignement*/ uint64_t page_hp_size; /* huge page size */ - uint64_t alloc_hp_size; /* includes extra for alignement*/ uint32_t hp_align; uint64_t len; /* mapped length */ void *addr = NULL; /* mapping address */ int new_proc_entry; - - page_sz = odp_sys_page_size(); + struct stat statbuf;
odp_spinlock_lock(&ishm_tbl->lock);
/* update this process view... */ procsync();
- /* roundup to page size */ - alloc_size = (size + (page_sz - 1)) & (-page_sz); - + /* Get system page sizes: page_hp_size is 0 if no huge page available*/ + page_sz = odp_sys_page_size(); page_hp_size = odp_sys_huge_page_size(); - /* roundup to page size */ - alloc_hp_size = (size + (page_hp_size - 1)) & (-page_hp_size);
/* check if name already exists */ if (name && (find_block_by_name(name) >= 0)) { @@ -809,8 +803,24 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, else new_block->name[0] = 0;
- /* Try first huge pages when possible and needed: */ - if (page_hp_size && (alloc_size > page_sz)) { + /* If a file descriptor is provided, get the real size and map: */ + if (fd >= 0) { + fstat(fd, &statbuf); + len = statbuf.st_size; + /* note that the huge page flag is meningless here as huge + * page is determined by the provided file descriptor: */ + addr = do_map(new_index, len, align, flags, EXTERNAL, &fd); + if (addr == NULL) { + close(fd); + odp_spinlock_unlock(&ishm_tbl->lock); + ODP_ERR("_ishm_reserve failed.\n"); + return -1; + } + new_block->huge = EXTERNAL; + } + + /* Otherwise, Try first huge pages when possible and needed: */ + if ((fd < 0) && page_hp_size && (size > page_sz)) { /* at least, alignment in VA should match page size, but user * can request more: If the user requirement exceeds the page * size then we have to make sure the block will be mapped at @@ -821,18 +831,20 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, hp_align = odp_sys_huge_page_size(); else flags |= _ODP_ISHM_SINGLE_VA; - len = alloc_hp_size; - addr = do_map(new_index, len, hp_align, flags, 1, &fd); + + /* roundup to page size */ + len = (size + (page_hp_size - 1)) & (-page_hp_size); + addr = do_map(new_index, len, hp_align, flags, HUGE, &fd);
if (addr == NULL) ODP_DBG("No huge pages, fall back to normal pages, " "check: /proc/sys/vm/nr_hugepages.\n"); else - new_block->huge = 1; + new_block->huge = HUGE; }
- /* try normal pages if huge pages failed */ - if (addr == NULL) { + /* Try normal pages if huge pages failed */ + if (fd < 0) { /* at least, alignment in VA should match page size, but user * can request more: If the user requirement exceeds the page * size then we have to make sure the block will be mapped at @@ -843,13 +855,14 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, else flags |= _ODP_ISHM_SINGLE_VA;
- len = alloc_size; - addr = do_map(new_index, len, align, flags, 0, &fd); - new_block->huge = 0; + /* roundup to page size */ + len = (size + (page_sz - 1)) & (-page_sz); + addr = do_map(new_index, len, align, flags, NORMAL, &fd); + new_block->huge = NORMAL; }
/* if neither huge pages or normal pages works, we cannot proceed: */ - if ((addr == NULL) || (len == 0)) { + if ((fd < 0) || (addr == NULL) || (len == 0)) { if ((new_block->filename[0]) && (fd >= 0)) close(fd); odp_spinlock_unlock(&ishm_tbl->lock); @@ -884,6 +897,83 @@ int _odp_ishm_reserve(const char *name, uint64_t size, int fd, }
/* + * Try to map an memory block mapped by another ODP instance into the + * current ODP instance. + * returns 0 on success. + */ +int _odp_ishm_find_exported(const char *remote_name, pid_t external_odp_pid, + const char *local_name) +{ + char export_filename[ISHM_FILENAME_MAXLEN]; + char blockname[ISHM_FILENAME_MAXLEN]; + char filename[ISHM_FILENAME_MAXLEN]; + FILE *export_file; + uint64_t len; + uint32_t flags; + uint32_t align; + int fd; + int ret; + + /* try to read the block description file: */ + snprintf(export_filename, ISHM_FILENAME_MAXLEN, + ISHM_EXPTNAME_FORMAT, + external_odp_pid, + remote_name); + + export_file = fopen(export_filename, "r"); + + if (export_file == NULL) { + ODP_ERR("Error opening %s.\n", export_filename); + return -1; + } + + if (fscanf(export_file, EXPORT_FILE_LINE1_FMT " ") != 0) + goto error_exp_file; + + if (fscanf(export_file, EXPORT_FILE_LINE2_FMT " ", blockname) != 1) + goto error_exp_file; + + if (fscanf(export_file, EXPORT_FILE_LINE3_FMT " ", filename) != 1) + goto error_exp_file; + + if (fscanf(export_file, EXPORT_FILE_LINE4_FMT " ", &len) != 1) + goto error_exp_file; + + if (fscanf(export_file, EXPORT_FILE_LINE5_FMT " ", &flags) != 1) + goto error_exp_file; + + if (fscanf(export_file, EXPORT_FILE_LINE6_FMT " ", &align) != 1) + goto error_exp_file; + + fclose(export_file); + + /* now open the filename given in the description file: */ + fd = open(filename, O_RDWR, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); + if (fd == -1) { + ODP_ERR("open failed for %s: %s.\n", + filename, strerror(errno)); + return -1; + } + + /* clear the _ODP_ISHM_EXPORT flag so we don't export that again*/ + flags &= ~(uint32_t)_ODP_ISHM_EXPORT; + + /* reserve the memory, providing the opened file descriptor: */ + ret = _odp_ishm_reserve(local_name, 0, fd, align, flags, 0); + if (ret < 0) { + close(fd); + return ret; + } + + return ret; + +error_exp_file: + fclose(export_file); + ODP_ERR("Error reading %s.\n", export_filename); + return -1; +} + +/* * Free and unmap internal shared memory: * The file descriptor is closed and the .../odp-* file deleted, * unless fd was externally provided at reserve() time. @@ -1192,7 +1282,7 @@ int _odp_ishm_info(int block_index, _odp_ishm_info_t *info) info->name = ishm_tbl->block[block_index].name; info->addr = ishm_proctable->entry[proc_index].start; info->size = ishm_tbl->block[block_index].user_len; - info->page_size = ishm_tbl->block[block_index].huge ? + info->page_size = (ishm_tbl->block[block_index].huge == HUGE) ? odp_sys_huge_page_size() : odp_sys_page_size(); info->flags = ishm_tbl->block[block_index].flags; info->user_flags = ishm_tbl->block[block_index].user_flags; @@ -1435,7 +1525,19 @@ int _odp_ishm_status(const char *title) flags[1] = (ishm_tbl->block[i].flags & _ODP_ISHM_LOCK) ? 'L' : '.'; flags[2] = 0; - huge = (ishm_tbl->block[i].huge) ? 'H' : '.'; + switch (ishm_tbl->block[i].huge) { + case HUGE: + huge = 'H'; + break; + case NORMAL: + huge = 'N'; + break; + case EXTERNAL: + huge = 'E'; + break; + default: + huge = '?'; + } proc_index = procfind_block(i); ODP_DBG("%-3d: name:%-.24s file:%-.24s tid:%-3d" " flags:%s,%c len:0x%-08lx" diff --git a/platform/linux-generic/include/_ishm_internal.h b/platform/linux-generic/include/_ishm_internal.h index f5de26e..c7c3307 100644 --- a/platform/linux-generic/include/_ishm_internal.h +++ b/platform/linux-generic/include/_ishm_internal.h @@ -11,6 +11,8 @@ extern "C" { #endif
+#include <sys/types.h> + /* flags available at ishm_reserve: */ #define _ODP_ISHM_SINGLE_VA 1 #define _ODP_ISHM_LOCK 2 @@ -36,6 +38,9 @@ int _odp_ishm_free_by_address(void *addr); void *_odp_ishm_lookup_by_index(int block_index); int _odp_ishm_lookup_by_name(const char *name); int _odp_ishm_lookup_by_address(void *addr); +int _odp_ishm_find_exported(const char *remote_name, + pid_t external_odp_pid, + const char *local_name); void *_odp_ishm_address(int block_index); int _odp_ishm_info(int block_index, _odp_ishm_info_t *info); int _odp_ishm_status(const char *title);
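Inside the platform, importing a remote block boils down to a call of this shape (sketch; the block names are hypothetical and the exporting instance is identified by its main process pid):

#include <_ishm_internal.h>
#include <sys/types.h>

/* Sketch: map a block exported by another ODP instance whose main
 * process is 'remote_pid'. On success the returned block index can be
 * used with the usual _odp_ishm_* accessors (address, info, free...). */
static int map_remote_block(pid_t remote_pid)
{
        return _odp_ishm_find_exported("shared_cfg", remote_pid,
                                       "shared_cfg_local");
}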
commit c3830c3936b76faa423563bbd104e732120f9523 Author: Christophe Milard christophe.milard@linaro.org Date: Sat Aug 20 09:45:59 2016 +0200
linux-gen: ishm: adding debug function
A debug function printing the internal memory allocation status is added.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Brian Brooks brian.brooks@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/_ishm.c b/platform/linux-generic/_ishm.c index 6ceda80..37e56d4 100644 --- a/platform/linux-generic/_ishm.c +++ b/platform/linux-generic/_ishm.c @@ -1393,3 +1393,128 @@ int _odp_ishm_term_local(void) odp_spinlock_unlock(&ishm_tbl->lock); return 0; } + +/* + * Print the current ishm status (allocated blocks and VA space map) + * Return the number of allocated blocks (including those not mapped + * by the current odp thread). Also perform a number of sanity check. + * For debug. + */ +int _odp_ishm_status(const char *title) +{ + int i; + char flags[3]; + char huge; + int proc_index; + ishm_fragment_t *fragmnt; + int consecutive_unallocated = 0; /* should never exceed 1 */ + uintptr_t last_address = 0; + ishm_fragment_t *previous = NULL; + int nb_used_frgments = 0; + int nb_unused_frgments = 0; /* nb frag describing a VA area */ + int nb_allocated_frgments = 0; /* nb frag describing an allocated VA */ + int nb_blocks = 0; + int single_va_blocks = 0; + + odp_spinlock_lock(&ishm_tbl->lock); + procsync(); + + ODP_DBG("ishm blocks allocated at: %s\n", title); + + /* display block table: 1 line per entry +1 extra line if mapped here */ + for (i = 0; i < ISHM_MAX_NB_BLOCKS; i++) { + if (ishm_tbl->block[i].len <= 0) + continue; /* unused block */ + + nb_blocks++; + if (ishm_tbl->block[i].flags & _ODP_ISHM_SINGLE_VA) + single_va_blocks++; + + flags[0] = (ishm_tbl->block[i].flags & _ODP_ISHM_SINGLE_VA) ? + 'S' : '.'; + flags[1] = (ishm_tbl->block[i].flags & _ODP_ISHM_LOCK) ? + 'L' : '.'; + flags[2] = 0; + huge = (ishm_tbl->block[i].huge) ? 'H' : '.'; + proc_index = procfind_block(i); + ODP_DBG("%-3d: name:%-.24s file:%-.24s tid:%-3d" + " flags:%s,%c len:0x%-08lx" + " user_len:%-8ld seq:%-3ld refcnt:%-4d\n", + i, + ishm_tbl->block[i].name, + ishm_tbl->block[i].filename, + ishm_tbl->block[i].main_odpthread, + flags, huge, + ishm_tbl->block[i].len, + ishm_tbl->block[i].user_len, + ishm_tbl->block[i].seq, + ishm_tbl->block[i].refcnt); + + if (proc_index < 0) + continue; + + ODP_DBG(" start:%-08lx fd:%-3d\n", + ishm_proctable->entry[proc_index].start, + ishm_proctable->entry[proc_index].fd); + } + + /* display the virtual space allocations... : */ + ODP_DBG("ishm virtual space:\n"); + for (fragmnt = ishm_ftbl->used_fragmnts; + fragmnt; fragmnt = fragmnt->next) { + if (fragmnt->block_index >= 0) { + nb_allocated_frgments++; + ODP_DBG(" %08p - %08p: ALLOCATED by block:%d\n", + (uintptr_t)fragmnt->start, + (uintptr_t)fragmnt->start + fragmnt->len - 1, + fragmnt->block_index); + consecutive_unallocated = 0; + } else { + ODP_DBG(" %08p - %08p: NOT ALLOCATED\n", + (uintptr_t)fragmnt->start, + (uintptr_t)fragmnt->start + fragmnt->len - 1); + if (consecutive_unallocated++) + ODP_ERR("defragmentation error\n"); + } + + /* some other sanity checks: */ + if (fragmnt->prev != previous) + ODP_ERR("chaining error\n"); + + if (fragmnt != ishm_ftbl->used_fragmnts) { + if ((uintptr_t)fragmnt->start != last_address + 1) + ODP_ERR("lost space error\n"); + } + + last_address = (uintptr_t)fragmnt->start + fragmnt->len - 1; + previous = fragmnt; + nb_used_frgments++; + } + + /* + * the number of blocks with the single_VA flag set should match + * the number of used fragments: + */ + if (single_va_blocks != nb_allocated_frgments) + ODP_ERR("single_va_blocks != nb_allocated_fragments!\n"); + + /* compute the number of unused fragments*/ + for (fragmnt = ishm_ftbl->unused_fragmnts; + fragmnt; fragmnt = fragmnt->next) + nb_unused_frgments++; + + ODP_DBG("ishm: %d fragment used. 
%d fragements unused. (total=%d)\n", + nb_used_frgments, nb_unused_frgments, + nb_used_frgments + nb_unused_frgments); + + if ((nb_used_frgments + nb_unused_frgments) != ISHM_NB_FRAGMNTS) + ODP_ERR("lost fragments!\n"); + + if (nb_blocks < ishm_proctable->nb_entries) + ODP_ERR("process known block cannot exceed main total sum!\n"); + + ODP_DBG("\n"); + + odp_spinlock_unlock(&ishm_tbl->lock); + return nb_blocks; +} diff --git a/platform/linux-generic/include/_ishm_internal.h b/platform/linux-generic/include/_ishm_internal.h index d348b41..f5de26e 100644 --- a/platform/linux-generic/include/_ishm_internal.h +++ b/platform/linux-generic/include/_ishm_internal.h @@ -38,6 +38,7 @@ int _odp_ishm_lookup_by_name(const char *name); int _odp_ishm_lookup_by_address(void *addr); void *_odp_ishm_address(int block_index); int _odp_ishm_info(int block_index, _odp_ishm_info_t *info); +int _odp_ishm_status(const char *title);
#ifdef __cplusplus }
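The function is meant to be sprinkled into platform code while debugging, for example (sketch; the title string is free-form):

#include <_ishm_internal.h>
#include <odp_debug_internal.h>

/* Sketch: dump the ishm block table and the VA space map at an
 * interesting point. The return value is the number of allocated
 * blocks; sanity problems are reported through ODP_ERR internally. */
static void dump_ishm(void)
{
        int nb_blocks = _odp_ishm_status("after pool init");

        ODP_DBG("%d ishm blocks currently allocated\n", nb_blocks);
}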
commit cc52f0675d1674a80cf1806dc8c1c4e3887afdd1 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:23 2016 +0100
linux-gen: use ishm as north API mem allocator
The odp shared_memory API is changed to use the ODP internal memory allocator, _ishm. _ishm supports memory sharing between processes, regardless of fork time. The test exercising the ODP_SHM_PROC flag is also changed to cope with the new OS sharing interface used by _ishm (link in /tmp).
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/include/odp_internal.h b/platform/linux-generic/include/odp_internal.h index 5698fb0..b313b1f 100644 --- a/platform/linux-generic/include/odp_internal.h +++ b/platform/linux-generic/include/odp_internal.h @@ -59,7 +59,6 @@ enum init_stage { SYSINFO_INIT, FDSERVER_INIT, ISHM_INIT, - SHM_INIT, THREAD_INIT, POOL_INIT, QUEUE_INIT, @@ -89,10 +88,6 @@ int odp_thread_init_local(odp_thread_type_t type); int odp_thread_term_local(void); int odp_thread_term_global(void);
-int odp_shm_init_global(void); -int odp_shm_term_global(void); -int odp_shm_init_local(void); - int odp_pool_init_global(void); int odp_pool_init_local(void); int odp_pool_term_global(void); diff --git a/platform/linux-generic/odp_init.c b/platform/linux-generic/odp_init.c index 43d9e40..1b0d8f8 100644 --- a/platform/linux-generic/odp_init.c +++ b/platform/linux-generic/odp_init.c @@ -120,12 +120,6 @@ int odp_init_global(odp_instance_t *instance, } stage = ISHM_INIT;
- if (odp_shm_init_global()) { - ODP_ERR("ODP shm init failed.\n"); - goto init_failed; - } - stage = SHM_INIT; - if (odp_thread_init_global()) { ODP_ERR("ODP thread init failed.\n"); goto init_failed; @@ -279,13 +273,6 @@ int _odp_term_global(enum init_stage stage) } /* Fall through */
- case SHM_INIT: - if (odp_shm_term_global()) { - ODP_ERR("ODP shm term failed.\n"); - rc = -1; - } - /* Fall through */ - case ISHM_INIT: if (_odp_ishm_term_global()) { ODP_ERR("ODP ishm term failed.\n"); @@ -343,12 +330,6 @@ int odp_init_local(odp_instance_t instance, odp_thread_type_t thr_type) } stage = ISHM_INIT;
- if (odp_shm_init_local()) { - ODP_ERR("ODP shm local init failed.\n"); - goto init_fail; - } - stage = SHM_INIT; - if (odp_thread_init_local(thr_type)) { ODP_ERR("ODP thread local init failed.\n"); goto init_fail; diff --git a/platform/linux-generic/odp_shared_memory.c b/platform/linux-generic/odp_shared_memory.c index 550af27..9e916e9 100644 --- a/platform/linux-generic/odp_shared_memory.c +++ b/platform/linux-generic/odp_shared_memory.c @@ -4,434 +4,88 @@ * SPDX-License-Identifier: BSD-3-Clause */
-#include <odp_posix_extensions.h> - -#include <odp/api/shared_memory.h> -#include <odp_internal.h> -#include <odp/api/spinlock.h> -#include <odp/api/align.h> -#include <odp/api/system_info.h> -#include <odp/api/debug.h> -#include <odp_shm_internal.h> -#include <odp_debug_internal.h> -#include <odp_align_internal.h> #include <odp_config_internal.h> - -#include <unistd.h> -#include <sys/mman.h> -#include <sys/stat.h> -#include <asm/mman.h> -#include <fcntl.h> - -#include <stdio.h> +#include <odp/api/debug.h> +#include <odp/api/std_types.h> +#include <odp/api/shared_memory.h> +#include <_ishm_internal.h> #include <string.h> -#include <errno.h> -#include <inttypes.h>
ODP_STATIC_ASSERT(ODP_CONFIG_SHM_BLOCKS >= ODP_CONFIG_POOLS, "ODP_CONFIG_SHM_BLOCKS < ODP_CONFIG_POOLS");
-typedef struct { - char name[ODP_SHM_NAME_LEN]; - uint64_t size; - uint64_t align; - uint64_t alloc_size; - void *addr_orig; - void *addr; - int huge; - odp_shm_t hdl; - uint32_t flags; - uint64_t page_sz; - int fd; - -} odp_shm_block_t; - - -typedef struct { - odp_shm_block_t block[ODP_CONFIG_SHM_BLOCKS]; - odp_spinlock_t lock; - -} odp_shm_table_t; - - -#ifndef MAP_ANONYMOUS -#define MAP_ANONYMOUS MAP_ANON -#endif - - -/* Global shared memory table */ -static odp_shm_table_t *odp_shm_tbl; - - static inline uint32_t from_handle(odp_shm_t shm) { return _odp_typeval(shm) - 1; }
- static inline odp_shm_t to_handle(uint32_t index) { return _odp_cast_scalar(odp_shm_t, index + 1); }
- -int odp_shm_init_global(void) -{ - void *addr; - -#ifndef MAP_HUGETLB - ODP_DBG("NOTE: mmap does not support huge pages\n"); -#endif - - addr = mmap(NULL, sizeof(odp_shm_table_t), - PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0); - - if (addr == MAP_FAILED) - return -1; - - odp_shm_tbl = addr; - - memset(odp_shm_tbl, 0, sizeof(odp_shm_table_t)); - odp_spinlock_init(&odp_shm_tbl->lock); - - return 0; -} - -int odp_shm_term_global(void) -{ - int ret; - - ret = munmap(odp_shm_tbl, sizeof(odp_shm_table_t)); - if (ret) - ODP_ERR("unable to munmap\n."); - - return ret; -} - - -int odp_shm_init_local(void) -{ - return 0; -} - int odp_shm_capability(odp_shm_capability_t *capa) { memset(capa, 0, sizeof(odp_shm_capability_t));
capa->max_blocks = ODP_CONFIG_SHM_BLOCKS; - capa->max_size = 0; - capa->max_align = 0; + capa->max_size = 0; + capa->max_align = 0;
return 0; }
-static int find_block(const char *name, uint32_t *index) -{ - uint32_t i; - - for (i = 0; i < ODP_CONFIG_SHM_BLOCKS; i++) { - if (strcmp(name, odp_shm_tbl->block[i].name) == 0) { - /* found it */ - if (index != NULL) - *index = i; - - return 1; - } - } - - return 0; -} - -int odp_shm_free(odp_shm_t shm) -{ - uint32_t i; - int ret; - odp_shm_block_t *block; - char shm_devname[SHM_DEVNAME_MAXLEN]; - - if (shm == ODP_SHM_INVALID) { - ODP_DBG("odp_shm_free: Invalid handle\n"); - return -1; - } - - i = from_handle(shm); - - if (i >= ODP_CONFIG_SHM_BLOCKS) { - ODP_DBG("odp_shm_free: Bad handle\n"); - return -1; - } - - odp_spinlock_lock(&odp_shm_tbl->lock); - - block = &odp_shm_tbl->block[i]; - - if (block->addr == NULL) { - ODP_DBG("odp_shm_free: Free block\n"); - odp_spinlock_unlock(&odp_shm_tbl->lock); - return 0; - } - - ret = munmap(block->addr_orig, block->alloc_size); - if (0 != ret) { - ODP_DBG("odp_shm_free: munmap failed: %s, id %u, addr %p\n", - strerror(errno), i, block->addr_orig); - odp_spinlock_unlock(&odp_shm_tbl->lock); - return -1; - } - - if (block->flags & ODP_SHM_PROC || block->flags & _ODP_SHM_PROC_NOCREAT) { - int shm_ns_id; - - if (odp_global_data.ipc_ns) - shm_ns_id = odp_global_data.ipc_ns; - else - shm_ns_id = odp_global_data.main_pid; - - snprintf(shm_devname, SHM_DEVNAME_MAXLEN, - SHM_DEVNAME_FORMAT, shm_ns_id, block->name); - ret = shm_unlink(shm_devname); - if (0 != ret) { - ODP_DBG("odp_shm_free: shm_unlink failed\n"); - odp_spinlock_unlock(&odp_shm_tbl->lock); - return -1; - } - } - memset(block, 0, sizeof(odp_shm_block_t)); - odp_spinlock_unlock(&odp_shm_tbl->lock); - return 0; -} - odp_shm_t odp_shm_reserve(const char *name, uint64_t size, uint64_t align, uint32_t flags) { - uint32_t i; - char shm_devname[SHM_DEVNAME_MAXLEN]; - odp_shm_block_t *block; - void *addr; - int fd = -1; - int map_flag = MAP_SHARED; - /* If already exists: O_EXCL: error, O_TRUNC: truncate to zero */ - int oflag = O_RDWR; - uint64_t alloc_size; - uint64_t page_sz, huge_sz; -#ifdef MAP_HUGETLB - int need_huge_page = 0; - uint64_t alloc_hp_size; -#endif - - page_sz = odp_sys_page_size(); - alloc_size = size + align; + int block_index; + int flgs = 0; /* internal ishm flags */
-#ifdef MAP_HUGETLB - huge_sz = odp_sys_huge_page_size(); - need_huge_page = (huge_sz && alloc_size > page_sz); - /* munmap for huge pages requires sizes round up by page */ - alloc_hp_size = (size + align + (huge_sz - 1)) & (-huge_sz); -#endif + /* set internal ishm flags according to API flags: */ + flgs |= (flags & ODP_SHM_PROC) ? _ODP_ISHM_EXPORT : 0;
- if (flags & ODP_SHM_PROC) - oflag |= O_CREAT | O_TRUNC; - if (flags & _ODP_SHM_O_EXCL) - oflag |= O_EXCL; + /* all mem reserved through this interface is requested to be locked: */ + flgs |= (flags & _ODP_ISHM_LOCK);
- if (flags & (ODP_SHM_PROC | _ODP_SHM_PROC_NOCREAT)) { - int shm_ns_id; - - if (odp_global_data.ipc_ns) - shm_ns_id = odp_global_data.ipc_ns; - else - shm_ns_id = odp_global_data.main_pid; - - need_huge_page = 0; - - /* Creates a file to /dev/shm/odp */ - snprintf(shm_devname, SHM_DEVNAME_MAXLEN, - SHM_DEVNAME_FORMAT, shm_ns_id, name); - fd = shm_open(shm_devname, oflag, - S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); - if (fd == -1) { - ODP_DBG("%s: shm_open failed.\n", shm_devname); - return ODP_SHM_INVALID; - } - } else { - map_flag |= MAP_ANONYMOUS; - } - - odp_spinlock_lock(&odp_shm_tbl->lock); - - if (find_block(name, NULL)) { - /* Found a block with the same name */ - odp_spinlock_unlock(&odp_shm_tbl->lock); - ODP_DBG("name "%s" already used.\n", name); + block_index = _odp_ishm_reserve(name, size, -1, align, flgs, flags); + if (block_index >= 0) + return to_handle(block_index); + else return ODP_SHM_INVALID; - } - - for (i = 0; i < ODP_CONFIG_SHM_BLOCKS; i++) { - if (odp_shm_tbl->block[i].addr == NULL) { - /* Found free block */ - break; - } - } - - if (i > ODP_CONFIG_SHM_BLOCKS - 1) { - /* Table full */ - odp_spinlock_unlock(&odp_shm_tbl->lock); - ODP_DBG("%s: no more blocks.\n", name); - return ODP_SHM_INVALID; - } - - block = &odp_shm_tbl->block[i]; - - block->hdl = to_handle(i); - addr = MAP_FAILED; - -#ifdef MAP_HUGETLB - /* Try first huge pages */ - if (need_huge_page) { - if ((flags & ODP_SHM_PROC) && - (ftruncate(fd, alloc_hp_size) == -1)) { - odp_spinlock_unlock(&odp_shm_tbl->lock); - ODP_DBG("%s: ftruncate huge pages failed.\n", name); - return ODP_SHM_INVALID; - } - - addr = mmap(NULL, alloc_hp_size, PROT_READ | PROT_WRITE, - map_flag | MAP_HUGETLB, fd, 0); - if (addr == MAP_FAILED) { - ODP_DBG(" %s:\n" - "\tNo huge pages, fall back to normal pages,\n" - "\tcheck: /proc/sys/vm/nr_hugepages.\n", name); - } else { - block->alloc_size = alloc_hp_size; - block->huge = 1; - block->page_sz = huge_sz; - } - } -#endif - - /* Use normal pages for small or failed huge page allocations */ - if (addr == MAP_FAILED) { - if ((flags & ODP_SHM_PROC) && - (ftruncate(fd, alloc_size) == -1)) { - odp_spinlock_unlock(&odp_shm_tbl->lock); - ODP_ERR("%s: ftruncate failed.\n", name); - return ODP_SHM_INVALID; - } - - addr = mmap(NULL, alloc_size, PROT_READ | PROT_WRITE, - map_flag, fd, 0); - if (addr == MAP_FAILED) { - odp_spinlock_unlock(&odp_shm_tbl->lock); - ODP_DBG("%s mmap failed.\n", name); - return ODP_SHM_INVALID; - } else { - block->alloc_size = alloc_size; - block->huge = 0; - block->page_sz = page_sz; - } - } - - block->addr_orig = addr; - - /* move to correct alignment */ - addr = ODP_ALIGN_ROUNDUP_PTR(addr, align); - - strncpy(block->name, name, ODP_SHM_NAME_LEN - 1); - block->name[ODP_SHM_NAME_LEN - 1] = 0; - block->size = size; - block->align = align; - block->flags = flags; - block->fd = fd; - block->addr = addr; +}
- odp_spinlock_unlock(&odp_shm_tbl->lock); - return block->hdl; +int odp_shm_free(odp_shm_t shm) +{ + return _odp_ishm_free_by_index(from_handle(shm)); }
odp_shm_t odp_shm_lookup(const char *name) { - uint32_t i; - odp_shm_t hdl; - - odp_spinlock_lock(&odp_shm_tbl->lock); - - if (find_block(name, &i) == 0) { - odp_spinlock_unlock(&odp_shm_tbl->lock); - return ODP_SHM_INVALID; - } - - hdl = odp_shm_tbl->block[i].hdl; - odp_spinlock_unlock(&odp_shm_tbl->lock); - - return hdl; + return to_handle(_odp_ishm_lookup_by_name(name)); }
- void *odp_shm_addr(odp_shm_t shm) { - uint32_t i; - - i = from_handle(shm); - - if (i > (ODP_CONFIG_SHM_BLOCKS - 1)) - return NULL; - - return odp_shm_tbl->block[i].addr; + return _odp_ishm_address(from_handle(shm)); }
- int odp_shm_info(odp_shm_t shm, odp_shm_info_t *info) { - odp_shm_block_t *block; - uint32_t i; + _odp_ishm_info_t ishm_info;
- i = from_handle(shm); - - if (i > (ODP_CONFIG_SHM_BLOCKS - 1)) + if (_odp_ishm_info(from_handle(shm), &ishm_info)) return -1;
- block = &odp_shm_tbl->block[i]; - - info->name = block->name; - info->addr = block->addr; - info->size = block->size; - info->page_size = block->page_sz; - info->flags = block->flags; + info->name = ishm_info.name; + info->addr = ishm_info.addr; + info->size = ishm_info.size; + info->page_size = ishm_info.page_size; + info->flags = ishm_info.user_flags;
return 0; }
- void odp_shm_print_all(void) { - int i; - - ODP_PRINT("\nShared memory\n"); - ODP_PRINT("--------------\n"); - ODP_PRINT(" page size: %"PRIu64" kB\n", - odp_sys_page_size() / 1024); - ODP_PRINT(" huge page size: %"PRIu64" kB\n", - odp_sys_huge_page_size() / 1024); - ODP_PRINT("\n"); - - ODP_PRINT(" id name kB align huge addr\n"); - - for (i = 0; i < ODP_CONFIG_SHM_BLOCKS; i++) { - odp_shm_block_t *block; - - block = &odp_shm_tbl->block[i]; - - if (block->addr) { - ODP_PRINT(" %2i %-24s %4"PRIu64" %4"PRIu64 - " %2c %p\n", - i, - block->name, - block->size/1024, - block->align, - (block->huge ? '*' : ' '), - block->addr); - } - } - - ODP_PRINT("\n"); + _odp_ishm_status("Memory allocation status:"); } diff --git a/test/linux-generic/validation/api/shmem/shmem_linux.c b/test/linux-generic/validation/api/shmem/shmem_linux.c index 212a6c1..9ab0e2b 100644 --- a/test/linux-generic/validation/api/shmem/shmem_linux.c +++ b/test/linux-generic/validation/api/shmem/shmem_linux.c @@ -45,12 +45,69 @@ #include <sys/mman.h> #include <libgen.h> #include <linux/limits.h> +#include <inttypes.h> #include "shmem_linux.h" #include "shmem_common.h"
-#define ODP_APP_NAME "shmem_odp" /* name of the odp program, in this dir */ -#define DEVNAME_FMT "odp-%d-%s" /* shm device format: odp-<pid>-<name> */ -#define MAX_FIFO_WAIT 30 /* Max time waiting for the fifo (sec) */ +#define ODP_APP_NAME "shmem_odp" /* name of the odp program, in this dir */ +#define DEVNAME_FMT "/tmp/odp-%" PRIu64 "-shm-%s" /* odp-<pid>-shm-<name> */ +#define MAX_FIFO_WAIT 30 /* Max time waiting for the fifo (sec) */ + +/* + * read the attributes of a externaly shared mem object: + * input: ext_odp_pid, blockname: the remote ODP instance and the exported + * block name to be searched. + * Output: filename: the memory block underlaying file to be opened + * (the given buffer should be big enough i.e. at + * least ISHM_FILENAME_MAXLEN bytes) + * The 3 following parameters are really here for debug + * as they are really meaningles in a non-odp process: + * len: the block real length (bytes, multiple of page sz) + * flags: the _ishm flags setting the block was created with + * align: the alignement setting the block was created with + * + * return 0 on success, non zero on error + */ +static int read_shmem_attribues(uint64_t ext_odp_pid, const char *blockname, + char *filename, uint64_t *len, + uint32_t *flags, uint32_t *align) +{ + char shm_attr_filename[PATH_MAX]; + FILE *export_file; + + sprintf(shm_attr_filename, DEVNAME_FMT, ext_odp_pid, blockname); + + /* O_CREAT flag not given => failure if shm_attr_filename does not + * already exist */ + export_file = fopen(shm_attr_filename, "r"); + if (export_file == NULL) + return -1; + + if (fscanf(export_file, "ODP exported shm block info: ") != 0) + goto export_file_read_err; + + if (fscanf(export_file, "ishm_blockname: %*s ") != 0) + goto export_file_read_err; + + if (fscanf(export_file, "file: %s ", filename) != 1) + goto export_file_read_err; + + if (fscanf(export_file, "length: %" PRIu64 " ", len) != 1) + goto export_file_read_err; + + if (fscanf(export_file, "flags: %" PRIu32 " ", flags) != 1) + goto export_file_read_err; + + if (fscanf(export_file, "align: %" PRIu32 " ", align) != 1) + goto export_file_read_err; + + fclose(export_file); + return 0; + +export_file_read_err: + fclose(export_file); + return -1; +}
void test_success(char *fifo_name, int fd, pid_t odp_app) { @@ -91,12 +148,15 @@ int main(int argc __attribute__((unused)), char *argv[]) char prg_name[PATH_MAX]; char odp_name[PATH_MAX]; int nb_sec; - int size; + uint64_t size; pid_t odp_app; char *odp_params = NULL; char fifo_name[PATH_MAX]; /* fifo for linux->odp feedback */ int fifo_fd = -1; - char shm_devname[PATH_MAX];/* shared mem device name, under /dev/shm */ + char shm_devname[PATH_MAX];/* shared mem device name.*/ + uint64_t len; + uint32_t flags; + uint32_t align; int shm_fd; test_shared_linux_data_t *addr;
@@ -130,26 +190,28 @@ int main(int argc __attribute__((unused)), char *argv[]) * ODP application is up and running, and has allocated shmem. * check to see if linux can see the created shared memory: */
- sprintf(shm_devname, DEVNAME_FMT, odp_app, ODP_SHM_NAME); + /* read the shared memory attributes (includes the shm filename): */ + if (read_shmem_attribues(odp_app, ODP_SHM_NAME, + shm_devname, &len, &flags, &align) != 0) + test_failure(fifo_name, fifo_fd, odp_app);
- /* O_CREAT flag not given => failure if shm_devname does not already + /* open the shm filename (which is either on /tmp or on hugetlbfs) + * O_CREAT flag not given => failure if shm_devname does not already * exist */ - shm_fd = shm_open(shm_devname, O_RDONLY, - S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); + shm_fd = open(shm_devname, O_RDONLY, + S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); if (shm_fd == -1) - test_failure(fifo_name, shm_fd, odp_app); + test_failure(fifo_name, fifo_fd, odp_app); + + /* linux ODP guarantees page size alignement. Larger alignment may + * fail as 2 different processes will have fully unrelated + * virtual spaces. + */ + size = sizeof(test_shared_linux_data_t);
- /* we know that the linux generic ODP actually allocates the required - * size + alignment and aligns the returned address after. - * we must do the same here: */ - size = sizeof(test_shared_linux_data_t) + ALIGN_SIZE; addr = mmap(NULL, size, PROT_READ, MAP_SHARED, shm_fd, 0); if (addr == MAP_FAILED) - test_failure(fifo_name, shm_fd, odp_app); - - /* perform manual alignment */ - addr = (test_shared_linux_data_t *)((((unsigned long int)addr + - ALIGN_SIZE - 1) / ALIGN_SIZE) * ALIGN_SIZE); + test_failure(fifo_name, fifo_fd, odp_app);
/* check that we see what the ODP application wrote in the memory */ if ((addr->foo == TEST_SHARE_FOO) && (addr->bar == TEST_SHARE_BAR))
commit 4b698023210b7f742c053707ba131097b570276d Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:21 2016 +0100
linux-gen: _ishm: create description file for external memory sharing
A new flag called _ODP_ISHM_EXPORT is added to _ishm. When this flag is specified at reserve() time, an extra file ("/tmp/odp-<pid>-shm-<blockname>", where <pid> is the process ID of the main ODP instantiation process and <blockname> is the block name given at reserve time) is created, describing the underlying block attributes. This file is meant to be used by processes external to ODP that want to share this memory.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/_ishm.c b/platform/linux-generic/_ishm.c index c8156aa..6ceda80 100644 --- a/platform/linux-generic/_ishm.c +++ b/platform/linux-generic/_ishm.c @@ -99,6 +99,14 @@ #define ISHM_FILENAME_NORMAL_PAGE_DIR "/tmp"
/* + * when the memory is to be shared with an external entity (such as another + * ODP instance or an OS process not part of this ODP instance) then a + * export file is created describing the exported memory: this defines the + * location and the filename format of this description file + */ +#define ISHM_EXPTNAME_FORMAT "/tmp/odp-%d-shm-%s" + +/* * At worse case the virtual space gets so fragmented that there is * a unallocated fragment between each allocated fragment: * In that case, the number of fragments to take care of is twice the @@ -107,6 +115,17 @@ #define ISHM_NB_FRAGMNTS (ISHM_MAX_NB_BLOCKS * 2 + 1)
/* + * when a memory block is to be exported outside its ODP instance, + * an block 'attribute file' is created in /tmp/odp-<pid>-shm-<name>. + * The information given in this file is according to the following: + */ +#define EXPORT_FILE_LINE1_FMT "ODP exported shm block info:" +#define EXPORT_FILE_LINE2_FMT "ishm_blockname: %s" +#define EXPORT_FILE_LINE3_FMT "file: %s" +#define EXPORT_FILE_LINE4_FMT "length: %" PRIu64 +#define EXPORT_FILE_LINE5_FMT "flags: %" PRIu32 +#define EXPORT_FILE_LINE6_FMT "align: %" PRIu32 +/* * A fragment describes a piece of the shared virtual address space, * and is allocated only when allocation is done with the _ODP_ISHM_SINGLE_VA * flag: @@ -136,6 +155,7 @@ typedef struct ishm_fragment { typedef struct ishm_block { char name[ISHM_NAME_MAXLEN]; /* name for the ishm block (if any) */ char filename[ISHM_FILENAME_MAXLEN]; /* name of the .../odp-* file */ + char exptname[ISHM_FILENAME_MAXLEN]; /* name of the export file */ int main_odpthread; /* The thread which did the initial reserve*/ uint32_t user_flags; /* any flags the user want to remember. */ uint32_t flags; /* block creation flags. */ @@ -380,7 +400,8 @@ static void free_fragment(ishm_fragment_t *fragmnt) * or /mnt/huge/odp-<pid>-<sequence_or_name> (for huge pages) * Return the new file descriptor, or -1 on error. */ -static int create_file(int block_index, int huge, uint64_t len) +static int create_file(int block_index, int huge, uint64_t len, + uint32_t flags, uint32_t align) { char *name; int fd; @@ -388,6 +409,7 @@ static int create_file(int block_index, int huge, uint64_t len) char seq_string[ISHM_FILENAME_MAXLEN]; /* used to construct filename*/ char filename[ISHM_FILENAME_MAXLEN];/* filename in /tmp/ or /mnt/huge */ int oflag = O_RDWR | O_CREAT | O_TRUNC; /* flags for open */ + FILE *export_file;
new_block = &ishm_tbl->block[block_index]; name = new_block->name; @@ -429,9 +451,48 @@ static int create_file(int block_index, int huge, uint64_t len)
strncpy(new_block->filename, filename, ISHM_FILENAME_MAXLEN - 1);
+ /* if _ODP_ISHM_EXPORT is set, create a description file for + * external ref: + */ + if (flags & _ODP_ISHM_EXPORT) { + snprintf(new_block->exptname, ISHM_FILENAME_MAXLEN, + ISHM_EXPTNAME_FORMAT, + odp_global_data.main_pid, + (name && name[0]) ? name : seq_string); + export_file = fopen(new_block->exptname, "w"); + if (export_file == NULL) { + ODP_ERR("open failed: err=%s.\n", + strerror(errno)); + new_block->exptname[0] = 0; + } else { + fprintf(export_file, EXPORT_FILE_LINE1_FMT "\n"); + fprintf(export_file, EXPORT_FILE_LINE2_FMT "\n", name); + fprintf(export_file, EXPORT_FILE_LINE3_FMT "\n", + new_block->filename); + fprintf(export_file, EXPORT_FILE_LINE4_FMT "\n", len); + fprintf(export_file, EXPORT_FILE_LINE5_FMT "\n", flags); + fprintf(export_file, EXPORT_FILE_LINE6_FMT "\n", align); + + fclose(export_file); + } + } else { + new_block->exptname[0] = 0; + } + return fd; }
+/* delete the files related to a given ishm block: */ +static void delete_file(ishm_block_t *block) +{ + /* remove the .../odp-* file, unless fd was external: */ + if (block->filename[0] != 0) + unlink(block->filename); + /* also remove possible description file (if block was exported): */ + if (block->exptname[0] != 0) + unlink(block->exptname); +} + /* * performs the mapping, possibly allocating a fragment of the pre-reserved * VA space if the _ODP_ISHM_SINGLE_VA flag was given. @@ -456,7 +517,7 @@ static void *do_map(int block_index, uint64_t len, uint32_t align, * unless a fd was already given */ if (*fd < 0) { - *fd = create_file(block_index, huge, len); + *fd = create_file(block_index, huge, len, flags, align); if (*fd < 0) return NULL; } else { @@ -471,7 +532,7 @@ static void *do_map(int block_index, uint64_t len, uint32_t align, if (new_block->filename[0]) { close(*fd); *fd = -1; - unlink(new_block->filename); + delete_file(new_block); } return NULL; } @@ -486,7 +547,7 @@ static void *do_map(int block_index, uint64_t len, uint32_t align, if (new_block->filename[0]) { close(*fd); *fd = -1; - unlink(new_block->filename); + delete_file(new_block); } return NULL; } @@ -867,9 +928,8 @@ static int block_free(int block_index) do_unmap(NULL, 0, block->flags, block_index); }
- /* remove the .../odp-* file, unless fd was external: */ - if (block->filename[0] != 0) - unlink(block->filename); + /* remove all files related to this block: */ + delete_file(block);
/* deregister the file descriptor from the file descriptor server. */ _odp_fdserver_deregister_fd(FD_SRV_CTX_ISHM, block_index); diff --git a/platform/linux-generic/include/_ishm_internal.h b/platform/linux-generic/include/_ishm_internal.h index 7d27477..d348b41 100644 --- a/platform/linux-generic/include/_ishm_internal.h +++ b/platform/linux-generic/include/_ishm_internal.h @@ -14,6 +14,7 @@ extern "C" { /* flags available at ishm_reserve: */ #define _ODP_ISHM_SINGLE_VA 1 #define _ODP_ISHM_LOCK 2 +#define _ODP_ISHM_EXPORT 4 /*create export descr file in /tmp */
/** * Shared memory block info
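For reference, a minimal sketch of how an external process could consume one of these export description files, assuming only the ISHM_EXPTNAME_FORMAT and EXPORT_FILE_LINE1..6_FMT strings added above; the parse_export() helper, the buffer sizes and the pid/name arguments are hypothetical and not part of the patch:

    #include <stdio.h>
    #include <inttypes.h>

    /* hypothetical reader for /tmp/odp-<pid>-shm-<name>, mirroring the
     * format strings written by create_file() in _ishm.c */
    static int parse_export(int odp_pid, const char *block_name,
                            char filename[256], uint64_t *len,
                            uint32_t *flags, uint32_t *align)
    {
            char path[256];
            char blockname[64];
            FILE *f;
            int ret;

            snprintf(path, sizeof(path), "/tmp/odp-%d-shm-%s",
                     odp_pid, block_name);
            f = fopen(path, "r");
            if (f == NULL)
                    return -1;

            /* one scan over the six lines written by the patch */
            ret = fscanf(f, "ODP exported shm block info: "
                         "ishm_blockname: %63s "
                         "file: %255s "
                         "length: %" SCNu64 " "
                         "flags: %" SCNu32 " "
                         "align: %" SCNu32,
                         blockname, filename, len, flags, align);
            fclose(f);
            return (ret == 5) ? 0 : -1;
    }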
commit 2eb3e87bc56b2a02cb10637e5ce3a7d1157472cf Author: Christophe Milard christophe.milard@linaro.org Date: Sat Aug 20 09:45:58 2016 +0200
linux-gen: ishm: internal shared memory allocator (ishm) added
A new ODP internal memory allocator, called ishm (for internal shmem), is introduced here. This memory allocator enables the following: - it works for odpthreads implemented as linux processes, regardless of fork time. - it guarantees the uniqueness of the virtual space mapping address over all ODP threads (even processes, and regardless of fork time) when the _ODP_ISHM_SINGLE_VA flag is requested.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Brian Brooks brian.brooks@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
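As an illustration only (not part of the patch), internal ODP code could use the new allocator roughly as follows, based on the prototypes declared in the _ishm_internal.h hunk further down; the block name, size and flag choice are made-up values:

    #include <_ishm_internal.h>
    #include <odp_debug_internal.h>

    static int ishm_usage_sketch(void)
    {
            int idx;
            void *addr;

            /* reserve 64kB, mapped at the same VA in every ODP thread */
            idx = _odp_ishm_reserve("example_blk", 64 * 1024, -1, 0,
                                    _ODP_ISHM_SINGLE_VA, 0);
            if (idx < 0)
                    return -1;

            /* any other ODP thread (pthread or process) can later do: */
            if (_odp_ishm_lookup_by_name("example_blk") != idx)
                    ODP_ERR("lookup failed\n");

            addr = _odp_ishm_address(idx);
            if (addr == NULL)
                    return -1;

            /* ... use the memory; the pointer is valid in all ODP threads ... */

            return _odp_ishm_free_by_index(idx);
    }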
diff --git a/platform/linux-generic/Makefile.am b/platform/linux-generic/Makefile.am index 434e530..0bc9842 100644 --- a/platform/linux-generic/Makefile.am +++ b/platform/linux-generic/Makefile.am @@ -105,6 +105,8 @@ odpdrvinclude_HEADERS = \
noinst_HEADERS = \ ${srcdir}/include/_fdserver_internal.h \ + ${srcdir}/include/_ishm_internal.h \ + ${srcdir}/include/_ishmphy_internal.h \ ${srcdir}/include/odp_align_internal.h \ ${srcdir}/include/odp_atomic_internal.h \ ${srcdir}/include/odp_buffer_inlines.h \ @@ -147,6 +149,8 @@ noinst_HEADERS = \
__LIB__libodp_linux_la_SOURCES = \ _fdserver.c \ + _ishm.c \ + _ishmphy.c \ odp_atomic.c \ odp_barrier.c \ odp_buffer.c \ diff --git a/platform/linux-generic/_ishm.c b/platform/linux-generic/_ishm.c new file mode 100644 index 0000000..c8156aa --- /dev/null +++ b/platform/linux-generic/_ishm.c @@ -0,0 +1,1335 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/* This file handles the internal shared memory: internal shared memory + * is memory which is sharable by all ODP threads regardless of how the + * ODP thread is implemented (pthread or process) and regardless of fork() + * time. + * Moreover, when reserved with the _ODP_ISHM_SINGLE_VA flag, + * internal shared memory is guaranteed to always be located at the same virtual + * address, i.e. pointers to internal shared memory are fully shareable + * between odp threads (regardless of thread type or fork time) in that case. + * Internal shared memory is mainly meant to be used internaly within ODP + * (hence its name), but may also be allocated by odp applications and drivers, + * in the future (through these interfaces). + * To guarrentee this full pointer shareability (when reserved with the + * _ODP_ISHM_SINGLE_VA flag) internal shared memory is handled as follows: + * At global_init time, a huge virtual address space reservation is performed. + * Note that this is just reserving virtual space, not physical memory. + * Because all ODP threads (pthreads or processes) are descendants of the ODP + * instantiation process, this VA space is inherited by all ODP threads. + * When internal shmem reservation actually occurs, and + * when reserved with the _ODP_ISHM_SINGLE_VA flag, physical memory is + * allocated, and mapped (MAP_FIXED) to some part in the huge preallocated + * address space area: + * because this virtual address space is common to all ODP threads, we + * know this mapping will succeed, and not clash with anything else. + * Hence, an ODP threads which perform a lookup for the same ishm block + * can map it at the same VA address. + * When internal shared memory is released, the physical memory is released + * and the corresponding virtual space returned to its "pool" of preallocated + * virtual space (assuming it was allocated from there). + * Note, though, that, if 2 linux processes share the same ishm block, + * the virtual space is marked as released as soon as one of the processes + * releases the ishm block, but the physical memory space is actually released + * by the kernel once all processes have done a ishm operation (i,e. a sync). + * This is due to the fact that linux does not contain any syscall to unmap + * memory from a different process. + * + * This file contains functions to handle the VA area (handling fragmentation + * and defragmentation resulting from different allocs/release) and also + * define the functions to allocate, release and lookup internal shared + * memory: + * _odp_ishm_reserve(), _odp_ishm_free*() and _odp_ishm_lookup*()... 
+ */ +#include <odp_posix_extensions.h> +#include <odp_config_internal.h> +#include <odp_internal.h> +#include <odp/api/spinlock.h> +#include <odp/api/align.h> +#include <odp/api/system_info.h> +#include <odp/api/debug.h> +#include <odp_shm_internal.h> +#include <odp_debug_internal.h> +#include <odp_align_internal.h> +#include <_fdserver_internal.h> +#include <_ishm_internal.h> +#include <_ishmphy_internal.h> +#include <stdlib.h> +#include <stdio.h> +#include <unistd.h> +#include <string.h> +#include <errno.h> +#include <sys/mman.h> +#include <sys/stat.h> +#include <sys/syscall.h> +#include <fcntl.h> +#include <sys/types.h> +#include <inttypes.h> +#include <sys/wait.h> + +/* + * Maximum number of internal shared memory blocks. + * + * This the the number of separate ISHM areas that can be reserved concurrently + * (Note that freeing such blocks may take time, or possibly never happen + * if some of the block ownwers never procsync() after free). This number + * should take that into account) + */ +#define ISHM_MAX_NB_BLOCKS 128 + +/* + * Maximum internal shared memory block name length in chars + * probably taking the same number as SHM name size make sense at this stage + */ +#define ISHM_NAME_MAXLEN 32 + +/* + * Linux underlying file name: <directory>/odp-<odp_pid>-ishm-<name> + * The <name> part may be replaced by a sequence number if no specific + * name is given at reserve time + * <directory> is either /tmp or the hugepagefs mount point for default size. + * (searched at init time) + */ +#define ISHM_FILENAME_MAXLEN (ISHM_NAME_MAXLEN + 64) +#define ISHM_FILENAME_FORMAT "%s/odp-%d-ishm-%s" +#define ISHM_FILENAME_NORMAL_PAGE_DIR "/tmp" + +/* + * At worse case the virtual space gets so fragmented that there is + * a unallocated fragment between each allocated fragment: + * In that case, the number of fragments to take care of is twice the + * number of ISHM blocks + 1. + */ +#define ISHM_NB_FRAGMNTS (ISHM_MAX_NB_BLOCKS * 2 + 1) + +/* + * A fragment describes a piece of the shared virtual address space, + * and is allocated only when allocation is done with the _ODP_ISHM_SINGLE_VA + * flag: + * A fragment is said to be used when it actually does represent some + * portion of the virtual address space, and is said to be unused when + * it does not (so at start, one single fragment is used -describing the + * whole address space as unallocated-, and all others are unused). + * Fragments get used as address space fragmentation increases. + * A fragment is allocated if the piece of address space it + * describes is actually used by a shared memory block. + * Allocated fragments get their block_index set >=0. + */ +typedef struct ishm_fragment { + struct ishm_fragment *prev; /* not used when the fragment is unused */ + struct ishm_fragment *next; + void *start; /* start of segment (VA) */ + uintptr_t len; /* length of segment. multiple of page size */ + int block_index; /* -1 for unallocated fragments */ +} ishm_fragment_t; + +/* + * A block describes a piece of reserved memory: Any successful ishm_reserve() + * will allocate a block. A ishm_reserve() with the _ODP_ISHM_SINGLE_VA flag set + * will allocate both a block and a fragment. + * Blocks contain only global data common to all processes. 
+ */ +typedef struct ishm_block { + char name[ISHM_NAME_MAXLEN]; /* name for the ishm block (if any) */ + char filename[ISHM_FILENAME_MAXLEN]; /* name of the .../odp-* file */ + int main_odpthread; /* The thread which did the initial reserve*/ + uint32_t user_flags; /* any flags the user want to remember. */ + uint32_t flags; /* block creation flags. */ + uint64_t user_len; /* length, as requested at reserve time. */ + void *start; /* only valid if _ODP_ISHM_SINGLE_VA is set*/ + uint64_t len; /* length. multiple of page size. 0 if free*/ + ishm_fragment_t *fragment; /* used when _ODP_ISHM_SINGLE_VA is used */ + int huge; /* true if this segment is mapped using huge pages */ + uint64_t seq; /* sequence number, incremented on alloc and free */ + uint64_t refcnt;/* number of linux processes mapping this block */ +} ishm_block_t; + +/* + * Table of blocks describing allocated internal shared memory + * This table is visible to every ODP thread (linux process or pthreads). + * (it is allocated shared at odp init time and is therefore inherited by all) + * Table index is used as handle, so it cannot move!. Entry is regarded as + * free when len==0 + */ +typedef struct { + odp_spinlock_t lock; + uint64_t dev_seq; /* used when creating device names */ + ishm_block_t block[ISHM_MAX_NB_BLOCKS]; +} ishm_table_t; +static ishm_table_t *ishm_tbl; + +/* + * Process local table containing the list of (believed) allocated blocks seen + * from the current process. There is one such table per linux process. linux + * threads within a process shares this table. + * The contents within this table may become obsolete when other processes + * reserve/free ishm blocks. This is what the procsync() function + * catches by comparing the block sequence number with the one in this table. + * This table is filled at ishm_reserve and ishm_lookup time. + * Entries are removed at ishm_free or procsync time. + * Note that flags and len are present in this table and seems to be redundant + * with those present in the ishm block table: but this is not fully true: + * When ishm_sync() detects obsolete mappings and tries to remove them, + * the entry in the ishm block table is then obsolete, and the values which are + * found in this table must be used to perform the ummap. + * (and the values in the block tables are needed at lookup time...) + */ +typedef struct { + int thrd_refcnt; /* number of pthreads in this process, really */ + struct { + int block_index; /* entry in the ishm_tbl */ + uint32_t flags; /* flags used at creation time */ + uint64_t seq; + void *start; /* start of block (VA) */ + uint64_t len; /* length of block. multiple of page size */ + int fd; /* file descriptor used for this block */ + } entry[ISHM_MAX_NB_BLOCKS]; + int nb_entries; +} ishm_proctable_t; +static ishm_proctable_t *ishm_proctable; + +/* + * Table of fragments describing the common virtual address space: + * This table is visible to every ODP thread (linux process or pthreads). + * (it is allocated at odp init time and is therefore inherited by all) + */ +typedef struct { + ishm_fragment_t fragment[ISHM_NB_FRAGMNTS]; + ishm_fragment_t *used_fragmnts; /* ordered by increasing start addr */ + ishm_fragment_t *unused_fragmnts; +} ishm_ftable_t; +static ishm_ftable_t *ishm_ftbl; + +#ifndef MAP_ANONYMOUS +#define MAP_ANONYMOUS MAP_ANON +#endif + +/* prototypes: */ +static void procsync(void); + +/* + * Take a piece of the preallocated virtual space to fit "size" bytes. + * (best fit). Size must be rounded up to an integer number of pages size. 
+ * Possibly split the fragment to keep track of remaining space. + * Returns the allocated fragment (best_fragmnt) and the corresponding address. + * External caller must ensure mutex before the call! + */ +static void *alloc_fragment(uintptr_t size, int block_index, intptr_t align, + ishm_fragment_t **best_fragmnt) +{ + ishm_fragment_t *fragmnt; + *best_fragmnt = NULL; + ishm_fragment_t *rem_fragmnt; + uintptr_t border;/* possible start of new fragment (next alignement) */ + intptr_t left; /* room remaining after, if the segment is allocated */ + uintptr_t remainder = ODP_CONFIG_ISHM_VA_PREALLOC_SZ; + + /* + * search for the best bit, i.e. search for the unallocated fragment + * would give less remainder if the new fragment was allocated within + * it: + */ + for (fragmnt = ishm_ftbl->used_fragmnts; + fragmnt; fragmnt = fragmnt->next) { + /* skip allocated segment: */ + if (fragmnt->block_index >= 0) + continue; + /* skip too short segment: */ + border = ((uintptr_t)fragmnt->start + align - 1) & (-align); + left = + ((uintptr_t)fragmnt->start + fragmnt->len) - (border + size); + if (left < 0) + continue; + /* remember best fit: */ + if ((uintptr_t)left < remainder) { + remainder = left; /* best, so far */ + *best_fragmnt = fragmnt; + } + } + + if (!(*best_fragmnt)) { + ODP_ERR("unable to get virtual address for shmem block!\n."); + return NULL; + } + + (*best_fragmnt)->block_index = block_index; + border = ((uintptr_t)(*best_fragmnt)->start + align - 1) & (-align); + + /* + * if there is room between previous fragment and new one, (due to + * alignement requirement) then fragment (split) the space between + * the end of the previous fragment and the beginning of the new one: + */ + if (border - (uintptr_t)(*best_fragmnt)->start > 0) { + /* frangment space, i.e. take a new fragment descriptor... */ + rem_fragmnt = ishm_ftbl->unused_fragmnts; + if (!rem_fragmnt) { + ODP_ERR("unable to get shmem fragment descriptor!\n."); + return NULL; + } + ishm_ftbl->unused_fragmnts = rem_fragmnt->next; + + /* and link it between best_fragmnt->prev and best_fragmnt */ + if ((*best_fragmnt)->prev) + (*best_fragmnt)->prev->next = rem_fragmnt; + else + ishm_ftbl->used_fragmnts = rem_fragmnt; + rem_fragmnt->prev = (*best_fragmnt)->prev; + (*best_fragmnt)->prev = rem_fragmnt; + rem_fragmnt->next = (*best_fragmnt); + + /* update length: rem_fragmnt getting space before border */ + rem_fragmnt->block_index = -1; + rem_fragmnt->start = (*best_fragmnt)->start; + rem_fragmnt->len = border - (uintptr_t)(*best_fragmnt)->start; + (*best_fragmnt)->start = + (void *)((uintptr_t)rem_fragmnt->start + rem_fragmnt->len); + (*best_fragmnt)->len -= rem_fragmnt->len; + } + + /* if this was a perfect fit, i.e. no free space follows, we are done */ + if (remainder == 0) + return (*best_fragmnt)->start; + + /* otherwise, frangment space, i.e. take a new fragment descriptor... */ + rem_fragmnt = ishm_ftbl->unused_fragmnts; + if (!rem_fragmnt) { + ODP_ERR("unable to get shmem fragment descriptor!\n."); + return (*best_fragmnt)->start; + } + ishm_ftbl->unused_fragmnts = rem_fragmnt->next; + + /* ... double link it... */ + rem_fragmnt->next = (*best_fragmnt)->next; + rem_fragmnt->prev = (*best_fragmnt); + if ((*best_fragmnt)->next) + (*best_fragmnt)->next->prev = rem_fragmnt; + (*best_fragmnt)->next = rem_fragmnt; + + /* ... 
and keep track of the remainder */ + (*best_fragmnt)->len = size; + rem_fragmnt->len = remainder; + rem_fragmnt->start = (void *)((char *)(*best_fragmnt)->start + size); + rem_fragmnt->block_index = -1; + + return (*best_fragmnt)->start; +} + +/* + * Free a portion of virtual space. + * Possibly defragment, if the freed fragment is adjacent to another + * free virtual fragment. + * External caller must ensure mutex before the call! + */ +static void free_fragment(ishm_fragment_t *fragmnt) +{ + ishm_fragment_t *prev_f; + ishm_fragment_t *next_f; + + /* sanity check */ + if (!fragmnt) + return; + + prev_f = fragmnt->prev; + next_f = fragmnt->next; + + /* free the fragment */ + fragmnt->block_index = -1; + + /* check if the previous fragment is also free: if so, defragment */ + if (prev_f && (prev_f->block_index < 0)) { + fragmnt->start = prev_f->start; + fragmnt->len += prev_f->len; + if (prev_f->prev) { + prev_f->prev->next = fragmnt; + } else { + if (ishm_ftbl->used_fragmnts == prev_f) + ishm_ftbl->used_fragmnts = fragmnt; + else + ODP_ERR("corrupted fragment list!.\n"); + } + fragmnt->prev = prev_f->prev; + + /* put removed fragment in free list */ + prev_f->prev = NULL; + prev_f->next = ishm_ftbl->unused_fragmnts; + ishm_ftbl->unused_fragmnts = prev_f; + } + + /* check if the next fragment is also free: if so, defragment */ + if (next_f && (next_f->block_index < 0)) { + fragmnt->len += next_f->len; + if (next_f->next) + next_f->next->prev = fragmnt; + fragmnt->next = next_f->next; + + /* put removed fragment in free list */ + next_f->prev = NULL; + next_f->next = ishm_ftbl->unused_fragmnts; + ishm_ftbl->unused_fragmnts = next_f; + } +} + +/* + * Create file with size len. returns -1 on error + * Creates a file to /tmp/odp-<pid>-<sequence_or_name> (for normal pages) + * or /mnt/huge/odp-<pid>-<sequence_or_name> (for huge pages) + * Return the new file descriptor, or -1 on error. + */ +static int create_file(int block_index, int huge, uint64_t len) +{ + char *name; + int fd; + ishm_block_t *new_block; /* entry in the main block table */ + char seq_string[ISHM_FILENAME_MAXLEN]; /* used to construct filename*/ + char filename[ISHM_FILENAME_MAXLEN];/* filename in /tmp/ or /mnt/huge */ + int oflag = O_RDWR | O_CREAT | O_TRUNC; /* flags for open */ + + new_block = &ishm_tbl->block[block_index]; + name = new_block->name; + + /* create the filename: */ + snprintf(seq_string, ISHM_FILENAME_MAXLEN, "%08" PRIu64, + ishm_tbl->dev_seq++); + + /* huge dir must be known to create files there!: */ + if (huge && !odp_global_data.hugepage_info.default_huge_page_dir) + return -1; + + if (huge) + snprintf(filename, ISHM_FILENAME_MAXLEN, + ISHM_FILENAME_FORMAT, + odp_global_data.hugepage_info.default_huge_page_dir, + odp_global_data.main_pid, + (name && name[0]) ? name : seq_string); + else + snprintf(filename, ISHM_FILENAME_MAXLEN, + ISHM_FILENAME_FORMAT, + ISHM_FILENAME_NORMAL_PAGE_DIR, + odp_global_data.main_pid, + (name && name[0]) ? name : seq_string); + + fd = open(filename, oflag, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); + if (fd < 0) { + ODP_ERR("open failed for %s: %s.\n", + filename, strerror(errno)); + return -1; + } + + if (ftruncate(fd, len) == -1) { + ODP_ERR("ftruncate failed: fd=%d, err=%s.\n", + fd, strerror(errno)); + close(fd); + return -1; + } + + strncpy(new_block->filename, filename, ISHM_FILENAME_MAXLEN - 1); + + return fd; +} + +/* + * performs the mapping, possibly allocating a fragment of the pre-reserved + * VA space if the _ODP_ISHM_SINGLE_VA flag was given. 
+ * Sets fd, and returns the mapping address. + * This funstion will also set the _ODP_ISHM_SINGLE_VA flag if the alignment + * requires it + * Mutex must be assured by the caller. + */ +static void *do_map(int block_index, uint64_t len, uint32_t align, + uint32_t flags, int huge, int *fd) +{ + ishm_block_t *new_block; /* entry in the main block table */ + void *addr = NULL; + void *mapped_addr; + ishm_fragment_t *fragment = NULL; + + new_block = &ishm_tbl->block[block_index]; + + /* + * Creates a file to /tmp/odp-<pid>-<sequence> (for normal pages) + * or /mnt/huge/odp-<pid>-<sequence> (for huge pages) + * unless a fd was already given + */ + if (*fd < 0) { + *fd = create_file(block_index, huge, len); + if (*fd < 0) + return NULL; + } else { + new_block->filename[0] = 0; + } + + /* allocate an address range in the prebooked VA area if needed */ + if (flags & _ODP_ISHM_SINGLE_VA) { + addr = alloc_fragment(len, block_index, align, &fragment); + if (!addr) { + ODP_ERR("alloc_fragment failed.\n"); + if (new_block->filename[0]) { + close(*fd); + *fd = -1; + unlink(new_block->filename); + } + return NULL; + } + ishm_tbl->block[block_index].fragment = fragment; + } + + /* try to mmap: */ + mapped_addr = _odp_ishmphy_map(*fd, addr, len, flags); + if (mapped_addr == NULL) { + if (flags & _ODP_ISHM_SINGLE_VA) + free_fragment(fragment); + if (new_block->filename[0]) { + close(*fd); + *fd = -1; + unlink(new_block->filename); + } + return NULL; + } + + new_block->huge = huge; + + return mapped_addr; +} + +/* + * Performs an extra mapping (for a process trying to see an existing block + * i.e. performing a lookup). + * Mutex must be assured by the caller. + */ +static void *do_remap(int block_index, int fd) +{ + void *mapped_addr; + ishm_fragment_t *fragment; + uint64_t len; + uint32_t flags; + + len = ishm_tbl->block[block_index].len; + flags = ishm_tbl->block[block_index].flags; + + if (flags & _ODP_ISHM_SINGLE_VA) { + fragment = ishm_tbl->block[block_index].fragment; + if (!fragment) { + ODP_ERR("invalid fragment failure.\n"); + return NULL; + } + + /* try to mmap: */ + mapped_addr = _odp_ishmphy_map(fd, fragment->start, len, flags); + if (mapped_addr == NULL) + return NULL; + return mapped_addr; + } + + /* try to mmap: */ + mapped_addr = _odp_ishmphy_map(fd, NULL, len, flags); + if (mapped_addr == NULL) + return NULL; + + return mapped_addr; +} + +/* + * Performs unmapping, possibly freeing a prereserved VA space fragment, + * if the _ODP_ISHM_SINGLE_VA flag was set at alloc time + * Mutex must be assured by the caller. + */ +static int do_unmap(void *start, uint64_t size, uint32_t flags, + int block_index) +{ + int ret; + + if (start) + ret = _odp_ishmphy_unmap(start, size, flags); + else + ret = 0; + + if ((block_index >= 0) && (flags & _ODP_ISHM_SINGLE_VA)) { + /* mark reserved address space as free */ + free_fragment(ishm_tbl->block[block_index].fragment); + } + + return ret; +} + +/* + * Search for a given used and allocated block name. + * (search is performed in the global ishm table) + * Returns the index of the found block (if any) or -1 if none. + * Mutex must be assured by the caller. 
+ */ +static int find_block_by_name(const char *name) +{ + int i; + + if (name == NULL || name[0] == 0) + return -1; + + for (i = 0; i < ISHM_MAX_NB_BLOCKS; i++) { + if ((ishm_tbl->block[i].len) && + (strcmp(name, ishm_tbl->block[i].name) == 0)) + return i; + } + + return -1; +} + +/* + * Search for a block by address (only works when flag _ODP_ISHM_SINGLE_VA + * was set at reserve() time, or if the block is already known by this + * process). + * Search is performed in the process table and in the global ishm table. + * The provided address does not have to be at start: any address + * within the fragment is OK. + * Returns the index to the found block (if any) or -1 if none. + * Mutex must be assured by the caller. + */ +static int find_block_by_address(void *addr) +{ + int block_index; + int i; + ishm_fragment_t *fragmnt; + + /* + * first check if there is already a process known block for this + * address + */ + for (i = 0; i < ishm_proctable->nb_entries; i++) { + block_index = ishm_proctable->entry[i].block_index; + if ((addr > ishm_proctable->entry[i].start) && + ((char *)addr < ((char *)ishm_proctable->entry[i].start + + ishm_tbl->block[block_index].len))) + return block_index; + } + + /* + * then check if there is a existing single VA block known by some other + * process and containing the given address + */ + for (i = 0; i < ISHM_MAX_NB_BLOCKS; i++) { + if ((!ishm_tbl->block[i].len) || + (!(ishm_tbl->block[i].flags & _ODP_ISHM_SINGLE_VA))) + continue; + fragmnt = ishm_tbl->block[i].fragment; + if (!fragmnt) { + ODP_ERR("find_fragment: invalid NULL fragment\n"); + return -1; + } + if ((addr >= fragmnt->start) && + ((char *)addr < ((char *)fragmnt->start + fragmnt->len))) + return i; + } + + /* address does not belong to any accessible block: */ + return -1; +} + +/* + * Search a given ishm block in the process local table. Return its index + * in the process table or -1 if not found (meaning that the ishm table + * block index was not referenced in the process local table, i.e. the + * block is known by some other process, but not by the current process). + * Caller must assure mutex. + */ +static int procfind_block(int block_index) +{ + int i; + + for (i = 0; i < ishm_proctable->nb_entries; i++) { + if (ishm_proctable->entry[i].block_index == block_index) + return i; + } + return -1; +} + +/* + * Release the physical memory mapping for blocks which have been freed + * by other processes. Caller must ensure mutex. + * Mutex must be assured by the caller. + */ +static void procsync(void) +{ + int i = 0; + int last; + ishm_block_t *block; + + last = ishm_proctable->nb_entries; + while (i < last) { + /* if the procecess sequence number doesn't match the main + * table seq number, this entry is obsolete + */ + block = &ishm_tbl->block[ishm_proctable->entry[i].block_index]; + if (ishm_proctable->entry[i].seq != block->seq) { + /* obsolete entry: free memory and remove proc entry */ + close(ishm_proctable->entry[i].fd); + _odp_ishmphy_unmap(ishm_proctable->entry[i].start, + ishm_proctable->entry[i].len, + ishm_proctable->entry[i].flags); + ishm_proctable->entry[i] = + ishm_proctable->entry[--last]; + } else { + i++; + } + } + ishm_proctable->nb_entries = last; +} + +/* + * Allocate and map internal shared memory, or other objects: + * If a name is given, check that this name is not already in use. + * If ok, allocate a new shared memory block and map the + * provided fd in it (if fd >=0 was given). 
+ * If no fd is provided, a shared memory file desc named + * /tmp/odp-<pid>-ishm-<name_or_sequence> is created and mapped. + * (the name is different for huge page file as they must be on hugepagefs) + * The function returns the index of the newly created block in the + * main block table (>=0) or -1 on error. + */ +int _odp_ishm_reserve(const char *name, uint64_t size, int fd, + uint32_t align, uint32_t flags, uint32_t user_flags) +{ + int new_index; /* index in the main block table*/ + ishm_block_t *new_block; /* entry in the main block table*/ + uint64_t page_sz; /* normal page size. usually 4K*/ + uint64_t alloc_size; /* includes extra for alignement*/ + uint64_t page_hp_size; /* huge page size */ + uint64_t alloc_hp_size; /* includes extra for alignement*/ + uint32_t hp_align; + uint64_t len; /* mapped length */ + void *addr = NULL; /* mapping address */ + int new_proc_entry; + + page_sz = odp_sys_page_size(); + + odp_spinlock_lock(&ishm_tbl->lock); + + /* update this process view... */ + procsync(); + + /* roundup to page size */ + alloc_size = (size + (page_sz - 1)) & (-page_sz); + + page_hp_size = odp_sys_huge_page_size(); + /* roundup to page size */ + alloc_hp_size = (size + (page_hp_size - 1)) & (-page_hp_size); + + /* check if name already exists */ + if (name && (find_block_by_name(name) >= 0)) { + /* Found a block with the same name */ + odp_spinlock_unlock(&ishm_tbl->lock); + ODP_ERR("name "%s" already used.\n", name); + return -1; + } + + /* grab a new entry: */ + for (new_index = 0; new_index < ISHM_MAX_NB_BLOCKS; new_index++) { + if (ishm_tbl->block[new_index].len == 0) { + /* Found free block */ + break; + } + } + + /* check if we have reached the maximum number of allocation: */ + if (new_index >= ISHM_MAX_NB_BLOCKS) { + odp_spinlock_unlock(&ishm_tbl->lock); + ODP_ERR("ISHM_MAX_NB_BLOCKS limit reached!\n"); + return -1; + } + + new_block = &ishm_tbl->block[new_index]; + + /* save block name (if any given): */ + if (name) + strncpy(new_block->name, name, ISHM_NAME_MAXLEN - 1); + else + new_block->name[0] = 0; + + /* Try first huge pages when possible and needed: */ + if (page_hp_size && (alloc_size > page_sz)) { + /* at least, alignment in VA should match page size, but user + * can request more: If the user requirement exceeds the page + * size then we have to make sure the block will be mapped at + * the same address every where, otherwise alignment may be + * be wrong for some process */ + hp_align = align; + if (hp_align < odp_sys_huge_page_size()) + hp_align = odp_sys_huge_page_size(); + else + flags |= _ODP_ISHM_SINGLE_VA; + len = alloc_hp_size; + addr = do_map(new_index, len, hp_align, flags, 1, &fd); + + if (addr == NULL) + ODP_DBG("No huge pages, fall back to normal pages, " + "check: /proc/sys/vm/nr_hugepages.\n"); + else + new_block->huge = 1; + } + + /* try normal pages if huge pages failed */ + if (addr == NULL) { + /* at least, alignment in VA should match page size, but user + * can request more: If the user requirement exceeds the page + * size then we have to make sure the block will be mapped at + * the same address every where, otherwise alignment may be + * be wrong for some process */ + if (align < odp_sys_page_size()) + align = odp_sys_page_size(); + else + flags |= _ODP_ISHM_SINGLE_VA; + + len = alloc_size; + addr = do_map(new_index, len, align, flags, 0, &fd); + new_block->huge = 0; + } + + /* if neither huge pages or normal pages works, we cannot proceed: */ + if ((addr == NULL) || (len == 0)) { + if ((new_block->filename[0]) && (fd >= 0)) + 
close(fd); + odp_spinlock_unlock(&ishm_tbl->lock); + ODP_ERR("_ishm_reserve failed.\n"); + return -1; + } + + /* remember block data and increment block seq number to mark change */ + new_block->len = len; + new_block->user_len = size; + new_block->flags = flags; + new_block->user_flags = user_flags; + new_block->seq++; + new_block->refcnt = 1; + new_block->main_odpthread = odp_thread_id(); + new_block->start = addr; /* only for SINGLE_VA*/ + + /* the allocation succeeded: update the process local view */ + new_proc_entry = ishm_proctable->nb_entries++; + ishm_proctable->entry[new_proc_entry].block_index = new_index; + ishm_proctable->entry[new_proc_entry].flags = flags; + ishm_proctable->entry[new_proc_entry].seq = new_block->seq; + ishm_proctable->entry[new_proc_entry].start = addr; + ishm_proctable->entry[new_proc_entry].len = len; + ishm_proctable->entry[new_proc_entry].fd = fd; + + /* register the file descriptor to the file descriptor server. */ + _odp_fdserver_register_fd(FD_SRV_CTX_ISHM, new_index, fd); + + odp_spinlock_unlock(&ishm_tbl->lock); + return new_index; +} + +/* + * Free and unmap internal shared memory: + * The file descriptor is closed and the .../odp-* file deleted, + * unless fd was externally provided at reserve() time. + * return 0 if OK, and -1 on error. + * Mutex must be assured by the caller. + */ +static int block_free(int block_index) +{ + int proc_index; + ishm_block_t *block; /* entry in the main block table*/ + int last; + + if ((block_index < 0) || + (block_index >= ISHM_MAX_NB_BLOCKS) || + (ishm_tbl->block[block_index].len == 0)) { + ODP_ERR("Request to free an invalid block\n"); + return -1; + } + + block = &ishm_tbl->block[block_index]; + + proc_index = procfind_block(block_index); + if (proc_index >= 0) { + /* close the fd, unless if it was externaly provided */ + if ((block->filename[0] != 0) || + (odp_thread_id() != block->main_odpthread)) + close(ishm_proctable->entry[proc_index].fd); + + /* remove the mapping and possible fragment */ + do_unmap(ishm_proctable->entry[proc_index].start, + block->len, + ishm_proctable->entry[proc_index].flags, + block_index); + + /* remove entry from process local table: */ + last = ishm_proctable->nb_entries - 1; + ishm_proctable->entry[proc_index] = + ishm_proctable->entry[last]; + ishm_proctable->nb_entries = last; + } else { + /* just possibly free the fragment as no mapping exist here: */ + do_unmap(NULL, 0, block->flags, block_index); + } + + /* remove the .../odp-* file, unless fd was external: */ + if (block->filename[0] != 0) + unlink(block->filename); + + /* deregister the file descriptor from the file descriptor server. */ + _odp_fdserver_deregister_fd(FD_SRV_CTX_ISHM, block_index); + + /* mark the block as free in the main block table: */ + block->len = 0; + + /* mark the change so other processes see this entry as obsolete: */ + block->seq++; + + return 0; +} + +/* + * Free and unmap internal shared memory, intentified by its block number: + * return -1 on error. 0 if OK. + */ +int _odp_ishm_free_by_index(int block_index) +{ + int ret; + + odp_spinlock_lock(&ishm_tbl->lock); + procsync(); + + ret = block_free(block_index); + odp_spinlock_unlock(&ishm_tbl->lock); + return ret; +} + +/* + * free and unmap internal shared memory, intentified by its block name: + * return -1 on error. 0 if OK. 
+ */ +int _odp_ishm_free_by_name(const char *name) +{ + int block_index; + int ret; + + odp_spinlock_lock(&ishm_tbl->lock); + procsync(); + + /* search the block in main ishm table */ + block_index = find_block_by_name(name); + if (block_index < 0) { + ODP_ERR("Request to free an non existing block..." + " (double free?)\n"); + odp_spinlock_unlock(&ishm_tbl->lock); + return -1; + } + + ret = block_free(block_index); + odp_spinlock_unlock(&ishm_tbl->lock); + return ret; +} + +/* + * Free and unmap internal shared memory identified by address: + * return -1 on error. 0 if OK. + */ +int _odp_ishm_free_by_address(void *addr) +{ + int block_index; + int ret; + + odp_spinlock_lock(&ishm_tbl->lock); + procsync(); + + /* search the block in main ishm table */ + block_index = find_block_by_address(addr); + if (block_index < 0) { + ODP_ERR("Request to free an non existing block..." + " (double free?)\n"); + odp_spinlock_unlock(&ishm_tbl->lock); + return -1; + } + + ret = block_free(block_index); + + odp_spinlock_unlock(&ishm_tbl->lock); + return ret; +} + +/* + * Lookup for an ishm shared memory, identified by its block index + * in the main ishm block table. + * Map this ishm area in the process VA (if not already present). + * Returns the block user address or NULL on error. + * Mutex must be assured by the caller. + */ +static void *block_lookup(int block_index) +{ + int proc_index; + int fd = -1; + ishm_block_t *block; + void *mapped_addr; + int new_entry; + + if ((block_index < 0) || + (block_index >= ISHM_MAX_NB_BLOCKS) || + (ishm_tbl->block[block_index].len == 0)) { + ODP_ERR("Request to lookup an invalid block\n"); + return NULL; + } + + /* search it in process table: if there, this process knows it already*/ + proc_index = procfind_block(block_index); + if (proc_index >= 0) + return ishm_proctable->entry[proc_index].start; + + /* this ishm is not known by this process, yet: we create the mapping.*/ + fd = _odp_fdserver_lookup_fd(FD_SRV_CTX_ISHM, block_index); + if (fd < 0) { + ODP_ERR("Could not find ishm file descriptor (BUG!)\n"); + return NULL; + } + + /* perform the mapping */ + block = &ishm_tbl->block[block_index]; + + mapped_addr = do_remap(block_index, fd); + if (mapped_addr == NULL) { + ODP_ERR(" lookup: Could not map existing shared memory!\n"); + return NULL; + } + + /* the mapping succeeded: update the process local view */ + new_entry = ishm_proctable->nb_entries++; + ishm_proctable->entry[new_entry].block_index = block_index; + ishm_proctable->entry[new_entry].flags = block->flags; + ishm_proctable->entry[new_entry].seq = block->seq; + ishm_proctable->entry[new_entry].start = mapped_addr; + ishm_proctable->entry[new_entry].len = block->len; + ishm_proctable->entry[new_entry].fd = fd; + block->refcnt++; + + return mapped_addr; +} + +/* + * Lookup for an ishm shared memory, identified by its block_index. + * Maps this ishmem area in the process VA (if not already present). + * Returns the block user address, or NULL if the index + * does not match any known ishm blocks. + */ +void *_odp_ishm_lookup_by_index(int block_index) +{ + void *ret; + + odp_spinlock_lock(&ishm_tbl->lock); + procsync(); + + ret = block_lookup(block_index); + odp_spinlock_unlock(&ishm_tbl->lock); + return ret; +} + +/* + * Lookup for an ishm shared memory, identified by its block name. + * Map this ishm area in the process VA (if not already present). + * Return the block index, or -1 if the index + * does not match any known ishm blocks. 
+ */ +int _odp_ishm_lookup_by_name(const char *name) +{ + int block_index; + + odp_spinlock_lock(&ishm_tbl->lock); + procsync(); + + /* search the block in main ishm table: return -1 if not found: */ + block_index = find_block_by_name(name); + if ((block_index < 0) || (!block_lookup(block_index))) { + odp_spinlock_unlock(&ishm_tbl->lock); + return -1; + } + + odp_spinlock_unlock(&ishm_tbl->lock); + return block_index; +} + +/* + * Lookup for an ishm shared memory block, identified by its VA address. + * This works only if the block has already been looked-up (mapped) by the + * current process or it it was created with the _ODP_ISHM_SINGLE_VA flag. + * Map this ishm area in the process VA (if not already present). + * Return the block index, or -1 if the address + * does not match any known ishm blocks. + */ +int _odp_ishm_lookup_by_address(void *addr) +{ + int block_index; + + odp_spinlock_lock(&ishm_tbl->lock); + procsync(); + + /* search the block in main ishm table: return -1 if not found: */ + block_index = find_block_by_address(addr); + if ((block_index < 0) || (!block_lookup(block_index))) { + odp_spinlock_unlock(&ishm_tbl->lock); + return -1; + } + + odp_spinlock_unlock(&ishm_tbl->lock); + return block_index; +} + +/* + * Returns the VA address of a given block (which has to be known in the current + * process). Returns NULL if the block is unknown. + */ +void *_odp_ishm_address(int block_index) +{ + int proc_index; + void *addr; + + odp_spinlock_lock(&ishm_tbl->lock); + procsync(); + + if ((block_index < 0) || + (block_index >= ISHM_MAX_NB_BLOCKS) || + (ishm_tbl->block[block_index].len == 0)) { + ODP_ERR("Request for address on an invalid block\n"); + odp_spinlock_unlock(&ishm_tbl->lock); + return NULL; + } + + proc_index = procfind_block(block_index); + if (proc_index < 0) { + odp_spinlock_unlock(&ishm_tbl->lock); + return NULL; + } + + addr = ishm_proctable->entry[proc_index].start; + odp_spinlock_unlock(&ishm_tbl->lock); + return addr; +} + +int _odp_ishm_info(int block_index, _odp_ishm_info_t *info) +{ + int proc_index; + + odp_spinlock_lock(&ishm_tbl->lock); + procsync(); + + if ((block_index < 0) || + (block_index >= ISHM_MAX_NB_BLOCKS) || + (ishm_tbl->block[block_index].len == 0)) { + odp_spinlock_unlock(&ishm_tbl->lock); + ODP_ERR("Request for info on an invalid block\n"); + return -1; + } + + /* search it in process table: if not there, need to map*/ + proc_index = procfind_block(block_index); + if (proc_index < 0) { + odp_spinlock_unlock(&ishm_tbl->lock); + return -1; + } + + info->name = ishm_tbl->block[block_index].name; + info->addr = ishm_proctable->entry[proc_index].start; + info->size = ishm_tbl->block[block_index].user_len; + info->page_size = ishm_tbl->block[block_index].huge ? 
+ odp_sys_huge_page_size() : odp_sys_page_size(); + info->flags = ishm_tbl->block[block_index].flags; + info->user_flags = ishm_tbl->block[block_index].user_flags; + + odp_spinlock_unlock(&ishm_tbl->lock); + return 0; +} + +int _odp_ishm_init_global(void) +{ + void *addr; + void *spce_addr; + int i; + + if (!odp_global_data.hugepage_info.default_huge_page_dir) + ODP_DBG("NOTE: No support for huge pages\n"); + else + ODP_DBG("Huge pages mount point is: %s\n", + odp_global_data.hugepage_info.default_huge_page_dir); + + /* allocate space for the internal shared mem block table: */ + addr = mmap(NULL, sizeof(ishm_table_t), + PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0); + if (addr == MAP_FAILED) { + ODP_ERR("unable to mmap the main block table\n."); + goto init_glob_err1; + } + ishm_tbl = addr; + memset(ishm_tbl, 0, sizeof(ishm_table_t)); + ishm_tbl->dev_seq = 0; + odp_spinlock_init(&ishm_tbl->lock); + + /* allocate space for the internal shared mem fragment table: */ + addr = mmap(NULL, sizeof(ishm_ftable_t), + PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0); + if (addr == MAP_FAILED) { + ODP_ERR("unable to mmap the main fragment table\n."); + goto init_glob_err2; + } + ishm_ftbl = addr; + memset(ishm_ftbl, 0, sizeof(ishm_ftable_t)); + + /* + *reserve the address space for _ODP_ISHM_SINGLE_VA reserved blocks, + * only address space! + */ + spce_addr = _odp_ishmphy_book_va(ODP_CONFIG_ISHM_VA_PREALLOC_SZ, + odp_sys_huge_page_size()); + if (!spce_addr) { + ODP_ERR("unable to reserve virtual space\n."); + goto init_glob_err3; + } + + /* use the first fragment descriptor to describe to whole VA space: */ + ishm_ftbl->fragment[0].block_index = -1; + ishm_ftbl->fragment[0].start = spce_addr; + ishm_ftbl->fragment[0].len = ODP_CONFIG_ISHM_VA_PREALLOC_SZ; + ishm_ftbl->fragment[0].prev = NULL; + ishm_ftbl->fragment[0].next = NULL; + ishm_ftbl->used_fragmnts = &ishm_ftbl->fragment[0]; + + /* and put all other fragment descriptors in the unused list: */ + for (i = 1; i < ISHM_NB_FRAGMNTS - 1; i++) { + ishm_ftbl->fragment[i].prev = NULL; + ishm_ftbl->fragment[i].next = &ishm_ftbl->fragment[i + 1]; + } + ishm_ftbl->fragment[ISHM_NB_FRAGMNTS - 1].prev = NULL; + ishm_ftbl->fragment[ISHM_NB_FRAGMNTS - 1].next = NULL; + ishm_ftbl->unused_fragmnts = &ishm_ftbl->fragment[1]; + + return 0; + +init_glob_err3: + if (munmap(ishm_ftbl, sizeof(ishm_ftable_t)) < 0) + ODP_ERR("unable to munmap main fragment table\n."); +init_glob_err2: + if (munmap(ishm_tbl, sizeof(ishm_table_t)) < 0) + ODP_ERR("unable to munmap main block table\n."); +init_glob_err1: + return -1; +} + +int _odp_ishm_init_local(void) +{ + int i; + int block_index; + + /* + * the ishm_process table is local to each linux process + * Check that no other linux threads (of same or ancestor processes) + * have already created the table, and create it if needed. + * We protect this with the general ishm lock to avoid + * init race condition of different running threads. 
+ */ + odp_spinlock_lock(&ishm_tbl->lock); + if (!ishm_proctable) { + ishm_proctable = malloc(sizeof(ishm_proctable_t)); + if (!ishm_proctable) { + odp_spinlock_unlock(&ishm_tbl->lock); + return -1; + } + memset(ishm_proctable, 0, sizeof(ishm_proctable_t)); + } + if (syscall(SYS_gettid) != getpid()) + ishm_proctable->thrd_refcnt++; /* new linux thread */ + else + ishm_proctable->thrd_refcnt = 1;/* new linux process */ + + /* + * if this ODP thread is actually a new linux process, (as opposed + * to a pthread), i.e, we just forked, then all shmem blocks + * of the parent process are mapped into this child by inheritance. + * (The process local table is inherited as well). We hence have to + * increase the process refcount for each of the inherited mappings: + */ + if (syscall(SYS_gettid) == getpid()) { + for (i = 0; i < ishm_proctable->nb_entries; i++) { + block_index = ishm_proctable->entry[i].block_index; + ishm_tbl->block[block_index].refcnt++; + } + } + + odp_spinlock_unlock(&ishm_tbl->lock); + return 0; +} + +int _odp_ishm_term_global(void) +{ + int ret = 0; + + /* free the fragment table */ + if (munmap(ishm_ftbl, sizeof(ishm_ftable_t)) < 0) { + ret = -1; + ODP_ERR("unable to munmap fragment table\n."); + } + /* free the block table */ + if (munmap(ishm_tbl, sizeof(ishm_table_t)) < 0) { + ret = -1; + ODP_ERR("unable to munmap main table\n."); + } + + /* free the reserved VA space */ + if (_odp_ishmphy_unbook_va()) + ret = -1; + + return ret; +} + +int _odp_ishm_term_local(void) +{ + int i; + int proc_table_refcnt = 0; + int block_index; + ishm_block_t *block; + + odp_spinlock_lock(&ishm_tbl->lock); + procsync(); + + /* + * The ishm_process table is local to each linux process + * Check that no other linux threads (of this linux process) + * still needs the table, and free it if so. + * We protect this with the general ishm lock to avoid + * term race condition of different running threads. + */ + proc_table_refcnt = --ishm_proctable->thrd_refcnt; + if (!proc_table_refcnt) { + /* + * this is the last thread of this process... + * All mappings for this process are about to be lost... + * Go through the table of visible blocks for this process, + * decreasing the refcnt of each visible blocks, and issuing + * warning for those no longer referenced by any process. + * Note that non-referenced blocks are nor freeed: this is + * deliberate as this would imply that the sementic of the + * freeing function would differ depending on whether we run + * with odp_thread as processes or pthreads. With this approach, + * the user should always free the blocks manually, which is + * more consistent + */ + for (i = 0; i < ishm_proctable->nb_entries; i++) { + block_index = ishm_proctable->entry[i].block_index; + block = &ishm_tbl->block[block_index]; + if ((--block->refcnt) <= 0) { + block->refcnt = 0; + ODP_DBG("Warning: block %d: name:%s " + "no longer referenced\n", + i, + ishm_tbl->block[i].name[0] ? + ishm_tbl->block[i].name : "<no name>"); + } + } + + free(ishm_proctable); + ishm_proctable = NULL; + } + + odp_spinlock_unlock(&ishm_tbl->lock); + return 0; +} diff --git a/platform/linux-generic/_ishmphy.c b/platform/linux-generic/_ishmphy.c new file mode 100644 index 0000000..2b2d100 --- /dev/null +++ b/platform/linux-generic/_ishmphy.c @@ -0,0 +1,185 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/* + * This file handles the lower end of the ishm memory allocator: + * It performs the physical mappings. 
+ */ +#include <odp_posix_extensions.h> +#include <odp_config_internal.h> +#include <odp_internal.h> +#include <odp/api/align.h> +#include <odp/api/system_info.h> +#include <odp/api/debug.h> +#include <odp_debug_internal.h> +#include <odp_align_internal.h> +#include <_ishm_internal.h> +#include <_ishmphy_internal.h> + +#include <stdlib.h> +#include <stdio.h> +#include <unistd.h> +#include <string.h> +#include <errno.h> +#include <sys/mman.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <sys/types.h> +#include <sys/wait.h> +#include <_ishmphy_internal.h> + +static void *common_va_address; +static uint64_t common_va_len; + +#ifndef MAP_ANONYMOUS +#define MAP_ANONYMOUS MAP_ANON +#endif + +/* Book some virtual address space + * This function is called at odp_init_global() time to pre-book some + * virtual address space inherited by all odpthreads (i.e. descendant + * processes and threads) and later used to guarantee the unicity the + * the mapping VA address when memory is reserver with the _ODP_ISHM_SINGLE_VA + * flag. + * returns the address of the mapping or NULL on error. + */ +void *_odp_ishmphy_book_va(uintptr_t len, intptr_t align) +{ + void *addr; + + addr = mmap(NULL, len + align, PROT_NONE, + MAP_SHARED | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0); + if (addr == MAP_FAILED) { + ODP_ERR("_ishmphy_book_va failure\n"); + return NULL; + } + + if (mprotect(addr, len, PROT_NONE)) + ODP_ERR("failure for protect\n"); + + ODP_DBG("VA Reserved: %p, len=%p\n", addr, len + align); + + common_va_address = addr; + common_va_len = len; + + /* return the nearest aligned address: */ + return (void *)(((uintptr_t)addr + align - 1) & (-align)); +} + +/* Un-book some virtual address space + * This function is called at odp_term_global() time to unbook + * the virtual address space booked by _ishmphy_book_va() + */ +int _odp_ishmphy_unbook_va(void) +{ + int ret; + + ret = munmap(common_va_address, common_va_len); + if (ret) + ODP_ERR("_unishmphy_book_va failure\n"); + return ret; +} + +/* + * do a mapping: + * Performs a mapping of the provided file descriptor to the process VA + * space. If the _ODP_ISHM_SINGLE_VA flag is set, 'start' is assumed to be + * the VA address where the mapping is to be done. + * If the flag is not set, a new VA address is taken. + * returns the address of the mapping or NULL on error. 
+ */ +void *_odp_ishmphy_map(int fd, void *start, uint64_t size, + int flags) +{ + void *mapped_addr; + int mmap_flags = 0; + + if (flags & _ODP_ISHM_SINGLE_VA) { + if (!start) { + ODP_ERR("failure: missing address\n"); + return NULL; + } + /* maps over fragment of reserved VA: */ + mapped_addr = mmap(start, size, PROT_READ | PROT_WRITE, + MAP_SHARED | MAP_FIXED | mmap_flags, fd, 0); + /* if mapping fails, re-block the space we tried to take + * as it seems a mapping failure still affect what was there??*/ + if (mapped_addr == MAP_FAILED) { + mmap_flags = MAP_SHARED | MAP_FIXED | + MAP_ANONYMOUS | MAP_NORESERVE; + mmap(start, size, PROT_NONE, mmap_flags, -1, 0); + mprotect(start, size, PROT_NONE); + } + } else { + /* just do a new mapping in the VA space: */ + mapped_addr = mmap(NULL, size, PROT_READ | PROT_WRITE, + MAP_SHARED | mmap_flags, fd, 0); + if ((mapped_addr >= common_va_address) && + ((char *)mapped_addr < + (char *)common_va_address + common_va_len)) { + ODP_ERR("VA SPACE OVERLAP!\n"); + } + } + + if (mapped_addr == MAP_FAILED) { + ODP_ERR("mmap failed:%s\n", strerror(errno)); + return NULL; + } + + /* if locking is requested, lock it...*/ + if (flags & _ODP_ISHM_LOCK) { + if (mlock(mapped_addr, size)) { + if (munmap(mapped_addr, size)) + ODP_ERR("munmap failed:%s\n", strerror(errno)); + ODP_ERR("mlock failed:%s\n", strerror(errno)); + return NULL; + } + } + return mapped_addr; +} + +/* free a mapping: + * If the _ODP_ISHM_SINGLE_VA flag was given at creation time the virtual + * address range must be returned to the preoallocated "pool". this is + * done by mapping non accessibly memory there (hence blocking the VA but + * releasing the physical memory). + * If the _ODP_ISHM_SINGLE_VA flag was not given, both physical memory and + * virtual address space are realeased by calling the normal munmap. + * return 0 on success or -1 on error. + */ +int _odp_ishmphy_unmap(void *start, uint64_t len, int flags) +{ + void *addr; + int ret; + int mmap_flgs; + + mmap_flgs = MAP_SHARED | MAP_FIXED | MAP_ANONYMOUS | MAP_NORESERVE; + + /* if locking was requested, unlock...*/ + if (flags & _ODP_ISHM_LOCK) + munlock(start, len); + + if (flags & _ODP_ISHM_SINGLE_VA) { + /* map unnaccessible memory overwrites previous mapping + * and free the physical memory, but guarantees to block + * the VA range from other mappings + */ + addr = mmap(start, len, PROT_NONE, mmap_flgs, -1, 0); + if (addr == MAP_FAILED) { + ODP_ERR("_ishmphy_free failure for ISHM_SINGLE_VA\n"); + return -1; + } + if (mprotect(start, len, PROT_NONE)) + ODP_ERR("_ishmphy_free failure for protect\n"); + return 0; + } + + /* just release the mapping */ + ret = munmap(start, len); + if (ret) + ODP_ERR("_ishmphy_free failure: %s\n", strerror(errno)); + return ret; +} diff --git a/platform/linux-generic/arch/arm/odp/api/cpu_arch.h b/platform/linux-generic/arch/arm/odp/api/cpu_arch.h deleted file mode 120000 index e86e132..0000000 --- a/platform/linux-generic/arch/arm/odp/api/cpu_arch.h +++ /dev/null @@ -1 +0,0 @@ -../../../default/odp/api/cpu_arch.h \ No newline at end of file diff --git a/platform/linux-generic/arch/arm/odp/api/cpu_arch.h b/platform/linux-generic/arch/arm/odp/api/cpu_arch.h new file mode 100644 index 0000000..22b1da2 --- /dev/null +++ b/platform/linux-generic/arch/arm/odp/api/cpu_arch.h @@ -0,0 +1,24 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. 
+ * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#ifndef ODP_PLAT_CPU_ARCH_H_ +#define ODP_PLAT_CPU_ARCH_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#define _ODP_CACHE_LINE_SIZE 64 + +static inline void odp_cpu_pause(void) +{ +} + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/platform/linux-generic/arch/arm/odp_cpu_arch.c b/platform/linux-generic/arch/arm/odp_cpu_arch.c deleted file mode 120000 index deebc47..0000000 --- a/platform/linux-generic/arch/arm/odp_cpu_arch.c +++ /dev/null @@ -1 +0,0 @@ -../default/odp_cpu_arch.c \ No newline at end of file diff --git a/platform/linux-generic/arch/arm/odp_cpu_arch.c b/platform/linux-generic/arch/arm/odp_cpu_arch.c new file mode 100644 index 0000000..2ac223e --- /dev/null +++ b/platform/linux-generic/arch/arm/odp_cpu_arch.c @@ -0,0 +1,48 @@ +/* Copyright (c) 2015, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#include <odp_posix_extensions.h> + +#include <stdlib.h> +#include <time.h> + +#include <odp/api/cpu.h> +#include <odp/api/hints.h> +#include <odp/api/system_info.h> +#include <odp_debug_internal.h> + +#define GIGA 1000000000 + +uint64_t odp_cpu_cycles(void) +{ + struct timespec time; + uint64_t sec, ns, hz, cycles; + int ret; + + ret = clock_gettime(CLOCK_MONOTONIC_RAW, &time); + + if (ret != 0) + ODP_ABORT("clock_gettime failed\n"); + + hz = odp_cpu_hz_max(); + sec = (uint64_t)time.tv_sec; + ns = (uint64_t)time.tv_nsec; + + cycles = sec * hz; + cycles += (ns * hz) / GIGA; + + return cycles; +} + +uint64_t odp_cpu_cycles_max(void) +{ + return UINT64_MAX; +} + +uint64_t odp_cpu_cycles_resolution(void) +{ + return 1; +} diff --git a/platform/linux-generic/arch/arm/odp_sysinfo_parse.c b/platform/linux-generic/arch/arm/odp_sysinfo_parse.c deleted file mode 120000 index 39962b8..0000000 --- a/platform/linux-generic/arch/arm/odp_sysinfo_parse.c +++ /dev/null @@ -1 +0,0 @@ -../default/odp_sysinfo_parse.c \ No newline at end of file diff --git a/platform/linux-generic/arch/arm/odp_sysinfo_parse.c b/platform/linux-generic/arch/arm/odp_sysinfo_parse.c new file mode 100644 index 0000000..53e2aae --- /dev/null +++ b/platform/linux-generic/arch/arm/odp_sysinfo_parse.c @@ -0,0 +1,27 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#include <odp_internal.h> +#include <odp_debug_internal.h> +#include <string.h> + +int cpuinfo_parser(FILE *file ODP_UNUSED, system_info_t *sysinfo) +{ + int i; + + ODP_DBG("Warning: use dummy values for freq and model string\n"); + for (i = 0; i < MAX_CPU_NUMBER; i++) { + sysinfo->cpu_hz_max[i] = 1400000000; + strcpy(sysinfo->model_str[i], "UNKNOWN"); + } + + return 0; +} + +uint64_t odp_cpu_hz_current(int id ODP_UNUSED) +{ + return 0; +} diff --git a/platform/linux-generic/arch/powerpc/odp_cpu_arch.c b/platform/linux-generic/arch/powerpc/odp_cpu_arch.c deleted file mode 120000 index deebc47..0000000 --- a/platform/linux-generic/arch/powerpc/odp_cpu_arch.c +++ /dev/null @@ -1 +0,0 @@ -../default/odp_cpu_arch.c \ No newline at end of file diff --git a/platform/linux-generic/arch/powerpc/odp_cpu_arch.c b/platform/linux-generic/arch/powerpc/odp_cpu_arch.c new file mode 100644 index 0000000..2ac223e --- /dev/null +++ b/platform/linux-generic/arch/powerpc/odp_cpu_arch.c @@ -0,0 +1,48 @@ +/* Copyright (c) 2015, Linaro Limited + * All rights reserved. 
+ * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#include <odp_posix_extensions.h> + +#include <stdlib.h> +#include <time.h> + +#include <odp/api/cpu.h> +#include <odp/api/hints.h> +#include <odp/api/system_info.h> +#include <odp_debug_internal.h> + +#define GIGA 1000000000 + +uint64_t odp_cpu_cycles(void) +{ + struct timespec time; + uint64_t sec, ns, hz, cycles; + int ret; + + ret = clock_gettime(CLOCK_MONOTONIC_RAW, &time); + + if (ret != 0) + ODP_ABORT("clock_gettime failed\n"); + + hz = odp_cpu_hz_max(); + sec = (uint64_t)time.tv_sec; + ns = (uint64_t)time.tv_nsec; + + cycles = sec * hz; + cycles += (ns * hz) / GIGA; + + return cycles; +} + +uint64_t odp_cpu_cycles_max(void) +{ + return UINT64_MAX; +} + +uint64_t odp_cpu_cycles_resolution(void) +{ + return 1; +} diff --git a/platform/linux-generic/include/_fdserver_internal.h b/platform/linux-generic/include/_fdserver_internal.h index 480ac02..22b2802 100644 --- a/platform/linux-generic/include/_fdserver_internal.h +++ b/platform/linux-generic/include/_fdserver_internal.h @@ -23,6 +23,7 @@ extern "C" { */ typedef enum fd_server_context { FD_SRV_CTX_NA, /* Not Applicable */ + FD_SRV_CTX_ISHM, FD_SRV_CTX_END, /* upper enum limit */ } fd_server_context_e;
diff --git a/platform/linux-generic/include/_ishm_internal.h b/platform/linux-generic/include/_ishm_internal.h new file mode 100644 index 0000000..7d27477 --- /dev/null +++ b/platform/linux-generic/include/_ishm_internal.h @@ -0,0 +1,45 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#ifndef ODP_ISHM_INTERNAL_H_ +#define ODP_ISHM_INTERNAL_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +/* flags available at ishm_reserve: */ +#define _ODP_ISHM_SINGLE_VA 1 +#define _ODP_ISHM_LOCK 2 + +/** + * Shared memory block info + */ +typedef struct _odp_ishm_info_t { + const char *name; /**< Block name */ + void *addr; /**< Block address */ + uint64_t size; /**< Block size in bytes */ + uint64_t page_size; /**< Memory page size */ + uint32_t flags; /**< _ODP_ISHM_* flags */ + uint32_t user_flags;/**< user specific flags */ +} _odp_ishm_info_t; + +int _odp_ishm_reserve(const char *name, uint64_t size, int fd, uint32_t align, + uint32_t flags, uint32_t user_flags); +int _odp_ishm_free_by_index(int block_index); +int _odp_ishm_free_by_name(const char *name); +int _odp_ishm_free_by_address(void *addr); +void *_odp_ishm_lookup_by_index(int block_index); +int _odp_ishm_lookup_by_name(const char *name); +int _odp_ishm_lookup_by_address(void *addr); +void *_odp_ishm_address(int block_index); +int _odp_ishm_info(int block_index, _odp_ishm_info_t *info); + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/platform/linux-generic/include/_ishmphy_internal.h b/platform/linux-generic/include/_ishmphy_internal.h new file mode 100644 index 0000000..4fe560f --- /dev/null +++ b/platform/linux-generic/include/_ishmphy_internal.h @@ -0,0 +1,25 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#ifndef _ISHMPHY_INTERNAL_H +#define _ISHMPHY_INTERNAL_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> + +void *_odp_ishmphy_book_va(uintptr_t len, intptr_t align); +int _odp_ishmphy_unbook_va(void); +void *_odp_ishmphy_map(int fd, void *start, uint64_t size, int flags); +int _odp_ishmphy_unmap(void *start, uint64_t len, int flags); + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/platform/linux-generic/include/ishmphy_internal.h b/platform/linux-generic/include/ishmphy_internal.h new file mode 100644 index 0000000..0bc4207 --- /dev/null +++ b/platform/linux-generic/include/ishmphy_internal.h @@ -0,0 +1,24 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#ifndef _ISHMPHY_INTERNAL_H_ +#define _ISHMPHY_INTERNAL_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +void *_ishmphy_book_va(uint64_t len); +int _ishmphy_unbook_va(void); +void *_ishmphy_map(int fd, void *start, uint64_t size, + int flags, int mmap_flags); +int _ishmphy_unmap(void *start, uint64_t len, int flags); + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/platform/linux-generic/include/odp_config_internal.h b/platform/linux-generic/include/odp_config_internal.h index 06550e6..e89a6a3 100644 --- a/platform/linux-generic/include/odp_config_internal.h +++ b/platform/linux-generic/include/odp_config_internal.h @@ -124,6 +124,16 @@ extern "C" { */ #define CONFIG_POOL_CACHE_SIZE 256
+/* + * Size of the virtual address space pre-reserver for ISHM + * + * This is just virtual space preallocation size, not memory allocation. + * This address space is used by ISHM to map things at a common address in + * all ODP threads (when the _ODP_ISHM_SINGLE_VA flag is used). + * In bytes. + */ +#define ODP_CONFIG_ISHM_VA_PREALLOC_SZ (536870912L) + #ifdef __cplusplus } #endif diff --git a/platform/linux-generic/include/odp_internal.h b/platform/linux-generic/include/odp_internal.h index 363dc6f..5698fb0 100644 --- a/platform/linux-generic/include/odp_internal.h +++ b/platform/linux-generic/include/odp_internal.h @@ -58,6 +58,7 @@ enum init_stage { TIME_INIT, SYSINFO_INIT, FDSERVER_INIT, + ISHM_INIT, SHM_INIT, THREAD_INIT, POOL_INIT, @@ -126,6 +127,11 @@ int _odp_int_name_tbl_term_global(void); int _odp_fdserver_init_global(void); int _odp_fdserver_term_global(void);
+int _odp_ishm_init_global(void); +int _odp_ishm_init_local(void); +int _odp_ishm_term_global(void); +int _odp_ishm_term_local(void); + int cpuinfo_parser(FILE *file, system_info_t *sysinfo); uint64_t odp_cpu_hz_current(int id);
diff --git a/platform/linux-generic/odp_init.c b/platform/linux-generic/odp_init.c index a078533..43d9e40 100644 --- a/platform/linux-generic/odp_init.c +++ b/platform/linux-generic/odp_init.c @@ -114,6 +114,12 @@ int odp_init_global(odp_instance_t *instance, } stage = FDSERVER_INIT;
+ if (_odp_ishm_init_global()) { + ODP_ERR("ODP ishm init failed.\n"); + goto init_failed; + } + stage = ISHM_INIT; + if (odp_shm_init_global()) { ODP_ERR("ODP shm init failed.\n"); goto init_failed; @@ -280,6 +286,13 @@ int _odp_term_global(enum init_stage stage) } /* Fall through */
+ case ISHM_INIT: + if (_odp_ishm_term_global()) { + ODP_ERR("ODP ishm term failed.\n"); + rc = -1; + } + /* Fall through */ + case FDSERVER_INIT: if (_odp_fdserver_term_global()) { ODP_ERR("ODP fdserver term failed.\n"); @@ -324,6 +337,12 @@ int odp_init_local(odp_instance_t instance, odp_thread_type_t thr_type) goto init_fail; }
+ if (_odp_ishm_init_local()) { + ODP_ERR("ODP ishm local init failed.\n"); + goto init_fail; + } + stage = ISHM_INIT; + if (odp_shm_init_local()) { ODP_ERR("ODP shm local init failed.\n"); goto init_fail; @@ -399,6 +418,13 @@ int _odp_term_local(enum init_stage stage) } /* Fall through */
+ case ISHM_INIT: + if (_odp_ishm_term_local()) { + ODP_ERR("ODP ishm local term failed.\n"); + rc = -1; + } + /* Fall through */ + default: break; }
commit ba203281cfd10b88a5d5b8f143ea34d14d373b58
Author: Maxim Uvarov <maxim.uvarov@linaro.org>
Date:   Fri Dec 16 17:27:35 2016 +0300
linux-gen: pktio ipc: fix clang build
clang is stricter about detecting variables that are set but never used, so it breaks the build. Also, the buffer header is almost always referenced by pointer, so its size should not impact performance.
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
Reviewed-and-tested-by: Bill Fischofer <bill.fischofer@linaro.org>
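The comment kept in the header below explains why an offset is stored at all: the two processes map the shared pool at different virtual addresses, so a raw pointer from one process is meaningless in the other. A small pure-C sketch of that idea (illustrative only, not part of the patch):

#include <stdint.h>

/* Sender side: publish the position of the packet data as an offset
 * from the shared pool base. */
static uint64_t data_to_offset(const uint8_t *data, const uint8_t *pool_base)
{
	return (uint64_t)(data - pool_base);
}

/* Receiver side: the same pool is mapped at a different address, so the
 * offset is resolved against the local mapping of that pool. */
static uint8_t *offset_to_data(uint64_t offset, uint8_t *local_pool_base)
{
	return local_pool_base + offset;
}
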
diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h index 903f0a7..4cc51d3 100644 --- a/platform/linux-generic/include/odp_buffer_internal.h +++ b/platform/linux-generic/include/odp_buffer_internal.h @@ -64,11 +64,10 @@ struct odp_buffer_hdr_t { struct { void *hdr; uint8_t *data; -#ifdef _ODP_PKTIO_IPC - /* ipc mapped process can not walk over pointers, - * offset has to be used */ + /* Used only if _ODP_PKTIO_IPC is set. + * ipc mapped process can not walk over pointers, + * offset has to be used */ uint64_t ipc_data_offset; -#endif uint32_t len; } seg[CONFIG_PACKET_MAX_SEGS];
diff --git a/platform/linux-generic/pktio/ipc.c b/platform/linux-generic/pktio/ipc.c index 5f26b56..c9df043 100644 --- a/platform/linux-generic/pktio/ipc.c +++ b/platform/linux-generic/pktio/ipc.c @@ -459,12 +459,7 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry, if (odp_unlikely(pool == ODP_POOL_INVALID)) ODP_ABORT("invalid pool");
-#ifdef _ODP_PKTIO_IPC data_pool_off = phdr->buf_hdr.seg[0].ipc_data_offset; -#else - /* compile all function code even if ipc disabled with config */ - data_pool_off = 0; -#endif
pkt = odp_packet_alloc(pool, phdr->frame_len); if (odp_unlikely(pkt == ODP_PACKET_INVALID)) { @@ -590,7 +585,6 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry, data_pool_off = (uint8_t *)pkt_hdr->buf_hdr.seg[0].data - (uint8_t *)odp_shm_addr(pool->shm);
-#ifdef _ODP_PKTIO_IPC /* compile all function code even if ipc disabled with config */ pkt_hdr->buf_hdr.seg[0].ipc_data_offset = data_pool_off; IPC_ODP_DBG("%d/%d send packet %llx, pool %llx," @@ -598,7 +592,6 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry, i, len, odp_packet_to_u64(pkt), odp_pool_to_u64(pool_hdl), pkt_hdr, pkt_hdr->buf_hdr.seg[0].ipc_data_offset); -#endif }
/* Put packets to ring to be processed by other process. */
commit 527ee67cb434e5e7c8015fa8c7d15f2ac25b1d20
Author: Maxim Uvarov <maxim.uvarov@linaro.org>
Date:   Wed Dec 14 22:57:57 2016 +0300
linux-gen: pktio ipc: tests: remove comment about master-slave
The implementation takes care of which process becomes the master and which the slave, so the comment is no longer needed.
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
Reviewed-and-tested-by: Bill Fischofer <bill.fischofer@linaro.org>
diff --git a/test/linux-generic/pktio_ipc/pktio_ipc1.c b/test/linux-generic/pktio_ipc/pktio_ipc1.c index 838b672..705c205 100644 --- a/test/linux-generic/pktio_ipc/pktio_ipc1.c +++ b/test/linux-generic/pktio_ipc/pktio_ipc1.c @@ -52,9 +52,6 @@ static int pktio_run_loop(odp_pool_t pool) start_cycle = odp_time_local(); current_cycle = start_cycle;
- /* slave process should always be run after master process to be - * able to create the same pktio. - */ for (;;) { if (run_time_sec) { cycle = odp_time_local(); diff --git a/test/linux-generic/pktio_ipc/pktio_ipc2.c b/test/linux-generic/pktio_ipc/pktio_ipc2.c index fb6f994..daf3841 100644 --- a/test/linux-generic/pktio_ipc/pktio_ipc2.c +++ b/test/linux-generic/pktio_ipc/pktio_ipc2.c @@ -49,9 +49,6 @@ static int ipc_second_process(int master_pid) wait = odp_time_local_from_ns(run_time_sec * ODP_TIME_SEC_IN_NS); start_cycle = odp_time_local();
- /* slave process should always be run after master process to be - * able to create the same pktio. - */ for (;;) { /* exit loop if time specified */ if (run_time_sec) {
commit 101e8188088b91e8d85e0fef0d6674dae05c306e
Author: Maxim Uvarov <maxim.uvarov@linaro.org>
Date:   Wed Dec 14 22:57:56 2016 +0300
linux-gen: pktio ipc: make it work again
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
Reviewed-and-tested-by: Bill Fischofer <bill.fischofer@linaro.org>
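There is no commit body, but the heart of the rework in the diff below is that a slave now opens the device as "ipc:<master_pid>:<name>" and imports the master's shared memory by pid. A rough sketch of that name handling (simplified from the patch; the helper name and buffer sizes are illustrative):

#include <stdio.h>
#include <stddef.h>

/* Split "ipc:<pid>:<name>" into the master pid and the plain
 * "ipc:<name>" device string, as the reworked _ipc_slave_start() does. */
static int parse_ipc_dev(const char *full, int *pid, char *dev, size_t len)
{
	char tail[64];

	if (sscanf(full, "ipc:%d:%63s", pid, tail) != 2)
		return -1; /* no pid in the name: this is the master side */

	snprintf(dev, len, "ipc:%s", tail);
	return 0;
}
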
diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h index 2064f7c..903f0a7 100644 --- a/platform/linux-generic/include/odp_buffer_internal.h +++ b/platform/linux-generic/include/odp_buffer_internal.h @@ -64,6 +64,11 @@ struct odp_buffer_hdr_t { struct { void *hdr; uint8_t *data; +#ifdef _ODP_PKTIO_IPC + /* ipc mapped process can not walk over pointers, + * offset has to be used */ + uint64_t ipc_data_offset; +#endif uint32_t len; } seg[CONFIG_PACKET_MAX_SEGS];
@@ -94,11 +99,6 @@ struct odp_buffer_hdr_t { uint32_t uarea_size; /* size of user area */ uint32_t segcount; /* segment count */ uint32_t segsize; /* segment size */ -#ifdef _ODP_PKTIO_IPC - /* ipc mapped process can not walk over pointers, - * offset has to be used */ - uint64_t ipc_addr_offset[ODP_CONFIG_PACKET_MAX_SEGS]; -#endif
/* Data or next header */ uint8_t data[0]; diff --git a/platform/linux-generic/include/odp_internal.h b/platform/linux-generic/include/odp_internal.h index 6063b0f..363dc6f 100644 --- a/platform/linux-generic/include/odp_internal.h +++ b/platform/linux-generic/include/odp_internal.h @@ -50,7 +50,6 @@ struct odp_global_data_s { odp_cpumask_t control_cpus; odp_cpumask_t worker_cpus; int num_cpus_installed; - int ipc_ns; };
enum init_stage { diff --git a/platform/linux-generic/include/odp_packet_io_internal.h b/platform/linux-generic/include/odp_packet_io_internal.h index bdf6316..2001c42 100644 --- a/platform/linux-generic/include/odp_packet_io_internal.h +++ b/platform/linux-generic/include/odp_packet_io_internal.h @@ -102,6 +102,8 @@ typedef struct { packet, 0 - not yet ready */ void *pinfo; odp_shm_t pinfo_shm; + odp_shm_t remote_pool_shm; /**< shm of remote pool get with + _ipc_map_remote_pool() */ } _ipc_pktio_t;
struct pktio_entry { diff --git a/platform/linux-generic/include/odp_packet_io_ipc_internal.h b/platform/linux-generic/include/odp_packet_io_ipc_internal.h index 851114d..7cd2948 100644 --- a/platform/linux-generic/include/odp_packet_io_ipc_internal.h +++ b/platform/linux-generic/include/odp_packet_io_ipc_internal.h @@ -26,22 +26,31 @@ */ struct pktio_info { struct { - /* number of buffer in remote pool */ - int shm_pool_bufs_num; - /* size of remote pool */ - size_t shm_pkt_pool_size; + /* number of buffer*/ + int num; /* size of packet/segment in remote pool */ - uint32_t shm_pkt_size; + uint32_t block_size; /* offset from shared memory block start - * to pool_mdata_addr (odp-linux pool specific) */ - size_t mdata_offset; + * to pool *base_addr in remote process. + * (odp-linux pool specific) */ + size_t base_addr_offset; char pool_name[ODP_POOL_NAME_LEN]; + /* 1 if master finished creation of all shared objects */ + int init_done; } master; struct { /* offset from shared memory block start - * to pool_mdata_addr in remote process. + * to pool *base_addr in remote process. * (odp-linux pool specific) */ - size_t mdata_offset; + size_t base_addr_offset; + void *base_addr; + uint32_t block_size; char pool_name[ODP_POOL_NAME_LEN]; + /* pid of the slave process written to shm and + * used by master to look up memory created by + * slave + */ + int pid; + int init_done; } slave; } ODP_PACKED; diff --git a/platform/linux-generic/odp_init.c b/platform/linux-generic/odp_init.c index fb85cc1..a078533 100644 --- a/platform/linux-generic/odp_init.c +++ b/platform/linux-generic/odp_init.c @@ -67,7 +67,7 @@ static int cleanup_files(const char *dirpath, int odp_pid)
int odp_init_global(odp_instance_t *instance, const odp_init_t *params, - const odp_platform_init_t *platform_params) + const odp_platform_init_t *platform_params ODP_UNUSED) { char *hpdir;
@@ -75,9 +75,6 @@ int odp_init_global(odp_instance_t *instance, odp_global_data.main_pid = getpid(); cleanup_files(_ODP_TMPDIR, odp_global_data.main_pid);
- if (platform_params) - odp_global_data.ipc_ns = platform_params->ipc_ns; - enum init_stage stage = NO_INIT; odp_global_data.log_fn = odp_override_log; odp_global_data.abort_fn = odp_override_abort; diff --git a/platform/linux-generic/pktio/ipc.c b/platform/linux-generic/pktio/ipc.c index 0e99c6e..5f26b56 100644 --- a/platform/linux-generic/pktio/ipc.c +++ b/platform/linux-generic/pktio/ipc.c @@ -3,150 +3,85 @@ * * SPDX-License-Identifier: BSD-3-Clause */ -#ifdef _ODP_PKTIO_IPC #include <odp_packet_io_ipc_internal.h> #include <odp_debug_internal.h> #include <odp_packet_io_internal.h> #include <odp/api/system_info.h> #include <odp_shm_internal.h> +#include <_ishm_internal.h>
#include <sys/mman.h> #include <sys/stat.h> #include <fcntl.h>
+#define IPC_ODP_DEBUG_PRINT 0 + +#define IPC_ODP_DBG(fmt, ...) \ + do { \ + if (IPC_ODP_DEBUG_PRINT == 1) \ + ODP_DBG(fmt, ##__VA_ARGS__);\ + } while (0) + /* MAC address for the "ipc" interface */ static const char pktio_ipc_mac[] = {0x12, 0x12, 0x12, 0x12, 0x12, 0x12};
-static void *_ipc_map_remote_pool(const char *name, size_t size); +static odp_shm_t _ipc_map_remote_pool(const char *name, int pid);
static const char *_ipc_odp_buffer_pool_shm_name(odp_pool_t pool_hdl) { - pool_entry_t *pool; - uint32_t pool_id; + pool_t *pool; odp_shm_t shm; odp_shm_info_t info;
- pool_id = pool_handle_to_index(pool_hdl); - pool = get_pool_entry(pool_id); - shm = pool->s.pool_shm; + pool = pool_entry_from_hdl(pool_hdl); + shm = pool->shm;
odp_shm_info(shm, &info);
return info.name; }
-/** -* Look up for shared memory object. -* -* @param name name of shm object -* -* @return 0 on success, otherwise non-zero -*/ -static int _ipc_shm_lookup(const char *name) -{ - int shm; - char shm_devname[SHM_DEVNAME_MAXLEN]; - - if (!odp_global_data.ipc_ns) - ODP_ABORT("ipc_ns not set\n"); - - snprintf(shm_devname, SHM_DEVNAME_MAXLEN, - SHM_DEVNAME_FORMAT, - odp_global_data.ipc_ns, name); - - shm = shm_open(shm_devname, O_RDWR, S_IRUSR | S_IWUSR); - if (shm == -1) { - if (errno == ENOENT) { - ODP_DBG("no file %s\n", shm_devname); - return -1; - } - ODP_ABORT("shm_open for %s err %s\n", - shm_devname, strerror(errno)); - } - close(shm); - return 0; -} - -static int _ipc_map_pktio_info(pktio_entry_t *pktio_entry, - const char *dev, - int *slave) -{ - struct pktio_info *pinfo; - char name[ODP_POOL_NAME_LEN + sizeof("_info")]; - uint32_t flags; - odp_shm_t shm; - - /* Create info about remote pktio */ - snprintf(name, sizeof(name), "%s_info", dev); - - flags = ODP_SHM_PROC | _ODP_SHM_O_EXCL; - - shm = odp_shm_reserve(name, sizeof(struct pktio_info), - ODP_CACHE_LINE_SIZE, - flags); - if (ODP_SHM_INVALID != shm) { - pinfo = odp_shm_addr(shm); - pinfo->master.pool_name[0] = 0; - *slave = 0; - } else { - flags = _ODP_SHM_PROC_NOCREAT | _ODP_SHM_O_EXCL; - shm = odp_shm_reserve(name, sizeof(struct pktio_info), - ODP_CACHE_LINE_SIZE, - flags); - if (ODP_SHM_INVALID == shm) - ODP_ABORT("can not connect to shm\n"); - - pinfo = odp_shm_addr(shm); - *slave = 1; - } - - pktio_entry->s.ipc.pinfo = pinfo; - pktio_entry->s.ipc.pinfo_shm = shm; - - return 0; -} - static int _ipc_master_start(pktio_entry_t *pktio_entry) { struct pktio_info *pinfo = pktio_entry->s.ipc.pinfo; - int ret; - void *ipc_pool_base; + odp_shm_t shm;
- if (pinfo->slave.mdata_offset == 0) + if (pinfo->slave.init_done == 0) return -1;
- ret = _ipc_shm_lookup(pinfo->slave.pool_name); - if (ret) { - ODP_DBG("no pool file %s\n", pinfo->slave.pool_name); + shm = _ipc_map_remote_pool(pinfo->slave.pool_name, + pinfo->slave.pid); + if (shm == ODP_SHM_INVALID) { + ODP_DBG("no pool file %s for pid %d\n", + pinfo->slave.pool_name, pinfo->slave.pid); return -1; }
- ipc_pool_base = _ipc_map_remote_pool(pinfo->slave.pool_name, - pinfo->master.shm_pkt_pool_size); - pktio_entry->s.ipc.pool_mdata_base = (char *)ipc_pool_base + - pinfo->slave.mdata_offset; + pktio_entry->s.ipc.remote_pool_shm = shm; + pktio_entry->s.ipc.pool_base = odp_shm_addr(shm); + pktio_entry->s.ipc.pool_mdata_base = (char *)odp_shm_addr(shm) + + pinfo->slave.base_addr_offset;
odp_atomic_store_u32(&pktio_entry->s.ipc.ready, 1);
- ODP_DBG("%s started.\n", pktio_entry->s.name); + IPC_ODP_DBG("%s started.\n", pktio_entry->s.name); return 0; }
static int _ipc_init_master(pktio_entry_t *pktio_entry, const char *dev, - odp_pool_t pool) + odp_pool_t pool_hdl) { char ipc_shm_name[ODP_POOL_NAME_LEN + sizeof("_m_prod")]; - pool_entry_t *pool_entry; - uint32_t pool_id; + pool_t *pool; struct pktio_info *pinfo; const char *pool_name;
- pool_id = pool_handle_to_index(pool); - pool_entry = get_pool_entry(pool_id); + pool = pool_entry_from_hdl(pool_hdl); + (void)pool;
if (strlen(dev) > (ODP_POOL_NAME_LEN - sizeof("_m_prod"))) { - ODP_DBG("too big ipc name\n"); + ODP_ERR("too big ipc name\n"); return -1; }
@@ -158,7 +93,7 @@ static int _ipc_init_master(pktio_entry_t *pktio_entry, PKTIO_IPC_ENTRIES, _RING_SHM_PROC | _RING_NO_LIST); if (!pktio_entry->s.ipc.tx.send) { - ODP_DBG("pid %d unable to create ipc ring %s name\n", + ODP_ERR("pid %d unable to create ipc ring %s name\n", getpid(), ipc_shm_name); return -1; } @@ -174,7 +109,7 @@ static int _ipc_init_master(pktio_entry_t *pktio_entry, PKTIO_IPC_ENTRIES, _RING_SHM_PROC | _RING_NO_LIST); if (!pktio_entry->s.ipc.tx.free) { - ODP_DBG("pid %d unable to create ipc ring %s name\n", + ODP_ERR("pid %d unable to create ipc ring %s name\n", getpid(), ipc_shm_name); goto free_m_prod; } @@ -187,7 +122,7 @@ static int _ipc_init_master(pktio_entry_t *pktio_entry, PKTIO_IPC_ENTRIES, _RING_SHM_PROC | _RING_NO_LIST); if (!pktio_entry->s.ipc.rx.recv) { - ODP_DBG("pid %d unable to create ipc ring %s name\n", + ODP_ERR("pid %d unable to create ipc ring %s name\n", getpid(), ipc_shm_name); goto free_m_cons; } @@ -200,7 +135,7 @@ static int _ipc_init_master(pktio_entry_t *pktio_entry, PKTIO_IPC_ENTRIES, _RING_SHM_PROC | _RING_NO_LIST); if (!pktio_entry->s.ipc.rx.free) { - ODP_DBG("pid %d unable to create ipc ring %s name\n", + ODP_ERR("pid %d unable to create ipc ring %s name\n", getpid(), ipc_shm_name); goto free_s_prod; } @@ -210,24 +145,23 @@ static int _ipc_init_master(pktio_entry_t *pktio_entry,
/* Set up pool name for remote info */ pinfo = pktio_entry->s.ipc.pinfo; - pool_name = _ipc_odp_buffer_pool_shm_name(pool); + pool_name = _ipc_odp_buffer_pool_shm_name(pool_hdl); if (strlen(pool_name) > ODP_POOL_NAME_LEN) { - ODP_DBG("pid %d ipc pool name %s is too big %d\n", + ODP_ERR("pid %d ipc pool name %s is too big %d\n", getpid(), pool_name, strlen(pool_name)); goto free_s_prod; }
memcpy(pinfo->master.pool_name, pool_name, strlen(pool_name)); - pinfo->master.shm_pkt_pool_size = pool_entry->s.pool_size; - pinfo->master.shm_pool_bufs_num = pool_entry->s.buf_num; - pinfo->master.shm_pkt_size = pool_entry->s.seg_size; - pinfo->master.mdata_offset = pool_entry->s.pool_mdata_addr - - pool_entry->s.pool_base_addr; - pinfo->slave.mdata_offset = 0; + pinfo->slave.base_addr_offset = 0; + pinfo->slave.base_addr = 0; + pinfo->slave.pid = 0; + pinfo->slave.init_done = 0;
- pktio_entry->s.ipc.pool = pool; + pktio_entry->s.ipc.pool = pool_hdl;
ODP_DBG("Pre init... DONE.\n"); + pinfo->master.init_done = 1;
_ipc_master_start(pktio_entry);
@@ -246,55 +180,42 @@ free_m_prod: }
static void _ipc_export_pool(struct pktio_info *pinfo, - odp_pool_t pool) + odp_pool_t pool_hdl) { - pool_entry_t *pool_entry; - - pool_entry = odp_pool_to_entry(pool); - if (pool_entry->s.blk_size != pinfo->master.shm_pkt_size) - ODP_ABORT("pktio for same name should have the same pool size\n"); - if (pool_entry->s.buf_num != (unsigned)pinfo->master.shm_pool_bufs_num) - ODP_ABORT("pktio for same name should have the same pool size\n"); + pool_t *pool = pool_entry_from_hdl(pool_hdl);
snprintf(pinfo->slave.pool_name, ODP_POOL_NAME_LEN, "%s", - pool_entry->s.name); - pinfo->slave.mdata_offset = pool_entry->s.pool_mdata_addr - - pool_entry->s.pool_base_addr; + _ipc_odp_buffer_pool_shm_name(pool_hdl)); + pinfo->slave.pid = odp_global_data.main_pid; + pinfo->slave.block_size = pool->block_size; + pinfo->slave.base_addr = pool->base_addr; }
-static void *_ipc_map_remote_pool(const char *name, size_t size) +static odp_shm_t _ipc_map_remote_pool(const char *name, int pid) { odp_shm_t shm; - void *addr; - - ODP_DBG("Mapping remote pool %s, size %ld\n", name, size); - shm = odp_shm_reserve(name, - size, - ODP_CACHE_LINE_SIZE, - _ODP_SHM_PROC_NOCREAT); - if (shm == ODP_SHM_INVALID) - ODP_ABORT("unable map %s\n", name); - - addr = odp_shm_addr(shm); - ODP_DBG("MAP master: %p - %p size %ld, pool %s\n", - addr, (char *)addr + size, size, name); - return addr; + char rname[ODP_SHM_NAME_LEN]; + + snprintf(rname, ODP_SHM_NAME_LEN, "remote-%s", name); + shm = odp_shm_import(name, pid, rname); + if (shm == ODP_SHM_INVALID) { + ODP_ERR("unable map %s\n", name); + return ODP_SHM_INVALID; + } + + IPC_ODP_DBG("Mapped remote pool %s to local %s\n", name, rname); + return shm; }
-static void *_ipc_shm_map(char *name, size_t size) +static void *_ipc_shm_map(char *name, int pid) { odp_shm_t shm; - int ret;
- ret = _ipc_shm_lookup(name); - if (ret == -1) + shm = odp_shm_import(name, pid, name); + if (ODP_SHM_INVALID == shm) { + ODP_ERR("unable to map: %s\n", name); return NULL; - - shm = odp_shm_reserve(name, size, - ODP_CACHE_LINE_SIZE, - _ODP_SHM_PROC_NOCREAT); - if (ODP_SHM_INVALID == shm) - ODP_ABORT("unable to map: %s\n", name); + }
return odp_shm_addr(shm); } @@ -313,15 +234,21 @@ static int _ipc_init_slave(const char *dev, static int _ipc_slave_start(pktio_entry_t *pktio_entry) { char ipc_shm_name[ODP_POOL_NAME_LEN + sizeof("_slave_r")]; - size_t ring_size = PKTIO_IPC_ENTRIES * sizeof(void *) + - sizeof(_ring_t); struct pktio_info *pinfo; - void *ipc_pool_base; odp_shm_t shm; - const char *dev = pktio_entry->s.name; + char tail[ODP_POOL_NAME_LEN]; + char dev[ODP_POOL_NAME_LEN]; + int pid; + + if (sscanf(pktio_entry->s.name, "ipc:%d:%s", &pid, tail) != 2) { + ODP_ERR("wrong pktio name\n"); + return -1; + } + + sprintf(dev, "ipc:%s", tail);
snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_prod", dev); - pktio_entry->s.ipc.rx.recv = _ipc_shm_map(ipc_shm_name, ring_size); + pktio_entry->s.ipc.rx.recv = _ipc_shm_map(ipc_shm_name, pid); if (!pktio_entry->s.ipc.rx.recv) { ODP_DBG("pid %d unable to find ipc ring %s name\n", getpid(), dev); @@ -333,9 +260,9 @@ static int _ipc_slave_start(pktio_entry_t *pktio_entry) _ring_free_count(pktio_entry->s.ipc.rx.recv));
snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_cons", dev); - pktio_entry->s.ipc.rx.free = _ipc_shm_map(ipc_shm_name, ring_size); + pktio_entry->s.ipc.rx.free = _ipc_shm_map(ipc_shm_name, pid); if (!pktio_entry->s.ipc.rx.free) { - ODP_DBG("pid %d unable to find ipc ring %s name\n", + ODP_ERR("pid %d unable to find ipc ring %s name\n", getpid(), dev); goto free_m_prod; } @@ -344,9 +271,9 @@ static int _ipc_slave_start(pktio_entry_t *pktio_entry) _ring_free_count(pktio_entry->s.ipc.rx.free));
snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_prod", dev); - pktio_entry->s.ipc.tx.send = _ipc_shm_map(ipc_shm_name, ring_size); + pktio_entry->s.ipc.tx.send = _ipc_shm_map(ipc_shm_name, pid); if (!pktio_entry->s.ipc.tx.send) { - ODP_DBG("pid %d unable to find ipc ring %s name\n", + ODP_ERR("pid %d unable to find ipc ring %s name\n", getpid(), dev); goto free_m_cons; } @@ -355,9 +282,9 @@ static int _ipc_slave_start(pktio_entry_t *pktio_entry) _ring_free_count(pktio_entry->s.ipc.tx.send));
snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_cons", dev); - pktio_entry->s.ipc.tx.free = _ipc_shm_map(ipc_shm_name, ring_size); + pktio_entry->s.ipc.tx.free = _ipc_shm_map(ipc_shm_name, pid); if (!pktio_entry->s.ipc.tx.free) { - ODP_DBG("pid %d unable to find ipc ring %s name\n", + ODP_ERR("pid %d unable to find ipc ring %s name\n", getpid(), dev); goto free_s_prod; } @@ -367,15 +294,17 @@ static int _ipc_slave_start(pktio_entry_t *pktio_entry)
/* Get info about remote pool */ pinfo = pktio_entry->s.ipc.pinfo; - ipc_pool_base = _ipc_map_remote_pool(pinfo->master.pool_name, - pinfo->master.shm_pkt_pool_size); - pktio_entry->s.ipc.pool_mdata_base = (char *)ipc_pool_base + - pinfo->master.mdata_offset; - pktio_entry->s.ipc.pkt_size = pinfo->master.shm_pkt_size; + shm = _ipc_map_remote_pool(pinfo->master.pool_name, + pid); + pktio_entry->s.ipc.remote_pool_shm = shm; + pktio_entry->s.ipc.pool_mdata_base = (char *)odp_shm_addr(shm) + + pinfo->master.base_addr_offset; + pktio_entry->s.ipc.pkt_size = pinfo->master.block_size;
_ipc_export_pool(pinfo, pktio_entry->s.ipc.pool);
odp_atomic_store_u32(&pktio_entry->s.ipc.ready, 1); + pinfo->slave.init_done = 1;
ODP_DBG("%s started.\n", pktio_entry->s.name); return 0; @@ -401,7 +330,11 @@ static int ipc_pktio_open(odp_pktio_t id ODP_UNUSED, odp_pool_t pool) { int ret = -1; - int slave; + int pid ODP_UNUSED; + struct pktio_info *pinfo; + char name[ODP_POOL_NAME_LEN + sizeof("_info")]; + char tail[ODP_POOL_NAME_LEN]; + odp_shm_t shm;
ODP_STATIC_ASSERT(ODP_POOL_NAME_LEN == _RING_NAMESIZE, "mismatch pool and ring name arrays"); @@ -411,65 +344,59 @@ static int ipc_pktio_open(odp_pktio_t id ODP_UNUSED,
odp_atomic_init_u32(&pktio_entry->s.ipc.ready, 0);
- _ipc_map_pktio_info(pktio_entry, dev, &slave); - pktio_entry->s.ipc.type = (slave == 0) ? PKTIO_TYPE_IPC_MASTER : - PKTIO_TYPE_IPC_SLAVE; + /* Shared info about remote pktio */ + if (sscanf(dev, "ipc:%d:%s", &pid, tail) == 2) { + pktio_entry->s.ipc.type = PKTIO_TYPE_IPC_SLAVE;
- if (pktio_entry->s.ipc.type == PKTIO_TYPE_IPC_MASTER) { + snprintf(name, sizeof(name), "ipc:%s_info", tail); + IPC_ODP_DBG("lookup for name %s for pid %d\n", name, pid); + shm = odp_shm_import(name, pid, name); + if (ODP_SHM_INVALID == shm) + return -1; + pinfo = odp_shm_addr(shm); + + if (!pinfo->master.init_done) { + odp_shm_free(shm); + return -1; + } + pktio_entry->s.ipc.pinfo = pinfo; + pktio_entry->s.ipc.pinfo_shm = shm; + ODP_DBG("process %d is slave\n", getpid()); + ret = _ipc_init_slave(name, pktio_entry, pool); + } else { + pktio_entry->s.ipc.type = PKTIO_TYPE_IPC_MASTER; + snprintf(name, sizeof(name), "%s_info", dev); + shm = odp_shm_reserve(name, sizeof(struct pktio_info), + ODP_CACHE_LINE_SIZE, + _ODP_ISHM_EXPORT | _ODP_ISHM_LOCK); + if (ODP_SHM_INVALID == shm) { + ODP_ERR("can not create shm %s\n", name); + return -1; + } + + pinfo = odp_shm_addr(shm); + pinfo->master.init_done = 0; + pinfo->master.pool_name[0] = 0; + pktio_entry->s.ipc.pinfo = pinfo; + pktio_entry->s.ipc.pinfo_shm = shm; ODP_DBG("process %d is master\n", getpid()); ret = _ipc_init_master(pktio_entry, dev, pool); - } else { - ODP_DBG("process %d is slave\n", getpid()); - ret = _ipc_init_slave(dev, pktio_entry, pool); }
return ret; }
-static inline void *_ipc_buffer_map(odp_buffer_hdr_t *buf, - uint32_t offset, - uint32_t *seglen, - uint32_t limit) +static void _ipc_free_ring_packets(pktio_entry_t *pktio_entry, _ring_t *r) { - int seg_index = offset / buf->segsize; - int seg_offset = offset % buf->segsize; -#ifdef _ODP_PKTIO_IPC - void *addr = (char *)buf - buf->ipc_addr_offset[seg_index]; -#else - /** buf_hdr.ipc_addr_offset defined only when ipc is - * enabled. */ - void *addr = NULL; - - (void)seg_index; -#endif - if (seglen) { - uint32_t buf_left = limit - offset; - *seglen = seg_offset + buf_left <= buf->segsize ? - buf_left : buf->segsize - seg_offset; - } - - return (void *)(seg_offset + (uint8_t *)addr); -} - -static inline void *_ipc_packet_map(odp_packet_hdr_t *pkt_hdr, - uint32_t offset, uint32_t *seglen) -{ - if (offset > pkt_hdr->frame_len) - return NULL; - - return _ipc_buffer_map(&pkt_hdr->buf_hdr, - pkt_hdr->headroom + offset, seglen, - pkt_hdr->headroom + pkt_hdr->frame_len); -} - -static void _ipc_free_ring_packets(_ring_t *r) -{ - odp_packet_t r_p_pkts[PKTIO_IPC_ENTRIES]; + uintptr_t offsets[PKTIO_IPC_ENTRIES]; int ret; void **rbuf_p; int i;
- rbuf_p = (void *)&r_p_pkts; + if (!r) + return; + + rbuf_p = (void *)&offsets;
while (1) { ret = _ring_mc_dequeue_burst(r, rbuf_p, @@ -477,8 +404,13 @@ static void _ipc_free_ring_packets(_ring_t *r) if (0 == ret) break; for (i = 0; i < ret; i++) { - if (r_p_pkts[i] != ODP_PACKET_INVALID) - odp_packet_free(r_p_pkts[i]); + odp_packet_hdr_t *phdr; + odp_packet_t pkt; + void *mbase = pktio_entry->s.ipc.pool_mdata_base; + + phdr = (void *)((uint8_t *)mbase + offsets[i]); + pkt = (odp_packet_t)phdr->buf_hdr.handle.handle; + odp_packet_free(pkt); } } } @@ -490,22 +422,23 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry, int i; _ring_t *r; _ring_t *r_p; + uintptr_t offsets[PKTIO_IPC_ENTRIES]; + void **ipcbufs_p = (void *)&offsets; + uint32_t ready; + int pkts_ring;
- odp_packet_t remote_pkts[PKTIO_IPC_ENTRIES]; - void **ipcbufs_p = (void *)&remote_pkts; - uint32_t ready = odp_atomic_load_u32(&pktio_entry->s.ipc.ready); - + ready = odp_atomic_load_u32(&pktio_entry->s.ipc.ready); if (odp_unlikely(!ready)) { - ODP_DBG("start pktio is missing before usage?\n"); - return -1; + IPC_ODP_DBG("start pktio is missing before usage?\n"); + return 0; }
- _ipc_free_ring_packets(pktio_entry->s.ipc.tx.free); + _ipc_free_ring_packets(pktio_entry, pktio_entry->s.ipc.tx.free);
r = pktio_entry->s.ipc.rx.recv; pkts = _ring_mc_dequeue_burst(r, ipcbufs_p, len); if (odp_unlikely(pkts < 0)) - ODP_ABORT("error to dequeue no packets\n"); + ODP_ABORT("internal error dequeue\n");
/* fast path */ if (odp_likely(0 == pkts)) @@ -514,36 +447,26 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry, for (i = 0; i < pkts; i++) { odp_pool_t pool; odp_packet_t pkt; - odp_packet_hdr_t phdr; - void *ptr; - odp_buffer_bits_t handle; - int idx; /* Remote packet has coded pool and index. - * We need only index.*/ + odp_packet_hdr_t *phdr; void *pkt_data; - void *remote_pkt_data; + uint64_t data_pool_off; + void *rmt_data_ptr;
- if (remote_pkts[i] == ODP_PACKET_INVALID) - continue; + phdr = (void *)((uint8_t *)pktio_entry->s.ipc.pool_mdata_base + + offsets[i]);
- handle.handle = _odp_packet_to_buffer(remote_pkts[i]); - idx = handle.index; - - /* Link to packed data. To this line we have Zero-Copy between - * processes, to simplify use packet copy in that version which - * can be removed later with more advance buffer management - * (ref counters). - */ - /* reverse odp_buf_to_hdr() */ - ptr = (char *)pktio_entry->s.ipc.pool_mdata_base + - (idx * ODP_CACHE_LINE_SIZE); - memcpy(&phdr, ptr, sizeof(odp_packet_hdr_t)); - - /* Allocate new packet. Select*/ pool = pktio_entry->s.ipc.pool; if (odp_unlikely(pool == ODP_POOL_INVALID)) ODP_ABORT("invalid pool");
- pkt = odp_packet_alloc(pool, phdr.frame_len); +#ifdef _ODP_PKTIO_IPC + data_pool_off = phdr->buf_hdr.seg[0].ipc_data_offset; +#else + /* compile all function code even if ipc disabled with config */ + data_pool_off = 0; +#endif + + pkt = odp_packet_alloc(pool, phdr->frame_len); if (odp_unlikely(pkt == ODP_PACKET_INVALID)) { /* Original pool might be smaller then * PKTIO_IPC_ENTRIES. If packet can not be @@ -562,30 +485,40 @@ static int ipc_pktio_recv_lockless(pktio_entry_t *pktio_entry, (PKTIO_TYPE_IPC_SLAVE == pktio_entry->s.ipc.type));
- remote_pkt_data = _ipc_packet_map(ptr, 0, NULL); - if (odp_unlikely(!remote_pkt_data)) - ODP_ABORT("unable to map remote_pkt_data, ipc_slave %d\n", - (PKTIO_TYPE_IPC_SLAVE == - pktio_entry->s.ipc.type)); - /* Copy packet data from shared pool to local pool. */ - memcpy(pkt_data, remote_pkt_data, phdr.frame_len); + rmt_data_ptr = (uint8_t *)pktio_entry->s.ipc.pool_mdata_base + + data_pool_off; + memcpy(pkt_data, rmt_data_ptr, phdr->frame_len);
/* Copy packets L2, L3 parsed offsets and size */ - copy_packet_cls_metadata(&phdr, odp_packet_hdr(pkt)); + copy_packet_cls_metadata(phdr, odp_packet_hdr(pkt)); + + odp_packet_hdr(pkt)->frame_len = phdr->frame_len; + odp_packet_hdr(pkt)->headroom = phdr->headroom; + odp_packet_hdr(pkt)->tailroom = phdr->tailroom; + + /* Take classification fields */ + odp_packet_hdr(pkt)->p = phdr->p;
- odp_packet_hdr(pkt)->frame_len = phdr.frame_len; - odp_packet_hdr(pkt)->headroom = phdr.headroom; - odp_packet_hdr(pkt)->tailroom = phdr.tailroom; - odp_packet_hdr(pkt)->input = pktio_entry->s.handle; pkt_table[i] = pkt; }
/* Now tell other process that we no longer need that buffers.*/ r_p = pktio_entry->s.ipc.rx.free; - pkts = _ring_mp_enqueue_burst(r_p, ipcbufs_p, i); + +repeat: + pkts_ring = _ring_mp_enqueue_burst(r_p, ipcbufs_p, pkts); if (odp_unlikely(pkts < 0)) ODP_ABORT("ipc: odp_ring_mp_enqueue_bulk r_p fail\n"); + if (odp_unlikely(pkts != pkts_ring)) { + IPC_ODP_DBG("odp_ring_full: %d, odp_ring_count %d," + " _ring_free_count %d\n", + _ring_full(r_p), _ring_count(r_p), + _ring_free_count(r_p)); + ipcbufs_p = (void *)&offsets[pkts_ring - 1]; + pkts = pkts - pkts_ring; + goto repeat; + }
return pkts; } @@ -614,26 +547,23 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry, uint32_t ready = odp_atomic_load_u32(&pktio_entry->s.ipc.ready); odp_packet_t pkt_table_mapped[len]; /**< Ready to send packet has to be * in memory mapped pool. */ + uintptr_t offsets[len];
if (odp_unlikely(!ready)) return 0;
- _ipc_free_ring_packets(pktio_entry->s.ipc.tx.free); + _ipc_free_ring_packets(pktio_entry, pktio_entry->s.ipc.tx.free);
- /* Prepare packets: calculate offset from address. */ + /* Copy packets to shm shared pool if they are in different */ for (i = 0; i < len; i++) { - int j; odp_packet_t pkt = pkt_table[i]; - odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + pool_t *ipc_pool = pool_entry_from_hdl(pktio_entry->s.ipc.pool); odp_buffer_bits_t handle; - uint32_t cur_mapped_pool_id = - pool_handle_to_index(pktio_entry->s.ipc.pool); - uint32_t pool_id; + uint32_t pkt_pool_id;
- /* do copy if packet was allocated from not mapped pool */ handle.handle = _odp_packet_to_buffer(pkt); - pool_id = handle.pool_id; - if (pool_id != cur_mapped_pool_id) { + pkt_pool_id = handle.pool_id; + if (pkt_pool_id != ipc_pool->pool_idx) { odp_packet_t newpkt;
newpkt = odp_packet_copy(pkt, pktio_entry->s.ipc.pool); @@ -645,24 +575,34 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry, } else { pkt_table_mapped[i] = pkt; } + } + + /* Set offset to phdr for outgoing packets */ + for (i = 0; i < len; i++) { + uint64_t data_pool_off; + odp_packet_t pkt = pkt_table_mapped[i]; + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + odp_pool_t pool_hdl = odp_packet_pool(pkt); + pool_t *pool = pool_entry_from_hdl(pool_hdl); + + offsets[i] = (uint8_t *)pkt_hdr - + (uint8_t *)odp_shm_addr(pool->shm); + data_pool_off = (uint8_t *)pkt_hdr->buf_hdr.seg[0].data - + (uint8_t *)odp_shm_addr(pool->shm);
- /* buf_hdr.addr can not be used directly in remote process, - * convert it to offset - */ - for (j = 0; j < ODP_BUFFER_MAX_SEG; j++) { #ifdef _ODP_PKTIO_IPC - pkt_hdr->buf_hdr.ipc_addr_offset[j] = (char *)pkt_hdr - - (char *)pkt_hdr->buf_hdr.addr[j]; -#else - /** buf_hdr.ipc_addr_offset defined only when ipc is - * enabled. */ - (void)pkt_hdr; + /* compile all function code even if ipc disabled with config */ + pkt_hdr->buf_hdr.seg[0].ipc_data_offset = data_pool_off; + IPC_ODP_DBG("%d/%d send packet %llx, pool %llx," + "phdr = %p, offset %x\n", + i, len, + odp_packet_to_u64(pkt), odp_pool_to_u64(pool_hdl), + pkt_hdr, pkt_hdr->buf_hdr.seg[0].ipc_data_offset); #endif - } }
/* Put packets to ring to be processed by other process. */ - rbuf_p = (void *)&pkt_table_mapped[0]; + rbuf_p = (void *)&offsets[0]; r = pktio_entry->s.ipc.tx.send; ret = _ring_mp_enqueue_burst(r, rbuf_p, len); if (odp_unlikely(ret < 0)) { @@ -673,6 +613,7 @@ static int ipc_pktio_send_lockless(pktio_entry_t *pktio_entry, ODP_ERR("odp_ring_full: %d, odp_ring_count %d, _ring_free_count %d\n", _ring_full(r), _ring_count(r), _ring_free_count(r)); + ODP_ABORT("Unexpected!\n"); }
return ret; @@ -722,22 +663,25 @@ static int ipc_start(pktio_entry_t *pktio_entry)
static int ipc_stop(pktio_entry_t *pktio_entry) { - unsigned tx_send, tx_free; + unsigned tx_send = 0, tx_free = 0;
odp_atomic_store_u32(&pktio_entry->s.ipc.ready, 0);
- _ipc_free_ring_packets(pktio_entry->s.ipc.tx.send); + if (pktio_entry->s.ipc.tx.send) + _ipc_free_ring_packets(pktio_entry, pktio_entry->s.ipc.tx.send); /* other process can transfer packets from one ring to * other, use delay here to free that packets. */ sleep(1); - _ipc_free_ring_packets(pktio_entry->s.ipc.tx.free); + if (pktio_entry->s.ipc.tx.free) + _ipc_free_ring_packets(pktio_entry, pktio_entry->s.ipc.tx.free);
- tx_send = _ring_count(pktio_entry->s.ipc.tx.send); - tx_free = _ring_count(pktio_entry->s.ipc.tx.free); + if (pktio_entry->s.ipc.tx.send) + tx_send = _ring_count(pktio_entry->s.ipc.tx.send); + if (pktio_entry->s.ipc.tx.free) + tx_free = _ring_count(pktio_entry->s.ipc.tx.free); if (tx_send | tx_free) { ODP_DBG("IPC rings: tx send %d tx free %d\n", - _ring_free_count(pktio_entry->s.ipc.tx.send), - _ring_free_count(pktio_entry->s.ipc.tx.free)); + tx_send, tx_free); }
return 0; @@ -747,23 +691,31 @@ static int ipc_close(pktio_entry_t *pktio_entry) { char ipc_shm_name[ODP_POOL_NAME_LEN + sizeof("_m_prod")]; char *dev = pktio_entry->s.name; + char name[ODP_POOL_NAME_LEN]; + char tail[ODP_POOL_NAME_LEN]; + int pid = 0;
ipc_stop(pktio_entry);
- if (pktio_entry->s.ipc.type == PKTIO_TYPE_IPC_MASTER) { - /* unlink this pktio info for both master and slave */ - odp_shm_free(pktio_entry->s.ipc.pinfo_shm); - - /* destroy rings */ - snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_cons", dev); - _ring_destroy(ipc_shm_name); - snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_prod", dev); - _ring_destroy(ipc_shm_name); - snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_cons", dev); - _ring_destroy(ipc_shm_name); - snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_prod", dev); - _ring_destroy(ipc_shm_name); - } + odp_shm_free(pktio_entry->s.ipc.remote_pool_shm); + + if (sscanf(dev, "ipc:%d:%s", &pid, tail) == 2) + snprintf(name, sizeof(name), "ipc:%s", tail); + else + snprintf(name, sizeof(name), "%s", dev); + + /* unlink this pktio info for both master and slave */ + odp_shm_free(pktio_entry->s.ipc.pinfo_shm); + + /* destroy rings */ + snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_cons", name); + _ring_destroy(ipc_shm_name); + snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_s_prod", name); + _ring_destroy(ipc_shm_name); + snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_cons", name); + _ring_destroy(ipc_shm_name); + snprintf(ipc_shm_name, sizeof(ipc_shm_name), "%s_m_prod", name); + _ring_destroy(ipc_shm_name);
return 0; } @@ -795,4 +747,3 @@ const pktio_if_ops_t ipc_pktio_ops = { .pktin_ts_from_ns = NULL, .config = NULL }; -#endif diff --git a/test/linux-generic/pktio_ipc/pktio_ipc_run.sh b/test/linux-generic/pktio_ipc/pktio_ipc_run.sh index 3cd28f5..52e8d42 100755 --- a/test/linux-generic/pktio_ipc/pktio_ipc_run.sh +++ b/test/linux-generic/pktio_ipc/pktio_ipc_run.sh @@ -25,19 +25,23 @@ run() rm -rf /tmp/odp-* 2>&1 > /dev/null
echo "==== run pktio_ipc1 then pktio_ipc2 ====" - pktio_ipc1${EXEEXT} -t 30 & + pktio_ipc1${EXEEXT} -t 10 & IPC_PID=$!
- pktio_ipc2${EXEEXT} -p ${IPC_PID} -t 10 + pktio_ipc2${EXEEXT} -p ${IPC_PID} -t 5 ret=$? # pktio_ipc1 should do clean up and exit just # after pktio_ipc2 exited. If it does not happen # kill him in test. - sleep 1 - kill ${IPC_PID} 2>&1 > /dev/null + sleep 13 + (kill ${IPC_PID} 2>&1 > /dev/null ) > /dev/null if [ $? -eq 0 ]; then - ls -l /tmp/odp* + echo "pktio_ipc1${EXEEXT} was killed" + ls -l /tmp/odp* 2> /dev/null rm -rf /tmp/odp-${IPC_PID}* 2>&1 > /dev/null + else + echo "normal exit of 2 application" + ls -l /tmp/odp* 2> /dev/null fi
if [ $ret -ne 0 ]; then @@ -47,21 +51,32 @@ run() echo "First stage PASSED" fi
- echo "==== run pktio_ipc2 then pktio_ipc1 ====" - pktio_ipc2${EXEEXT} -t 20 & + pktio_ipc2${EXEEXT} -t 10 & IPC_PID=$!
- pktio_ipc1${EXEEXT} -p ${IPC_PID} -t 10 + pktio_ipc1${EXEEXT} -p ${IPC_PID} -t 5 ret=$? - (kill ${IPC_PID} 2>&1 > /dev/null) > /dev/null || true + # pktio_ipc2 do not exit on pktio_ipc1 disconnect + # wait until it exits cleanly + sleep 13 + (kill ${IPC_PID} 2>&1 > /dev/null ) > /dev/null + if [ $? -eq 0 ]; then + echo "pktio_ipc2${EXEEXT} was killed" + ls -l /tmp/odp* 2> /dev/null + rm -rf /tmp/odp-${IPC_PID}* 2>&1 > /dev/null + else + echo "normal exit of 2 application" + ls -l /tmp/odp* 2> /dev/null + fi
if [ $ret -ne 0 ]; then echo "!!! FAILED !!!" - ls -l /tmp/odp* + ls -l /tmp/odp* 2> /dev/null rm -rf /tmp/odp-${IPC_PID}* 2>&1 > /dev/null exit $ret else + ls -l /tmp/odp* 2> /dev/null echo "Second stage PASSED" fi
commit 343579dda50fcefd5498cfe146a438b8fdb3c065
Author: Maxim Uvarov <maxim.uvarov@linaro.org>
Date:   Wed Dec 14 22:57:55 2016 +0300
linux-gen: pktio ipc: update tests
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
Reviewed-and-tested-by: Bill Fischofer <bill.fischofer@linaro.org>
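The tests now identify the peer by the master process pid rather than by an IPC name space. The naming logic introduced below boils down to the following sketch (the helper name is illustrative; the macros match the ones added to ipc_common.h):

#include <stdio.h>
#include <stddef.h>

#define TEST_IPC_PKTIO_NAME     "ipc:ipktio"
#define TEST_IPC_PKTIO_PID_NAME "ipc:%d:ipktio"

/* Build the pktio device name the tests open: the process that knows the
 * master pid embeds it, the master itself uses the plain name. */
static void make_test_pktio_name(char *name, size_t len, int master_pid)
{
	if (master_pid)
		snprintf(name, len, TEST_IPC_PKTIO_PID_NAME, master_pid);
	else
		snprintf(name, len, "%s", TEST_IPC_PKTIO_NAME);
}
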
diff --git a/test/linux-generic/pktio_ipc/ipc_common.c b/test/linux-generic/pktio_ipc/ipc_common.c index 387c921..85cbc8b 100644 --- a/test/linux-generic/pktio_ipc/ipc_common.c +++ b/test/linux-generic/pktio_ipc/ipc_common.c @@ -8,7 +8,8 @@
/** Run time in seconds */ int run_time_sec; -int ipc_name_space; +/** Pid of the master process */ +int master_pid;
int ipc_odp_packet_send_or_free(odp_pktio_t pktio, odp_packet_t pkt_tbl[], int num) @@ -33,6 +34,7 @@ int ipc_odp_packet_send_or_free(odp_pktio_t pktio, while (sent != num) { ret = odp_pktout_send(pktout, &pkt_tbl[sent], num - sent); if (ret < 0) { + EXAMPLE_ERR("odp_pktout_send return %d\n", ret); for (i = sent; i < num; i++) odp_packet_free(pkt_tbl[i]); return -1; @@ -43,6 +45,7 @@ int ipc_odp_packet_send_or_free(odp_pktio_t pktio, if (odp_time_cmp(end_time, odp_time_local()) < 0) { for (i = sent; i < num; i++) odp_packet_free(pkt_tbl[i]); + EXAMPLE_ERR("Send Timeout!\n"); return -1; } } @@ -50,17 +53,25 @@ int ipc_odp_packet_send_or_free(odp_pktio_t pktio, return 0; }
-odp_pktio_t create_pktio(odp_pool_t pool) +odp_pktio_t create_pktio(odp_pool_t pool, int master_pid) { odp_pktio_param_t pktio_param; odp_pktio_t ipc_pktio; + char name[30];
odp_pktio_param_init(&pktio_param);
- printf("pid: %d, create IPC pktio\n", getpid()); - ipc_pktio = odp_pktio_open("ipc_pktio", pool, &pktio_param); - if (ipc_pktio == ODP_PKTIO_INVALID) - EXAMPLE_ABORT("Error: ipc pktio create failed.\n"); + if (master_pid) + sprintf(name, TEST_IPC_PKTIO_PID_NAME, master_pid); + else + sprintf(name, TEST_IPC_PKTIO_NAME); + + printf("pid: %d, create IPC pktio %s\n", getpid(), name); + ipc_pktio = odp_pktio_open(name, pool, &pktio_param); + if (ipc_pktio == ODP_PKTIO_INVALID) { + EXAMPLE_ERR("Error: ipc pktio %s create failed.\n", name); + return ODP_PKTIO_INVALID; + }
if (odp_pktin_queue_config(ipc_pktio, NULL)) { EXAMPLE_ERR("Input queue config failed\n"); @@ -88,16 +99,16 @@ void parse_args(int argc, char *argv[]) int long_index; static struct option longopts[] = { {"time", required_argument, NULL, 't'}, - {"ns", required_argument, NULL, 'n'}, /* ipc name space */ + {"pid", required_argument, NULL, 'p'}, /* master process pid */ {"help", no_argument, NULL, 'h'}, /* return 'h' */ {NULL, 0, NULL, 0} };
run_time_sec = 0; /* loop forever if time to run is 0 */ - ipc_name_space = 0; + master_pid = 0;
while (1) { - opt = getopt_long(argc, argv, "+t:n:h", + opt = getopt_long(argc, argv, "+t:p:h", longopts, &long_index);
if (opt == -1) @@ -107,24 +118,18 @@ void parse_args(int argc, char *argv[]) case 't': run_time_sec = atoi(optarg); break; - case 'n': - ipc_name_space = atoi(optarg); + case 'p': + master_pid = atoi(optarg); break; case 'h': + default: usage(argv[0]); exit(EXIT_SUCCESS); break; - default: - break; } }
optind = 1; /* reset 'extern optind' from the getopt lib */ - - if (!ipc_name_space) { - usage(argv[0]); - exit(1); - } }
/** diff --git a/test/linux-generic/pktio_ipc/ipc_common.h b/test/linux-generic/pktio_ipc/ipc_common.h index 99276b5..8804994 100644 --- a/test/linux-generic/pktio_ipc/ipc_common.h +++ b/test/linux-generic/pktio_ipc/ipc_common.h @@ -30,7 +30,7 @@ /** @def SHM_PKT_POOL_BUF_SIZE * @brief Buffer size of the packet pool buffer */ -#define SHM_PKT_POOL_BUF_SIZE 1856 +#define SHM_PKT_POOL_BUF_SIZE 100
/** @def MAX_PKT_BURST * @brief Maximum number of packet bursts @@ -46,6 +46,12 @@
#define TEST_ALLOC_MAGIC 0x1234adcd
+#define TEST_IPC_PKTIO_NAME "ipc:ipktio" +#define TEST_IPC_PKTIO_PID_NAME "ipc:%d:ipktio" + +/** Can be any name, same or not the same. */ +#define TEST_IPC_POOL_NAME "ipc_packet_pool" + /** magic number and sequence at start of packet payload */ typedef struct ODP_PACKED { odp_u32be_t magic; @@ -63,8 +69,8 @@ char *pktio_name; /** Run time in seconds */ int run_time_sec;
-/** IPC name space id /dev/shm/odp-nsid-objname */ -int ipc_name_space; +/** PID of the master process */ +int master_pid;
/* helper funcs */ void parse_args(int argc, char *argv[]); @@ -75,11 +81,12 @@ void usage(char *progname); * Create a ipc pktio handle. * * @param pool Pool to associate with device for packet RX/TX + * @param master_pid Pid of master process * * @return The handle of the created pktio object. * @retval ODP_PKTIO_INVALID if the create fails. */ -odp_pktio_t create_pktio(odp_pool_t pool); +odp_pktio_t create_pktio(odp_pool_t pool, int master_pid);
/** Spin and send all packet from table * diff --git a/test/linux-generic/pktio_ipc/pktio_ipc1.c b/test/linux-generic/pktio_ipc/pktio_ipc1.c index 5c1da23..838b672 100644 --- a/test/linux-generic/pktio_ipc/pktio_ipc1.c +++ b/test/linux-generic/pktio_ipc/pktio_ipc1.c @@ -23,9 +23,8 @@ */ static int pktio_run_loop(odp_pool_t pool) { - int thr; int pkts; - odp_pktio_t ipc_pktio; + odp_pktio_t ipc_pktio = ODP_PKTIO_INVALID; odp_packet_t pkt_tbl[MAX_PKT_BURST]; uint64_t cnt = 0; /* increasing counter on each send packet */ uint64_t cnt_recv = 0; /* increasing counter to validate @@ -42,22 +41,41 @@ static int pktio_run_loop(odp_pool_t pool) odp_time_t wait; int ret; odp_pktin_queue_t pktin; + char name[30];
- thr = odp_thread_id(); - - ipc_pktio = odp_pktio_lookup("ipc_pktio"); - if (ipc_pktio == ODP_PKTIO_INVALID) { - EXAMPLE_ERR(" [%02i] Error: lookup of pktio %s failed\n", - thr, "ipc_pktio"); - return -2; - } - printf(" [%02i] looked up ipc_pktio:%02" PRIu64 ", burst mode\n", - thr, odp_pktio_to_u64(ipc_pktio)); + if (master_pid) + sprintf(name, TEST_IPC_PKTIO_PID_NAME, master_pid); + else + sprintf(name, TEST_IPC_PKTIO_NAME);
wait = odp_time_local_from_ns(run_time_sec * ODP_TIME_SEC_IN_NS); start_cycle = odp_time_local(); current_cycle = start_cycle;
+ /* slave process should always be run after master process to be + * able to create the same pktio. + */ + for (;;) { + if (run_time_sec) { + cycle = odp_time_local(); + diff = odp_time_diff(cycle, start_cycle); + if (odp_time_cmp(wait, diff) < 0) { + printf("timeout exit, run_time_sec %d\n", + run_time_sec); + return -1; + } + } + + ipc_pktio = create_pktio(pool, master_pid); + if (ipc_pktio != ODP_PKTIO_INVALID) + break; + if (!master_pid) + break; + } + + if (ipc_pktio == ODP_PKTIO_INVALID) + return -1; + if (odp_pktin_queue(ipc_pktio, &pktin, 1) != 1) { EXAMPLE_ERR("no input queue\n"); return -1; @@ -110,8 +128,12 @@ static int pktio_run_loop(odp_pool_t pool) size_t off;
off = odp_packet_l4_offset(pkt); - if (off == ODP_PACKET_OFFSET_INVALID) - EXAMPLE_ABORT("invalid l4 offset\n"); + if (off == ODP_PACKET_OFFSET_INVALID) { + stat_errors++; + stat_free++; + odp_packet_free(pkt); + EXAMPLE_ERR("invalid l4 offset\n"); + }
off += ODPH_UDPHDR_LEN; ret = odp_packet_copy_to_mem(pkt, off, @@ -279,17 +301,13 @@ int main(int argc, char *argv[]) odp_pool_t pool; odp_pool_param_t params; odp_instance_t instance; - odp_platform_init_t plat_idata; int ret;
/* Parse and store the application arguments */ parse_args(argc, argv);
- memset(&plat_idata, 0, sizeof(odp_platform_init_t)); - plat_idata.ipc_ns = ipc_name_space; - /* Init ODP before calling anything else */ - if (odp_init_global(&instance, NULL, &plat_idata)) { + if (odp_init_global(&instance, NULL, NULL)) { EXAMPLE_ERR("Error: ODP global init failed.\n"); exit(EXIT_FAILURE); } @@ -310,7 +328,7 @@ int main(int argc, char *argv[]) params.pkt.num = SHM_PKT_POOL_SIZE; params.type = ODP_POOL_PACKET;
- pool = odp_pool_create("packet_pool1", ¶ms); + pool = odp_pool_create(TEST_IPC_POOL_NAME, ¶ms); if (pool == ODP_POOL_INVALID) { EXAMPLE_ERR("Error: packet pool create failed.\n"); exit(EXIT_FAILURE); @@ -318,8 +336,6 @@ int main(int argc, char *argv[])
odp_pool_print(pool);
- create_pktio(pool); - ret = pktio_run_loop(pool);
if (odp_pool_destroy(pool)) { diff --git a/test/linux-generic/pktio_ipc/pktio_ipc2.c b/test/linux-generic/pktio_ipc/pktio_ipc2.c index 5c1f142..fb6f994 100644 --- a/test/linux-generic/pktio_ipc/pktio_ipc2.c +++ b/test/linux-generic/pktio_ipc/pktio_ipc2.c @@ -16,9 +16,9 @@
#include "ipc_common.h"
-static int ipc_second_process(void) +static int ipc_second_process(int master_pid) { - odp_pktio_t ipc_pktio; + odp_pktio_t ipc_pktio = ODP_PKTIO_INVALID; odp_pool_param_t params; odp_pool_t pool; odp_packet_t pkt_tbl[MAX_PKT_BURST]; @@ -40,18 +40,44 @@ static int ipc_second_process(void) params.pkt.num = SHM_PKT_POOL_SIZE; params.type = ODP_POOL_PACKET;
- pool = odp_pool_create("packet_pool2", ¶ms); + pool = odp_pool_create(TEST_IPC_POOL_NAME, ¶ms); if (pool == ODP_POOL_INVALID) { EXAMPLE_ERR("Error: packet pool create failed.\n"); exit(EXIT_FAILURE); }
- ipc_pktio = create_pktio(pool); - wait = odp_time_local_from_ns(run_time_sec * ODP_TIME_SEC_IN_NS); start_cycle = odp_time_local();
+ /* slave process should always be run after master process to be + * able to create the same pktio. + */ + for (;;) { + /* exit loop if time specified */ + if (run_time_sec) { + cycle = odp_time_local(); + diff = odp_time_diff(cycle, start_cycle); + if (odp_time_cmp(wait, diff) < 0) { + printf("timeout exit, run_time_sec %d\n", + run_time_sec); + goto not_started; + } + } + + ipc_pktio = create_pktio(pool, master_pid); + if (ipc_pktio != ODP_PKTIO_INVALID) + break; + if (!master_pid) + break; + } + + if (ipc_pktio == ODP_PKTIO_INVALID) { + odp_pool_destroy(pool); + return -1; + } + if (odp_pktin_queue(ipc_pktio, &pktin, 1) != 1) { + odp_pool_destroy(pool); EXAMPLE_ERR("no input queue\n"); return -1; } @@ -97,8 +123,12 @@ static int ipc_second_process(void) size_t off;
off = odp_packet_l4_offset(pkt); - if (off == ODP_PACKET_OFFSET_INVALID) - EXAMPLE_ABORT("invalid l4 offset\n"); + if (off == ODP_PACKET_OFFSET_INVALID) { + EXAMPLE_ERR("invalid l4 offset\n"); + for (int j = i; j < pkts; j++) + odp_packet_free(pkt_tbl[j]); + break; + }
off += ODPH_UDPHDR_LEN; ret = odp_packet_copy_to_mem(pkt, off, sizeof(head), @@ -106,8 +136,12 @@ static int ipc_second_process(void) if (ret) EXAMPLE_ABORT("unable copy out head data");
- if (head.magic != TEST_SEQ_MAGIC) - EXAMPLE_ABORT("Wrong head magic!"); + if (head.magic != TEST_SEQ_MAGIC) { + EXAMPLE_ERR("Wrong head magic! %x", head.magic); + for (int j = i; j < pkts; j++) + odp_packet_free(pkt_tbl[j]); + break; + }
/* Modify magic number in packet */ head.magic = TEST_SEQ_MAGIC_2; @@ -118,7 +152,7 @@ static int ipc_second_process(void) }
/* send all packets back */ - ret = ipc_odp_packet_send_or_free(ipc_pktio, pkt_tbl, pkts); + ret = ipc_odp_packet_send_or_free(ipc_pktio, pkt_tbl, i); if (ret < 0) EXAMPLE_ABORT("can not send packets\n");
@@ -176,16 +210,12 @@ not_started: int main(int argc, char *argv[]) { odp_instance_t instance; - odp_platform_init_t plat_idata; int ret;
/* Parse and store the application arguments */ parse_args(argc, argv);
- memset(&plat_idata, 0, sizeof(odp_platform_init_t)); - plat_idata.ipc_ns = ipc_name_space; - - if (odp_init_global(&instance, NULL, &plat_idata)) { + if (odp_init_global(&instance, NULL, NULL)) { EXAMPLE_ERR("Error: ODP global init failed.\n"); exit(EXIT_FAILURE); } @@ -196,7 +226,7 @@ int main(int argc, char *argv[]) exit(EXIT_FAILURE); }
- ret = ipc_second_process(); + ret = ipc_second_process(master_pid);
if (odp_term_local()) { EXAMPLE_ERR("Error: odp_term_local() failed.\n"); diff --git a/test/linux-generic/pktio_ipc/pktio_ipc_run.sh b/test/linux-generic/pktio_ipc/pktio_ipc_run.sh index bd64baf..3cd28f5 100755 --- a/test/linux-generic/pktio_ipc/pktio_ipc_run.sh +++ b/test/linux-generic/pktio_ipc/pktio_ipc_run.sh @@ -20,20 +20,15 @@ PATH=.:$PATH run() { local ret=0 - IPC_NS=`expr $$ + 5000` - IPC_NS=`expr ${IPC_NS} % 65000` - IPC_NS=`expr ${IPC_NS} + 2` - echo "Using ns ${IPC_NS}" - #if test was interrupted with CTRL+c than files #might remain in shm. Needed cleanely delete them. - rm -rf /dev/shm/odp-${IPC_NS}* 2>&1 > /dev/null + rm -rf /tmp/odp-* 2>&1 > /dev/null
echo "==== run pktio_ipc1 then pktio_ipc2 ====" - pktio_ipc1${EXEEXT} -n ${IPC_NS} -t 30 & + pktio_ipc1${EXEEXT} -t 30 & IPC_PID=$!
- pktio_ipc2${EXEEXT} -n ${IPC_NS} -t 10 + pktio_ipc2${EXEEXT} -p ${IPC_PID} -t 10 ret=$? # pktio_ipc1 should do clean up and exit just # after pktio_ipc2 exited. If it does not happen @@ -41,12 +36,12 @@ run() sleep 1 kill ${IPC_PID} 2>&1 > /dev/null if [ $? -eq 0 ]; then - rm -rf /dev/shm/odp-${IPC_NS}* 2>&1 > /dev/null + ls -l /tmp/odp* + rm -rf /tmp/odp-${IPC_PID}* 2>&1 > /dev/null fi
if [ $ret -ne 0 ]; then echo "!!!First stage FAILED $ret!!!" - ls -l /dev/shm/ exit $ret else echo "First stage PASSED" @@ -54,19 +49,17 @@ run()
echo "==== run pktio_ipc2 then pktio_ipc1 ====" - IPC_NS=`expr $IPC_NS - 1` - echo "Using ns ${IPC_NS}" - - pktio_ipc2${EXEEXT} -n ${IPC_NS} -t 10 & + pktio_ipc2${EXEEXT} -t 20 & IPC_PID=$!
- pktio_ipc1${EXEEXT} -n ${IPC_NS} -t 20 + pktio_ipc1${EXEEXT} -p ${IPC_PID} -t 10 ret=$? (kill ${IPC_PID} 2>&1 > /dev/null) > /dev/null || true
if [ $ret -ne 0 ]; then echo "!!! FAILED !!!" - ls -l /dev/shm/ + ls -l /tmp/odp* + rm -rf /tmp/odp-${IPC_PID}* 2>&1 > /dev/null exit $ret else echo "Second stage PASSED"
commit b4e0a8b91a422cbf28e0406a5076025894103984
Author: Maxim Uvarov <maxim.uvarov@linaro.org>
Date:   Wed Dec 14 22:57:54 2016 +0300
linux-gen: pktio ipc: more accurate settings of pool flags
Make the code more accurate and readable; no functional changes.
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
Reviewed-and-tested-by: Bill Fischofer <bill.fischofer@linaro.org>
diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index 4be3827..cae2759 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -497,14 +497,17 @@ static int check_params(odp_pool_param_t *params)
odp_pool_t odp_pool_create(const char *name, odp_pool_param_t *params) { + uint32_t shm_flags = 0; + + if (check_params(params)) + return ODP_POOL_INVALID; + #ifdef _ODP_PKTIO_IPC if (params && (params->type == ODP_POOL_PACKET)) - return pool_create(name, params, ODP_SHM_PROC); + shm_flags = ODP_SHM_PROC; #endif - if (check_params(params)) - return ODP_POOL_INVALID;
- return pool_create(name, params, 0); + return pool_create(name, params, shm_flags); }
int odp_pool_destroy(odp_pool_t pool_hdl)
commit a611a514682dea61ca142b51a28194a39a286fa7
Author: Maxim Uvarov <maxim.uvarov@linaro.org>
Date:   Wed Dec 14 22:57:53 2016 +0300
linux-gen: pktio ipc: ring changes
Make rings visible to other processes.
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
Reviewed-and-tested-by: Bill Fischofer <bill.fischofer@linaro.org>
diff --git a/platform/linux-generic/pktio/ring.c b/platform/linux-generic/pktio/ring.c index cc84e8a..aeda04b 100644 --- a/platform/linux-generic/pktio/ring.c +++ b/platform/linux-generic/pktio/ring.c @@ -160,7 +160,7 @@ _ring_create(const char *name, unsigned count, unsigned flags) odp_shm_t shm;
if (flags & _RING_SHM_PROC) - shm_flag = ODP_SHM_PROC; + shm_flag = ODP_SHM_PROC | ODP_SHM_EXPORT; else shm_flag = 0;
commit d345d75c975bd98f61bc2e04907b3e232d88083c
Author: Petri Savolainen <petri.savolainen@nokia.com>
Date:   Thu Dec 8 16:28:53 2016 +0200
example: ipsec: use op_param_t instead of op_params_t
Type name odp_crypto_op_params_t is deprecated.
Signed-off-by: Petri Savolainen <petri.savolainen@nokia.com>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Reviewed-by: Nikhil Agarwal <nikhil.agarwal@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/example/ipsec/odp_ipsec.c b/example/ipsec/odp_ipsec.c index 76ced49..7e34d06 100644 --- a/example/ipsec/odp_ipsec.c +++ b/example/ipsec/odp_ipsec.c @@ -148,7 +148,7 @@ typedef struct { uint32_t dst_ip; /**< SA dest IP address */
/* Output only */ - odp_crypto_op_params_t params; /**< Parameters for crypto call */ + odp_crypto_op_param_t params; /**< Parameters for crypto call */ uint32_t *ah_seq; /**< AH sequence number location */ uint32_t *esp_seq; /**< ESP sequence number location */ uint16_t *tun_hdr_id; /**< Tunnel header ID > */ @@ -644,7 +644,7 @@ pkt_disposition_e do_ipsec_in_classify(odp_packet_t pkt, odph_ahhdr_t *ah = NULL; odph_esphdr_t *esp = NULL; ipsec_cache_entry_t *entry; - odp_crypto_op_params_t params; + odp_crypto_op_param_t params; odp_bool_t posted = 0;
/* Default to skip IPsec */ @@ -823,7 +823,7 @@ pkt_disposition_e do_ipsec_out_classify(odp_packet_t pkt, uint16_t ip_data_len = ipv4_data_len(ip); uint8_t *ip_data = ipv4_data_p(ip); ipsec_cache_entry_t *entry; - odp_crypto_op_params_t params; + odp_crypto_op_param_t params; int hdr_len = 0; int trl_len = 0; odph_ahhdr_t *ah = NULL;
commit 7e40217271ae17ce19abd873140439c51a525fb1
Author: Petri Savolainen <petri.savolainen@nokia.com>
Date:   Thu Dec 8 16:05:30 2016 +0200
validation: crypto: use algorithm capability
Use new algorithm enumerations and capability functions.
Signed-off-by: Petri Savolainen <petri.savolainen@nokia.com>
Reviewed-by: Bill Fischofer <bill.fischofer@linaro.org>
Reviewed-by: Balasubramanian Manoharan <bala.manoharan@linaro.org>
Reviewed-by: Nikhil Agarwal <nikhil.agarwal@linaro.org>
Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
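The test now checks the requested key, IV and digest lengths against what odp_crypto_cipher_capability() and odp_crypto_auth_capability() report. A condensed sketch of that check (assuming the standard ODP crypto API and the <odp_api.h> umbrella header; the helper name is illustrative):

#include <odp_api.h>

#define MAX_ALG_CAPA 32

/* Return 1 if the cipher algorithm advertises a capability matching the
 * given key and IV lengths, 0 otherwise. */
static int cipher_capa_supported(odp_cipher_alg_t alg,
				 uint32_t key_len, uint32_t iv_len)
{
	odp_crypto_cipher_capability_t capa[MAX_ALG_CAPA];
	int num, i;

	num = odp_crypto_cipher_capability(alg, capa, MAX_ALG_CAPA);
	if (num > MAX_ALG_CAPA)
		num = MAX_ALG_CAPA;

	for (i = 0; i < num; i++)
		if (capa[i].key_len == key_len && capa[i].iv_len == iv_len)
			return 1;

	return 0;
}
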
diff --git a/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c b/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c index 55fc6aa..de9d6e4 100644 --- a/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c +++ b/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c @@ -11,6 +11,8 @@ #include "odp_crypto_test_inp.h" #include "crypto.h"
+#define MAX_ALG_CAPA 32 + struct suite_context_s { odp_crypto_op_mode_t pref_mode; odp_pool_t pool; @@ -42,8 +44,7 @@ static void alg_test(odp_crypto_op_t op, const uint8_t *ciphertext, unsigned int ciphertext_len, const uint8_t *digest, - unsigned int digest_len - ) + uint32_t digest_len) { odp_crypto_session_t session; odp_crypto_capability_t capability; @@ -57,6 +58,10 @@ static void alg_test(odp_crypto_op_t op, odp_crypto_op_param_t op_params; uint8_t *data_addr; int data_off; + odp_crypto_cipher_capability_t cipher_capa[MAX_ALG_CAPA]; + odp_crypto_auth_capability_t auth_capa[MAX_ALG_CAPA]; + int num, i; + int found;
rc = odp_crypto_capability(&capability); CU_ASSERT(!rc); @@ -65,36 +70,36 @@ static void alg_test(odp_crypto_op_t op, if (cipher_alg == ODP_CIPHER_ALG_3DES_CBC && !(capability.hw_ciphers.bit.trides_cbc)) rc = -1; - if (cipher_alg == ODP_CIPHER_ALG_AES128_CBC && - !(capability.hw_ciphers.bit.aes128_cbc)) + if (cipher_alg == ODP_CIPHER_ALG_AES_CBC && + !(capability.hw_ciphers.bit.aes_cbc)) rc = -1; - if (cipher_alg == ODP_CIPHER_ALG_AES128_GCM && - !(capability.hw_ciphers.bit.aes128_gcm)) + if (cipher_alg == ODP_CIPHER_ALG_AES_GCM && + !(capability.hw_ciphers.bit.aes_gcm)) rc = -1; } else { if (cipher_alg == ODP_CIPHER_ALG_3DES_CBC && !(capability.ciphers.bit.trides_cbc)) rc = -1; - if (cipher_alg == ODP_CIPHER_ALG_AES128_CBC && - !(capability.ciphers.bit.aes128_cbc)) + if (cipher_alg == ODP_CIPHER_ALG_AES_CBC && + !(capability.ciphers.bit.aes_cbc)) rc = -1; - if (cipher_alg == ODP_CIPHER_ALG_AES128_GCM && - !(capability.ciphers.bit.aes128_gcm)) + if (cipher_alg == ODP_CIPHER_ALG_AES_GCM && + !(capability.ciphers.bit.aes_gcm)) rc = -1; }
CU_ASSERT(!rc);
if (capability.hw_auths.all_bits) { - if (auth_alg == ODP_AUTH_ALG_AES128_GCM && - !(capability.hw_auths.bit.aes128_gcm)) + if (auth_alg == ODP_AUTH_ALG_AES_GCM && + !(capability.hw_auths.bit.aes_gcm)) rc = -1; if (auth_alg == ODP_AUTH_ALG_NULL && !(capability.hw_auths.bit.null)) rc = -1; } else { - if (auth_alg == ODP_AUTH_ALG_AES128_GCM && - !(capability.auths.bit.aes128_gcm)) + if (auth_alg == ODP_AUTH_ALG_AES_GCM && + !(capability.auths.bit.aes_gcm)) rc = -1; if (auth_alg == ODP_AUTH_ALG_NULL && !(capability.auths.bit.null)) @@ -103,6 +108,59 @@ static void alg_test(odp_crypto_op_t op,
CU_ASSERT(!rc);
+ num = odp_crypto_cipher_capability(cipher_alg, cipher_capa, + MAX_ALG_CAPA); + + if (cipher_alg != ODP_CIPHER_ALG_NULL) { + CU_ASSERT(num > 0); + found = 0; + } else { + CU_ASSERT(num == 0); + found = 1; + } + + CU_ASSERT(num <= MAX_ALG_CAPA); + + if (num > MAX_ALG_CAPA) + num = MAX_ALG_CAPA; + + /* Search for the test case */ + for (i = 0; i < num; i++) { + if (cipher_capa[i].key_len == cipher_key.length && + cipher_capa[i].iv_len == ses_iv.length) { + found = 1; + break; + } + } + + CU_ASSERT(found); + + num = odp_crypto_auth_capability(auth_alg, auth_capa, MAX_ALG_CAPA); + + if (auth_alg != ODP_AUTH_ALG_NULL) { + CU_ASSERT(num > 0); + found = 0; + } else { + CU_ASSERT(num == 0); + found = 1; + } + + CU_ASSERT(num <= MAX_ALG_CAPA); + + if (num > MAX_ALG_CAPA) + num = MAX_ALG_CAPA; + + /* Search for the test case */ + for (i = 0; i < num; i++) { + if (auth_capa[i].digest_len == digest_len && + auth_capa[i].key_len == auth_key.length) { + found = 1; + break; + } + } + + CU_ASSERT(found); + /* Create a crypto session */ odp_crypto_session_param_init(&ses_params); ses_params.op = op; @@ -345,11 +403,11 @@ void crypto_test_enc_alg_aes128_gcm(void) iv.length = sizeof(aes128_gcm_reference_iv[i]);
alg_test(ODP_CRYPTO_OP_ENCODE, - ODP_CIPHER_ALG_AES128_GCM, + ODP_CIPHER_ALG_AES_GCM, iv, NULL, cipher_key, - ODP_AUTH_ALG_AES128_GCM, + ODP_AUTH_ALG_AES_GCM, auth_key, &aes128_gcm_cipher_range[i], &aes128_gcm_auth_range[i], @@ -381,11 +439,11 @@ void crypto_test_enc_alg_aes128_gcm_ovr_iv(void) cipher_key.length = sizeof(aes128_gcm_reference_key[i]);
alg_test(ODP_CRYPTO_OP_ENCODE, - ODP_CIPHER_ALG_AES128_GCM, + ODP_CIPHER_ALG_AES_GCM, iv, aes128_gcm_reference_iv[i], cipher_key, - ODP_AUTH_ALG_AES128_GCM, + ODP_AUTH_ALG_AES_GCM, auth_key, &aes128_gcm_cipher_range[i], &aes128_gcm_auth_range[i], @@ -420,11 +478,11 @@ void crypto_test_dec_alg_aes128_gcm(void) iv.length = sizeof(aes128_gcm_reference_iv[i]);
alg_test(ODP_CRYPTO_OP_DECODE, - ODP_CIPHER_ALG_AES128_GCM, + ODP_CIPHER_ALG_AES_GCM, iv, NULL, cipher_key, - ODP_AUTH_ALG_AES128_GCM, + ODP_AUTH_ALG_AES_GCM, auth_key, &aes128_gcm_cipher_range[i], &aes128_gcm_auth_range[i], @@ -457,11 +515,11 @@ void crypto_test_dec_alg_aes128_gcm_ovr_iv(void) cipher_key.length = sizeof(aes128_gcm_reference_key[i]);
alg_test(ODP_CRYPTO_OP_DECODE, - ODP_CIPHER_ALG_AES128_GCM, + ODP_CIPHER_ALG_AES_GCM, iv, aes128_gcm_reference_iv[i], cipher_key, - ODP_AUTH_ALG_AES128_GCM, + ODP_AUTH_ALG_AES_GCM, auth_key, &aes128_gcm_cipher_range[i], &aes128_gcm_auth_range[i], @@ -495,7 +553,7 @@ void crypto_test_enc_alg_aes128_cbc(void) iv.length = sizeof(aes128_cbc_reference_iv[i]);
alg_test(ODP_CRYPTO_OP_ENCODE, - ODP_CIPHER_ALG_AES128_CBC, + ODP_CIPHER_ALG_AES_CBC, iv, NULL, cipher_key, @@ -526,7 +584,7 @@ void crypto_test_enc_alg_aes128_cbc_ovr_iv(void) cipher_key.length = sizeof(aes128_cbc_reference_key[i]);
alg_test(ODP_CRYPTO_OP_ENCODE, - ODP_CIPHER_ALG_AES128_CBC, + ODP_CIPHER_ALG_AES_CBC, iv, aes128_cbc_reference_iv[i], cipher_key, @@ -561,7 +619,7 @@ void crypto_test_dec_alg_aes128_cbc(void) iv.length = sizeof(aes128_cbc_reference_iv[i]);
alg_test(ODP_CRYPTO_OP_DECODE, - ODP_CIPHER_ALG_AES128_CBC, + ODP_CIPHER_ALG_AES_CBC, iv, NULL, cipher_key, @@ -594,7 +652,7 @@ void crypto_test_dec_alg_aes128_cbc_ovr_iv(void) cipher_key.length = sizeof(aes128_cbc_reference_key[i]);
alg_test(ODP_CRYPTO_OP_DECODE, - ODP_CIPHER_ALG_AES128_CBC, + ODP_CIPHER_ALG_AES_CBC, iv, aes128_cbc_reference_iv[i], cipher_key, @@ -634,7 +692,7 @@ void crypto_test_alg_hmac_md5(void) iv, iv.data, cipher_key, - ODP_AUTH_ALG_MD5_96, + ODP_AUTH_ALG_MD5_HMAC, auth_key, NULL, NULL, hmac_md5_reference_plaintext[i], @@ -672,7 +730,7 @@ void crypto_test_alg_hmac_sha256(void) iv, iv.data, cipher_key, - ODP_AUTH_ALG_SHA256_128, + ODP_AUTH_ALG_SHA256_HMAC, auth_key, NULL, NULL, hmac_sha256_reference_plaintext[i],
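The test change above boils down to a capability lookup before session creation. A condensed sketch of the same pattern, reusing the MAX_ALG_CAPA bound from the test code; the helper name is illustrative:

    #include <odp_api.h>

    #define MAX_ALG_CAPA 32   /* same bound the test above uses */

    /* Return 1 when 'alg' is supported with exactly this key length and
     * IV length, 0 otherwise. */
    static int cipher_combo_supported(odp_cipher_alg_t alg,
                                      uint32_t key_len, uint32_t iv_len)
    {
        odp_crypto_cipher_capability_t capa[MAX_ALG_CAPA];
        int i, num;

        num = odp_crypto_cipher_capability(alg, capa, MAX_ALG_CAPA);
        if (num < 0)
            return 0;               /* query failed */
        if (num > MAX_ALG_CAPA)
            num = MAX_ALG_CAPA;     /* only MAX_ALG_CAPA entries were written */

        for (i = 0; i < num; i++)
            if (capa[i].key_len == key_len && capa[i].iv_len == iv_len)
                return 1;

        return 0;
    }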
commit fbc400dc8c35c220cbb41531d12c933f7c4226d1 Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 8 16:05:29 2016 +0200
test: crypto: use odp_crypto_session_param_init
Use the session param init function instead of memset() to zero the structure.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Nikhil Agarwal nikhil.agarwal@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/example/ipsec/odp_ipsec_cache.c b/example/ipsec/odp_ipsec_cache.c index 2bd44cf..b2a91c2 100644 --- a/example/ipsec/odp_ipsec_cache.c +++ b/example/ipsec/odp_ipsec_cache.c @@ -44,7 +44,7 @@ int create_ipsec_cache_entry(sa_db_entry_t *cipher_sa, odp_queue_t completionq, odp_pool_t out_pool) { - odp_crypto_session_params_t params; + odp_crypto_session_param_t params; ipsec_cache_entry_t *entry; odp_crypto_ses_create_err_t ses_create_rc; odp_crypto_session_t session; @@ -60,6 +60,8 @@ int create_ipsec_cache_entry(sa_db_entry_t *cipher_sa, (cipher_sa->mode != auth_sa->mode)) return -1;
+ odp_crypto_session_param_init(¶ms); + /* Setup parameters and call crypto library to create session */ params.op = (in) ? ODP_CRYPTO_OP_DECODE : ODP_CRYPTO_OP_ENCODE; params.auth_cipher_text = TRUE; diff --git a/test/common_plat/performance/odp_crypto.c b/test/common_plat/performance/odp_crypto.c index 39df78b..9936288 100644 --- a/test/common_plat/performance/odp_crypto.c +++ b/test/common_plat/performance/odp_crypto.c @@ -49,7 +49,7 @@ static uint8_t test_key24[24] = { 0x01, 0x02, 0x03, 0x04, 0x05, */ typedef struct { const char *name; /**< Algorithm name */ - odp_crypto_session_params_t session; /**< Prefilled crypto session params */ + odp_crypto_session_param_t session; /**< Prefilled crypto session params */ unsigned int hash_adjust; /**< Size of hash */ } crypto_alg_config_t;
@@ -420,12 +420,13 @@ create_session_from_config(odp_crypto_session_t *session, crypto_alg_config_t *config, crypto_args_t *cargs) { - odp_crypto_session_params_t params; + odp_crypto_session_param_t params; odp_crypto_ses_create_err_t ses_create_rc; odp_pool_t pkt_pool; odp_queue_t out_queue;
- memcpy(¶ms, &config->session, sizeof(odp_crypto_session_params_t)); + odp_crypto_session_param_init(¶ms); + memcpy(¶ms, &config->session, sizeof(odp_crypto_session_param_t)); params.op = ODP_CRYPTO_OP_ENCODE; params.pref_mode = ODP_CRYPTO_SYNC;
@@ -468,7 +469,7 @@ run_measure_one(crypto_args_t *cargs, unsigned int payload_length, crypto_run_result_t *result) { - odp_crypto_op_params_t params; + odp_crypto_op_param_t params;
odp_pool_t pkt_pool; odp_queue_t out_queue; diff --git a/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c b/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c index 4ac4a07..55fc6aa 100644 --- a/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c +++ b/test/common_plat/validation/api/crypto/odp_crypto_test_inp.c @@ -53,8 +53,8 @@ static void alg_test(odp_crypto_op_t op, odp_event_t event; odp_crypto_compl_t compl_event; odp_crypto_op_result_t result; - odp_crypto_session_params_t ses_params; - odp_crypto_op_params_t op_params; + odp_crypto_session_param_t ses_params; + odp_crypto_op_param_t op_params; uint8_t *data_addr; int data_off;
@@ -104,7 +104,7 @@ static void alg_test(odp_crypto_op_t op, CU_ASSERT(!rc);
/* Create a crypto session */ - memset(&ses_params, 0, sizeof(ses_params)); + odp_crypto_session_param_init(&ses_params); ses_params.op = op; ses_params.auth_cipher_text = false; ses_params.pref_mode = suite_context.pref_mode;
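The conversion is mechanical (memset() replaced by the initializer); the resulting setup pattern looks roughly like this sketch, with an illustrative helper name and field choices:

    #include <odp_api.h>

    /* Same pattern as the converted test code: start from the defaults the
     * implementation provides, then override only what the caller needs. */
    static void fill_session_params(odp_crypto_session_param_t *p,
                                    odp_crypto_op_t op,
                                    odp_crypto_op_mode_t mode)
    {
        odp_crypto_session_param_init(p);   /* instead of memset(p, 0, ...) */

        p->op               = op;
        p->auth_cipher_text = false;        /* authenticate plain text */
        p->pref_mode        = mode;
    }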
commit 6cef872afcac78016d095d426fa9f3d9055c3856 Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 8 16:05:28 2016 +0200
api: crypto: documentation clean up
Moved the documentation of struct fields above each field. Removed references to buffers, as the crypto API works only with packets.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Nikhil Agarwal nikhil.agarwal@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/crypto.h b/include/odp/api/spec/crypto.h index 0fb6d05..9855bf9 100644 --- a/include/odp/api/spec/crypto.h +++ b/include/odp/api/spec/crypto.h @@ -198,117 +198,166 @@ typedef union odp_crypto_auth_algos_t { * Crypto API key structure */ typedef struct odp_crypto_key { - uint8_t *data; /**< Key data */ - uint32_t length; /**< Key length in bytes */ + /** Key data */ + uint8_t *data; + + /** Key length in bytes */ + uint32_t length; + } odp_crypto_key_t;
/** * Crypto API IV structure */ typedef struct odp_crypto_iv { - uint8_t *data; /**< IV data */ - uint32_t length; /**< IV length in bytes */ + /** IV data */ + uint8_t *data; + + /** IV length in bytes */ + uint32_t length; + } odp_crypto_iv_t;
/** * Crypto API data range specifier */ typedef struct odp_crypto_data_range { - uint32_t offset; /**< Offset from beginning of buffer (chain) */ - uint32_t length; /**< Length of data to operate on */ + /** Offset from beginning of packet */ + uint32_t offset; + + /** Length of data to operate on */ + uint32_t length; + } odp_crypto_data_range_t;
/** * Crypto API session creation parameters */ typedef struct odp_crypto_session_param_t { - odp_crypto_op_t op; /**< Encode versus decode */ - odp_bool_t auth_cipher_text; /**< Authenticate/cipher ordering */ - odp_crypto_op_mode_t pref_mode; /**< Preferred sync vs async */ - odp_cipher_alg_t cipher_alg; /**< Cipher algorithm */ - odp_crypto_key_t cipher_key; /**< Cipher key */ - odp_crypto_iv_t iv; /**< Cipher Initialization Vector (IV) */ - odp_auth_alg_t auth_alg; /**< Authentication algorithm */ - odp_crypto_key_t auth_key; /**< Authentication key */ - odp_queue_t compl_queue; /**< Async mode completion event queue */ - odp_pool_t output_pool; /**< Output buffer pool */ + /** Encode vs. decode operation */ + odp_crypto_op_t op; + + /** Authenticate cipher vs. plain text + * + * Controls ordering of authentication and cipher operations, + * and is relative to the operation (encode vs decode). When encoding, + * TRUE indicates the authentication operation should be performed + * after the cipher operation else before. When decoding, TRUE + * indicates the reverse order of operation. + * + * true: Authenticate cipher text + * false: Authenticate plain text + */ + odp_bool_t auth_cipher_text; + + /** Preferred sync vs. async */ + odp_crypto_op_mode_t pref_mode; + + /** Cipher algorithm + * + * Use odp_crypto_capability() for supported algorithms. + */ + odp_cipher_alg_t cipher_alg; + + /** Cipher key + * + * Use odp_crypto_cipher_capa() for supported key and IV lengths. + */ + odp_crypto_key_t cipher_key; + + /** Cipher Initialization Vector (IV) */ + odp_crypto_iv_t iv; + + /** Authentication algorithm + * + * Use odp_crypto_capability() for supported algorithms. + */ + odp_auth_alg_t auth_alg; + + /** Authentication key + * + * Use odp_crypto_auth_capa() for supported digest and key lengths. + */ + odp_crypto_key_t auth_key; + + /** Async mode completion event queue + * + * When odp_crypto_operation() is asynchronous, the completion queue is + * used to return the completion status of the operation to the + * application. + */ + odp_queue_t compl_queue; + + /** Output pool + * + * When the output packet is not specified during the call to + * odp_crypto_operation(), the output packet will be allocated + * from this pool. + */ + odp_pool_t output_pool; + } odp_crypto_session_param_t;
/** @deprecated Use odp_crypto_session_param_t instead */ typedef odp_crypto_session_param_t odp_crypto_session_params_t;
/** - * @var odp_crypto_session_params_t::auth_cipher_text - * - * Controls ordering of authentication and cipher operations, - * and is relative to the operation (encode vs decode). - * When encoding, @c TRUE indicates the authentication operation - * should be performed @b after the cipher operation else before. - * When decoding, @c TRUE indicates the reverse order of operation. - * - * @var odp_crypto_session_params_t::compl_queue - * - * When the API operates asynchronously, the completion queue is - * used to return the completion status of the operation to the - * application. - * - * @var odp_crypto_session_params_t::output_pool - * - * When the output packet is not specified during the call to - * odp_crypto_operation, the output packet buffer will be allocated - * from this pool. - */ - -/** * Crypto API per packet operation parameters */ typedef struct odp_crypto_op_param_t { - odp_crypto_session_t session; /**< Session handle from creation */ - void *ctx; /**< User context */ - odp_packet_t pkt; /**< Input packet buffer */ - odp_packet_t out_pkt; /**< Output packet buffer */ - uint8_t *override_iv_ptr; /**< Override session IV pointer */ - uint32_t hash_result_offset; /**< Offset from start of packet buffer for hash result */ - odp_crypto_data_range_t cipher_range; /**< Data range to apply cipher */ - odp_crypto_data_range_t auth_range; /**< Data range to authenticate */ + /** Session handle from creation */ + odp_crypto_session_t session; + + /** User context */ + void *ctx; + + /** Input packet + * + * Specifies the input packet for the crypto operation. When the + * 'out_pkt' variable is set to ODP_PACKET_INVALID (indicating a new + * packet should be allocated for the resulting packet). + */ + odp_packet_t pkt; + + /** Output packet + * + * Both "in place" (the original packet 'pkt' is modified) and + * "copy" (the packet is replicated to a new packet which contains + * the modified data) modes are supported. The "in place" mode of + * operation is indicated by setting 'out_pkt' equal to 'pkt'. + * For the copy mode of operation, setting 'out_pkt' to a valid packet + * value indicates the caller wishes to specify the destination packet. + * Setting 'out_pkt' to ODP_PACKET_INVALID indicates the caller wishes + * the destination packet be allocated from the output pool specified + * during session creation. + */ + odp_packet_t out_pkt; + + /** Override session IV pointer */ + uint8_t *override_iv_ptr; + + /** Offset from start of packet for hash result + * + * Specifies the offset where the hash result is to be stored. In case + * of decode sessions, input hash values will be read from this offset, + * and overwritten with hash results. If this offset lies within + * specified 'auth_range', implementation will mute this field before + * calculating the hash result. + */ + uint32_t hash_result_offset; + + /** Data range to apply cipher */ + odp_crypto_data_range_t cipher_range; + + /** Data range to authenticate */ + odp_crypto_data_range_t auth_range; + } odp_crypto_op_param_t;
/** @deprecated Use odp_crypto_op_param_t instead */ typedef odp_crypto_op_param_t odp_crypto_op_params_t;
/** - * @var odp_crypto_op_params_t::pkt - * Specifies the input packet buffer for the crypto operation. When the - * @c out_pkt variable is set to @c ODP_PACKET_INVALID (indicating a new - * buffer should be allocated for the resulting packet), the #define TBD - * indicates whether the implementation will free the input packet buffer - * or if it becomes the responsibility of the caller. - * - * @var odp_crypto_op_params_t::out_pkt - * - * The API supports both "in place" (the original packet "pkt" is - * modified) and "copy" (the packet is replicated to a new buffer - * which contains the modified data). - * - * The "in place" mode of operation is indicated by setting @c out_pkt - * equal to @c pkt. For the copy mode of operation, setting @c out_pkt - * to a valid packet buffer value indicates the caller wishes to specify - * the destination buffer. Setting @c out_pkt to @c ODP_PACKET_INVALID - * indicates the caller wishes the destination packet buffer be allocated - * from the output pool specified during session creation. - * - * @var odp_crypto_op_params_t::hash_result_offset - * - * Specifies the offset where the hash result is to be stored. In case of - * decode sessions, input hash values will be read from this offset, and - * overwritten with hash results. If this offset lies within specified - * auth_range, implementation will mute this field before calculating the hash - * result. - * - * @sa odp_crypto_session_params_t::output_pool. - */ - -/** * Crypto API session creation return code */ typedef enum { @@ -346,7 +395,7 @@ typedef enum { ODP_CRYPTO_HW_ERR_NONE, /** Error detected during DMA of data */ ODP_CRYPTO_HW_ERR_DMA, - /** Operation failed due to buffer pool depletion */ + /** Operation failed due to pool depletion */ ODP_CRYPTO_HW_ERR_BP_DEPLETED, } odp_crypto_hw_err_t;
@@ -354,19 +403,33 @@ typedef enum { * Cryto API per packet operation completion status */ typedef struct odp_crypto_compl_status { - odp_crypto_alg_err_t alg_err; /**< Algorithm specific return code */ - odp_crypto_hw_err_t hw_err; /**< Hardware specific return code */ + /** Algorithm specific return code */ + odp_crypto_alg_err_t alg_err; + + /** Hardware specific return code */ + odp_crypto_hw_err_t hw_err; + } odp_crypto_compl_status_t;
/** * Crypto API operation result */ typedef struct odp_crypto_op_result { - odp_bool_t ok; /**< Request completed successfully */ - void *ctx; /**< User context from request */ - odp_packet_t pkt; /**< Output packet */ - odp_crypto_compl_status_t cipher_status; /**< Cipher status */ - odp_crypto_compl_status_t auth_status; /**< Authentication status */ + /** Request completed successfully */ + odp_bool_t ok; + + /** User context from request */ + void *ctx; + + /** Output packet */ + odp_packet_t pkt; + + /** Cipher status */ + odp_crypto_compl_status_t cipher_status; + + /** Authentication status */ + odp_crypto_compl_status_t auth_status; + } odp_crypto_op_result_t;
/**
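The relocated out_pkt documentation above names three destination conventions; a hypothetical helper that sets each of them (the helper name, the mode switch and the packet handles are illustrative):

    #include <odp_api.h>

    /* The three destination conventions described above, in one place.
     * 'pkt' and 'dst_pkt' are assumed to be valid packet handles. */
    static void set_output_mode(odp_crypto_op_param_t *param, odp_packet_t pkt,
                                odp_packet_t dst_pkt, int mode)
    {
        param->pkt = pkt;

        if (mode == 0)
            param->out_pkt = pkt;                 /* in place: modify 'pkt' */
        else if (mode == 1)
            param->out_pkt = dst_pkt;             /* copy into caller's packet */
        else
            param->out_pkt = ODP_PACKET_INVALID;  /* allocate from the session's
                                                     output_pool */
    }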
commit ffb22c37e4a483cc647c8ba8f4a9329fa83639aa Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 8 16:05:27 2016 +0200
api: crypto: added session param init
Added a session parameter init function, which should be used to initialize the structure before calling odp_crypto_session_create().
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Nikhil Agarwal nikhil.agarwal@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/crypto.h b/include/odp/api/spec/crypto.h index 4b94824..0fb6d05 100644 --- a/include/odp/api/spec/crypto.h +++ b/include/odp/api/spec/crypto.h @@ -479,7 +479,11 @@ int odp_crypto_auth_capability(odp_auth_alg_t auth, odp_crypto_auth_capability_t capa[], int num);
/** - * Crypto session creation (synchronous) + * Crypto session creation + * + * Create a crypto session according to the session parameters. Use + * odp_crypto_session_param_init() to initialize parameters into their + * default values. * * @param param Session parameters * @param session Created session else ODP_CRYPTO_SESSION_INVALID @@ -589,6 +593,16 @@ uint64_t odp_crypto_session_to_u64(odp_crypto_session_t hdl); uint64_t odp_crypto_compl_to_u64(odp_crypto_compl_t hdl);
/** + * Initialize crypto session parameters + * + * Initialize an odp_crypto_session_param_t to its default values for + * all fields. + * + * @param param Pointer to odp_crypto_session_param_t to be initialized + */ +void odp_crypto_session_param_init(odp_crypto_session_param_t *param); + +/** * @} */
diff --git a/platform/linux-generic/odp_crypto.c b/platform/linux-generic/odp_crypto.c index fd121c8..6b7d60e 100644 --- a/platform/linux-generic/odp_crypto.c +++ b/platform/linux-generic/odp_crypto.c @@ -1042,3 +1042,8 @@ odp_crypto_compl_free(odp_crypto_compl_t completion_event) odp_buffer_from_event((odp_event_t)completion_event), ODP_EVENT_PACKET); } + +void odp_crypto_session_param_init(odp_crypto_session_param_t *param) +{ + memset(param, 0, sizeof(odp_crypto_session_param_t)); +}
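Combined with the spec text above, the intended call sequence is roughly the following sketch; the algorithm choices, key material and queue/pool handles are placeholders, not taken from the patch:

    #include <odp_api.h>

    static odp_crypto_session_t create_session(odp_queue_t compl_queue,
                                               odp_pool_t out_pool,
                                               odp_crypto_key_t cipher_key,
                                               odp_crypto_key_t auth_key)
    {
        odp_crypto_session_param_t param;
        odp_crypto_session_t session = ODP_CRYPTO_SESSION_INVALID;
        odp_crypto_ses_create_err_t err;

        odp_crypto_session_param_init(&param);   /* defaults for all fields */

        param.op          = ODP_CRYPTO_OP_ENCODE;
        param.cipher_alg  = ODP_CIPHER_ALG_AES_CBC;
        param.cipher_key  = cipher_key;
        param.auth_alg    = ODP_AUTH_ALG_SHA256_HMAC;
        param.auth_key    = auth_key;
        param.compl_queue = compl_queue;
        param.output_pool = out_pool;
        /* iv left at its default; a per-operation override_iv_ptr is assumed */

        if (odp_crypto_session_create(&param, &session, &err))
            return ODP_CRYPTO_SESSION_INVALID;

        return session;
    }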
commit 3fd85fb6f45d859e6f19eeadda69992858f06f22 Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 8 16:05:26 2016 +0200
linux-gen: crypto: add support for new enumerations
Added support for new algorithm enumerations and algorithm capability functions.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Nikhil Agarwal nikhil.agarwal@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/include/odp_crypto_internal.h b/platform/linux-generic/include/odp_crypto_internal.h index 7b4eb61..c7b893a 100644 --- a/platform/linux-generic/include/odp_crypto_internal.h +++ b/platform/linux-generic/include/odp_crypto_internal.h @@ -14,6 +14,7 @@ extern "C" { #include <openssl/des.h> #include <openssl/aes.h>
+#define MAX_IV_LEN 64 #define OP_RESULT_MAGIC 0x91919191
/** Forward declaration of session structure */ @@ -31,16 +32,16 @@ odp_crypto_alg_err_t (*crypto_func_t)(odp_crypto_op_param_t *param, */ struct odp_crypto_generic_session { struct odp_crypto_generic_session *next; - odp_crypto_op_t op; + + /* Session creation parameters */ + odp_crypto_session_param_t p; + odp_bool_t do_cipher_first; - odp_queue_t compl_queue; - odp_pool_t output_pool; + struct { - odp_cipher_alg_t alg; - struct { - uint8_t *data; - size_t len; - } iv; + /* Copy of session IV data */ + uint8_t iv_data[MAX_IV_LEN]; + union { struct { DES_key_schedule ks1; @@ -56,8 +57,8 @@ struct odp_crypto_generic_session { } data; crypto_func_t func; } cipher; + struct { - odp_auth_alg_t alg; union { struct { uint8_t key[16]; diff --git a/platform/linux-generic/odp_crypto.c b/platform/linux-generic/odp_crypto.c index 44b8e06..fd121c8 100644 --- a/platform/linux-generic/odp_crypto.c +++ b/platform/linux-generic/odp_crypto.c @@ -249,8 +249,8 @@ odp_crypto_alg_err_t aes_encrypt(odp_crypto_op_param_t *param,
if (param->override_iv_ptr) iv_ptr = param->override_iv_ptr; - else if (session->cipher.iv.data) - iv_ptr = session->cipher.iv.data; + else if (session->p.iv.data) + iv_ptr = session->cipher.iv_data; else return ODP_CRYPTO_ALG_ERR_IV_INVALID;
@@ -281,8 +281,8 @@ odp_crypto_alg_err_t aes_decrypt(odp_crypto_op_param_t *param,
if (param->override_iv_ptr) iv_ptr = param->override_iv_ptr; - else if (session->cipher.iv.data) - iv_ptr = session->cipher.iv.data; + else if (session->p.iv.data) + iv_ptr = session->cipher.iv_data; else return ODP_CRYPTO_ALG_ERR_IV_INVALID;
@@ -302,22 +302,20 @@ odp_crypto_alg_err_t aes_decrypt(odp_crypto_op_param_t *param, return ODP_CRYPTO_ALG_ERR_NONE; }
-static -int process_aes_param(odp_crypto_generic_session_t *session, - odp_crypto_session_param_t *param) +static int process_aes_param(odp_crypto_generic_session_t *session) { /* Verify IV len is either 0 or 16 */ - if (!((0 == param->iv.length) || (16 == param->iv.length))) + if (!((0 == session->p.iv.length) || (16 == session->p.iv.length))) return -1;
/* Set function */ - if (ODP_CRYPTO_OP_ENCODE == param->op) { + if (ODP_CRYPTO_OP_ENCODE == session->p.op) { session->cipher.func = aes_encrypt; - AES_set_encrypt_key(param->cipher_key.data, 128, + AES_set_encrypt_key(session->p.cipher_key.data, 128, &session->cipher.data.aes.key); } else { session->cipher.func = aes_decrypt; - AES_set_decrypt_key(param->cipher_key.data, 128, + AES_set_decrypt_key(session->p.cipher_key.data, 128, &session->cipher.data.aes.key); }
@@ -340,8 +338,8 @@ odp_crypto_alg_err_t aes_gcm_encrypt(odp_crypto_op_param_t *param,
if (param->override_iv_ptr) iv_ptr = param->override_iv_ptr; - else if (session->cipher.iv.data) - iv_ptr = session->cipher.iv.data; + else if (session->p.iv.data) + iv_ptr = session->cipher.iv_data; else return ODP_CRYPTO_ALG_ERR_IV_INVALID;
@@ -405,8 +403,8 @@ odp_crypto_alg_err_t aes_gcm_decrypt(odp_crypto_op_param_t *param,
if (param->override_iv_ptr) iv_ptr = param->override_iv_ptr; - else if (session->cipher.iv.data) - iv_ptr = session->cipher.iv.data; + else if (session->p.iv.data) + iv_ptr = session->cipher.iv_data; else return ODP_CRYPTO_ALG_ERR_IV_INVALID;
@@ -455,19 +453,17 @@ odp_crypto_alg_err_t aes_gcm_decrypt(odp_crypto_op_param_t *param, return ODP_CRYPTO_ALG_ERR_NONE; }
-static -int process_aes_gcm_param(odp_crypto_generic_session_t *session, - odp_crypto_session_param_t *param) +static int process_aes_gcm_param(odp_crypto_generic_session_t *session) { /* Verify Key len is 16 */ - if (param->cipher_key.length != 16) + if (session->p.cipher_key.length != 16) return -1;
/* Set function */ EVP_CIPHER_CTX *ctx = session->cipher.data.aes_gcm.ctx = EVP_CIPHER_CTX_new();
- if (ODP_CRYPTO_OP_ENCODE == param->op) { + if (ODP_CRYPTO_OP_ENCODE == session->p.op) { session->cipher.func = aes_gcm_encrypt; EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, NULL, NULL); } else { @@ -476,13 +472,13 @@ int process_aes_gcm_param(odp_crypto_generic_session_t *session, }
EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN, - param->iv.length, NULL); - if (ODP_CRYPTO_OP_ENCODE == param->op) { + session->p.iv.length, NULL); + if (ODP_CRYPTO_OP_ENCODE == session->p.op) { EVP_EncryptInit_ex(ctx, NULL, NULL, - param->cipher_key.data, NULL); + session->p.cipher_key.data, NULL); } else { EVP_DecryptInit_ex(ctx, NULL, NULL, - param->cipher_key.data, NULL); + session->p.cipher_key.data, NULL); }
return 0; @@ -499,8 +495,8 @@ odp_crypto_alg_err_t des_encrypt(odp_crypto_op_param_t *param,
if (param->override_iv_ptr) iv_ptr = param->override_iv_ptr; - else if (session->cipher.iv.data) - iv_ptr = session->cipher.iv.data; + else if (session->p.iv.data) + iv_ptr = session->cipher.iv_data; else return ODP_CRYPTO_ALG_ERR_IV_INVALID;
@@ -537,8 +533,8 @@ odp_crypto_alg_err_t des_decrypt(odp_crypto_op_param_t *param,
if (param->override_iv_ptr) iv_ptr = param->override_iv_ptr; - else if (session->cipher.iv.data) - iv_ptr = session->cipher.iv.data; + else if (session->p.iv.data) + iv_ptr = session->cipher.iv_data; else return ODP_CRYPTO_ALG_ERR_IV_INVALID;
@@ -565,38 +561,34 @@ odp_crypto_alg_err_t des_decrypt(odp_crypto_op_param_t *param, return ODP_CRYPTO_ALG_ERR_NONE; }
-static -int process_des_param(odp_crypto_generic_session_t *session, - odp_crypto_session_param_t *param) +static int process_des_param(odp_crypto_generic_session_t *session) { /* Verify IV len is either 0 or 8 */ - if (!((0 == param->iv.length) || (8 == param->iv.length))) + if (!((0 == session->p.iv.length) || (8 == session->p.iv.length))) return -1;
/* Set function */ - if (ODP_CRYPTO_OP_ENCODE == param->op) + if (ODP_CRYPTO_OP_ENCODE == session->p.op) session->cipher.func = des_encrypt; else session->cipher.func = des_decrypt;
/* Convert keys */ - DES_set_key((DES_cblock *)¶m->cipher_key.data[0], + DES_set_key((DES_cblock *)&session->p.cipher_key.data[0], &session->cipher.data.des.ks1); - DES_set_key((DES_cblock *)¶m->cipher_key.data[8], + DES_set_key((DES_cblock *)&session->p.cipher_key.data[8], &session->cipher.data.des.ks2); - DES_set_key((DES_cblock *)¶m->cipher_key.data[16], + DES_set_key((DES_cblock *)&session->p.cipher_key.data[16], &session->cipher.data.des.ks3);
return 0; }
-static -int process_md5_param(odp_crypto_generic_session_t *session, - odp_crypto_session_param_t *param, - uint32_t bits) +static int process_md5_param(odp_crypto_generic_session_t *session, + uint32_t bits) { /* Set function */ - if (ODP_CRYPTO_OP_ENCODE == param->op) + if (ODP_CRYPTO_OP_ENCODE == session->p.op) session->auth.func = md5_gen; else session->auth.func = md5_check; @@ -605,18 +597,16 @@ int process_md5_param(odp_crypto_generic_session_t *session, session->auth.data.md5.bytes = bits / 8;
/* Convert keys */ - memcpy(session->auth.data.md5.key, param->auth_key.data, 16); + memcpy(session->auth.data.md5.key, session->p.auth_key.data, 16);
return 0; }
-static -int process_sha256_param(odp_crypto_generic_session_t *session, - odp_crypto_session_param_t *param, - uint32_t bits) +static int process_sha256_param(odp_crypto_generic_session_t *session, + uint32_t bits) { /* Set function */ - if (ODP_CRYPTO_OP_ENCODE == param->op) + if (ODP_CRYPTO_OP_ENCODE == session->p.op) session->auth.func = sha256_gen; else session->auth.func = sha256_check; @@ -625,7 +615,7 @@ int process_sha256_param(odp_crypto_generic_session_t *session, session->auth.data.sha256.bytes = bits / 8;
/* Convert keys */ - memcpy(session->auth.data.sha256.key, param->auth_key.data, 32); + memcpy(session->auth.data.sha256.key, session->p.auth_key.data, 32);
return 0; } @@ -638,16 +628,23 @@ int odp_crypto_capability(odp_crypto_capability_t *capa) /* Initialize crypto capability structure */ memset(capa, 0, sizeof(odp_crypto_capability_t));
- capa->ciphers.bit.null = 1; - capa->ciphers.bit.des = 1; - capa->ciphers.bit.trides_cbc = 1; - capa->ciphers.bit.aes128_cbc = 1; - capa->ciphers.bit.aes128_gcm = 1; + capa->ciphers.bit.null = 1; + capa->ciphers.bit.des = 1; + capa->ciphers.bit.trides_cbc = 1; + capa->ciphers.bit.aes_cbc = 1; + capa->ciphers.bit.aes_gcm = 1; + + capa->auths.bit.null = 1; + capa->auths.bit.md5_hmac = 1; + capa->auths.bit.sha256_hmac = 1; + capa->auths.bit.aes_gcm = 1;
- capa->auths.bit.null = 1; - capa->auths.bit.md5_96 = 1; - capa->auths.bit.sha256_128 = 1; - capa->auths.bit.aes128_gcm = 1; + /* Deprecated */ + capa->ciphers.bit.aes128_cbc = 1; + capa->ciphers.bit.aes128_gcm = 1; + capa->auths.bit.md5_96 = 1; + capa->auths.bit.sha256_128 = 1; + capa->auths.bit.aes128_gcm = 1;
capa->max_sessions = MAX_SESSIONS;
@@ -749,21 +746,26 @@ odp_crypto_session_create(odp_crypto_session_param_t *param, return -1; }
+ /* Copy parameters */ + session->p = *param; + + /* Copy IV data */ + if (session->p.iv.data) { + if (session->p.iv.length > MAX_IV_LEN) { + ODP_DBG("Maximum IV length exceeded\n"); + return -1; + } + + memcpy(session->cipher.iv_data, session->p.iv.data, + session->p.iv.length); + } + /* Derive order */ if (ODP_CRYPTO_OP_ENCODE == param->op) session->do_cipher_first = param->auth_cipher_text; else session->do_cipher_first = !param->auth_cipher_text;
- /* Copy stuff over */ - session->op = param->op; - session->compl_queue = param->compl_queue; - session->cipher.alg = param->cipher_alg; - session->cipher.iv.data = param->iv.data; - session->cipher.iv.len = param->iv.length; - session->auth.alg = param->auth_alg; - session->output_pool = param->output_pool; - /* Process based on cipher */ switch (param->cipher_alg) { case ODP_CIPHER_ALG_NULL: @@ -772,19 +774,23 @@ odp_crypto_session_create(odp_crypto_session_param_t *param, break; case ODP_CIPHER_ALG_DES: case ODP_CIPHER_ALG_3DES_CBC: - rc = process_des_param(session, param); + rc = process_des_param(session); break; + case ODP_CIPHER_ALG_AES_CBC: + /* deprecated */ case ODP_CIPHER_ALG_AES128_CBC: - rc = process_aes_param(session, param); + rc = process_aes_param(session); break; + case ODP_CIPHER_ALG_AES_GCM: + /* deprecated */ case ODP_CIPHER_ALG_AES128_GCM: /* AES-GCM requires to do both auth and * cipher at the same time */ - if (param->auth_alg != ODP_AUTH_ALG_AES128_GCM) { + if (param->auth_alg == ODP_AUTH_ALG_AES_GCM || + param->auth_alg == ODP_AUTH_ALG_AES128_GCM) + rc = process_aes_gcm_param(session); + else rc = -1; - break; - } - rc = process_aes_gcm_param(session, param); break; default: rc = -1; @@ -802,21 +808,28 @@ odp_crypto_session_create(odp_crypto_session_param_t *param, session->auth.func = null_crypto_routine; rc = 0; break; + case ODP_AUTH_ALG_MD5_HMAC: + /* deprecated */ case ODP_AUTH_ALG_MD5_96: - rc = process_md5_param(session, param, 96); + rc = process_md5_param(session, 96); break; + case ODP_AUTH_ALG_SHA256_HMAC: + /* deprecated */ case ODP_AUTH_ALG_SHA256_128: - rc = process_sha256_param(session, param, 128); + rc = process_sha256_param(session, 128); break; + case ODP_AUTH_ALG_AES_GCM: + /* deprecated */ case ODP_AUTH_ALG_AES128_GCM: /* AES-GCM requires to do both auth and * cipher at the same time */ - if (param->cipher_alg != ODP_CIPHER_ALG_AES128_GCM) { + if (param->cipher_alg == ODP_CIPHER_ALG_AES_GCM || + param->cipher_alg == ODP_CIPHER_ALG_AES128_GCM) { + session->auth.func = null_crypto_routine; + rc = 0; + } else { rc = -1; - break; } - session->auth.func = null_crypto_routine; - rc = 0; break; default: rc = -1; @@ -838,7 +851,8 @@ int odp_crypto_session_destroy(odp_crypto_session_t session) odp_crypto_generic_session_t *generic;
generic = (odp_crypto_generic_session_t *)(intptr_t)session; - if (generic->cipher.alg == ODP_CIPHER_ALG_AES128_GCM) + if (generic->p.cipher_alg == ODP_CIPHER_ALG_AES128_GCM || + generic->p.cipher_alg == ODP_CIPHER_ALG_AES_GCM) EVP_CIPHER_CTX_free(generic->cipher.data.aes_gcm.ctx); memset(generic, 0, sizeof(*generic)); free_session(generic); @@ -859,8 +873,8 @@ odp_crypto_operation(odp_crypto_op_param_t *param,
/* Resolve output buffer */ if (ODP_PACKET_INVALID == param->out_pkt && - ODP_POOL_INVALID != session->output_pool) - param->out_pkt = odp_packet_alloc(session->output_pool, + ODP_POOL_INVALID != session->p.output_pool) + param->out_pkt = odp_packet_alloc(session->p.output_pool, odp_packet_len(param->pkt));
if (odp_unlikely(ODP_PACKET_INVALID == param->out_pkt)) { @@ -900,7 +914,7 @@ odp_crypto_operation(odp_crypto_op_param_t *param, (rc_auth == ODP_CRYPTO_ALG_ERR_NONE);
/* If specified during creation post event to completion queue */ - if (ODP_QUEUE_INVALID != session->compl_queue) { + if (ODP_QUEUE_INVALID != session->p.compl_queue) { odp_event_t completion_event; odp_crypto_generic_op_result_t *op_result;
@@ -913,7 +927,7 @@ odp_crypto_operation(odp_crypto_op_param_t *param, op_result = get_op_result_from_event(completion_event); op_result->magic = OP_RESULT_MAGIC; op_result->result = local_result; - if (odp_queue_enq(session->compl_queue, completion_event)) { + if (odp_queue_enq(session->p.compl_queue, completion_event)) { odp_event_free(completion_event); return -1; }
commit e91877df47118468e940a58047d94fe4195e4b1e Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 8 16:25:58 2016 +0200
linux-gen: crypto: add algo capability functions
Implemented cipher and authentication algorithm capability functions.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Nikhil Agarwal nikhil.agarwal@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_crypto.c b/platform/linux-generic/odp_crypto.c index 70d3a97..44b8e06 100644 --- a/platform/linux-generic/odp_crypto.c +++ b/platform/linux-generic/odp_crypto.c @@ -27,6 +27,37 @@
#define MAX_SESSIONS 32
+/* + * Cipher algorithm capabilities + * + * Keep sorted: first by key length, then by IV length + */ +static const odp_crypto_cipher_capability_t cipher_capa_des[] = { +{.key_len = 24, .iv_len = 8} }; + +static const odp_crypto_cipher_capability_t cipher_capa_trides_cbc[] = { +{.key_len = 24, .iv_len = 8} }; + +static const odp_crypto_cipher_capability_t cipher_capa_aes_cbc[] = { +{.key_len = 16, .iv_len = 16} }; + +static const odp_crypto_cipher_capability_t cipher_capa_aes_gcm[] = { +{.key_len = 16, .iv_len = 12} }; + +/* + * Authentication algorithm capabilities + * + * Keep sorted: first by digest length, then by key length + */ +static const odp_crypto_auth_capability_t auth_capa_md5_hmac[] = { +{.digest_len = 12, .key_len = 16, .aad_len = {.min = 0, .max = 0, .inc = 0} } }; + +static const odp_crypto_auth_capability_t auth_capa_sha256_hmac[] = { +{.digest_len = 16, .key_len = 32, .aad_len = {.min = 0, .max = 0, .inc = 0} } }; + +static const odp_crypto_auth_capability_t auth_capa_aes_gcm[] = { +{.digest_len = 16, .key_len = 0, .aad_len = {.min = 8, .max = 12, .inc = 4} } }; + typedef struct odp_crypto_global_s odp_crypto_global_t;
struct odp_crypto_global_s { @@ -623,6 +654,83 @@ int odp_crypto_capability(odp_crypto_capability_t *capa) return 0; }
+int odp_crypto_cipher_capability(odp_cipher_alg_t cipher, + odp_crypto_cipher_capability_t dst[], + int num_copy) +{ + const odp_crypto_cipher_capability_t *src; + int num; + int size = sizeof(odp_crypto_cipher_capability_t); + + switch (cipher) { + case ODP_CIPHER_ALG_NULL: + src = NULL; + num = 0; + break; + case ODP_CIPHER_ALG_DES: + src = cipher_capa_des; + num = sizeof(cipher_capa_des) / size; + break; + case ODP_CIPHER_ALG_3DES_CBC: + src = cipher_capa_trides_cbc; + num = sizeof(cipher_capa_trides_cbc) / size; + break; + case ODP_CIPHER_ALG_AES_CBC: + src = cipher_capa_aes_cbc; + num = sizeof(cipher_capa_aes_cbc) / size; + break; + case ODP_CIPHER_ALG_AES_GCM: + src = cipher_capa_aes_gcm; + num = sizeof(cipher_capa_aes_gcm) / size; + break; + default: + return -1; + } + + if (num < num_copy) + num_copy = num; + + memcpy(dst, src, num_copy * size); + + return num; +} + +int odp_crypto_auth_capability(odp_auth_alg_t auth, + odp_crypto_auth_capability_t dst[], int num_copy) +{ + const odp_crypto_auth_capability_t *src; + int num; + int size = sizeof(odp_crypto_auth_capability_t); + + switch (auth) { + case ODP_AUTH_ALG_NULL: + src = NULL; + num = 0; + break; + case ODP_AUTH_ALG_MD5_HMAC: + src = auth_capa_md5_hmac; + num = sizeof(auth_capa_md5_hmac) / size; + break; + case ODP_AUTH_ALG_SHA256_HMAC: + src = auth_capa_sha256_hmac; + num = sizeof(auth_capa_sha256_hmac) / size; + break; + case ODP_AUTH_ALG_AES_GCM: + src = auth_capa_aes_gcm; + num = sizeof(auth_capa_aes_gcm) / size; + break; + default: + return -1; + } + + if (num < num_copy) + num_copy = num; + + memcpy(dst, src, num_copy * size); + + return num; +} + int odp_crypto_session_create(odp_crypto_session_param_t *param, odp_crypto_session_t *session_out,
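Both functions return the total number of capability structures for the algorithm, which may exceed the copy count; a sketch of handling that, with an illustrative array size and helper name:

    #include <stdio.h>
    #include <odp_api.h>

    #define CAPA_ROOM 8   /* illustrative array size, not from the patch */

    /* List the supported digest/key/AAD combinations for one auth algorithm.
     * A return value larger than CAPA_ROOM means only the first CAPA_ROOM
     * entries were written; a second call with a larger array would fetch
     * the rest. */
    static void list_auth_capa(odp_auth_alg_t alg)
    {
        odp_crypto_auth_capability_t capa[CAPA_ROOM];
        int i, num;

        num = odp_crypto_auth_capability(alg, capa, CAPA_ROOM);
        if (num < 0)
            return;                 /* unsupported algorithm */
        if (num > CAPA_ROOM)
            num = CAPA_ROOM;

        for (i = 0; i < num; i++)
            printf("digest %u, key %u, aad %u..%u step %u\n",
                   (unsigned)capa[i].digest_len,
                   (unsigned)capa[i].key_len,
                   (unsigned)capa[i].aad_len.min,
                   (unsigned)capa[i].aad_len.max,
                   (unsigned)capa[i].aad_len.inc);
    }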
commit 4194d93ab8095ef850e332a1433d8d810b7418a1 Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 8 16:05:24 2016 +0200
api: crypto: decouple key length from algorithm enumeration
Enumerations for cipher and authentication algorithms grow fast if key and digest lengths are included in the enum. Decoupled the lengths from the algorithm names; the only exception is the SHA-2 family of authentication algorithms, which has an established naming convention that includes digest lengths (SHA-224, SHA-256, ...). Old enumerations are still functional but deprecated.
Algorithm-level capability functions provide a flexible way to handle all possible key/digest/IV length combinations.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Nikhil Agarwal nikhil.agarwal@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/crypto.h b/include/odp/api/spec/crypto.h index f24f527..4b94824 100644 --- a/include/odp/api/spec/crypto.h +++ b/include/odp/api/spec/crypto.h @@ -65,14 +65,28 @@ typedef enum { typedef enum { /** No cipher algorithm specified */ ODP_CIPHER_ALG_NULL, + /** DES */ ODP_CIPHER_ALG_DES, + /** Triple DES with cipher block chaining */ ODP_CIPHER_ALG_3DES_CBC, - /** AES128 with cipher block chaining */ + + /** AES with cipher block chaining */ + ODP_CIPHER_ALG_AES_CBC, + + /** AES in Galois/Counter Mode + * + * @note Must be paired with cipher ODP_AUTH_ALG_AES_GCM + */ + ODP_CIPHER_ALG_AES_GCM, + + /** @deprecated Use ODP_CIPHER_ALG_AES_CBC instead */ ODP_CIPHER_ALG_AES128_CBC, - /** AES128 in Galois/Counter Mode */ - ODP_CIPHER_ALG_AES128_GCM, + + /** @deprecated Use ODP_CIPHER_ALG_AES_GCM instead */ + ODP_CIPHER_ALG_AES128_GCM + } odp_cipher_alg_t;
/** @@ -81,12 +95,33 @@ typedef enum { typedef enum { /** No authentication algorithm specified */ ODP_AUTH_ALG_NULL, - /** HMAC-MD5 with 96 bit key */ + + /** HMAC-MD5 + * + * MD5 algorithm in HMAC mode + */ + ODP_AUTH_ALG_MD5_HMAC, + + /** HMAC-SHA-256 + * + * SHA-256 algorithm in HMAC mode + */ + ODP_AUTH_ALG_SHA256_HMAC, + + /** AES in Galois/Counter Mode + * + * @note Must be paired with cipher ODP_CIPHER_ALG_AES_GCM + */ + ODP_AUTH_ALG_AES_GCM, + + /** @deprecated Use ODP_AUTH_ALG_MD5_HMAC instead */ ODP_AUTH_ALG_MD5_96, - /** SHA256 with 128 bit key */ + + /** @deprecated Use ODP_AUTH_ALG_SHA256_HMAC instead */ ODP_AUTH_ALG_SHA256_128, - /** AES128 in Galois/Counter Mode */ - ODP_AUTH_ALG_AES128_GCM, + + /** @deprecated Use ODP_AUTH_ALG_AES_GCM instead */ + ODP_AUTH_ALG_AES128_GCM } odp_auth_alg_t;
/** @@ -96,19 +131,25 @@ typedef union odp_crypto_cipher_algos_t { /** Cipher algorithms */ struct { /** ODP_CIPHER_ALG_NULL */ - uint32_t null : 1; + uint32_t null : 1;
/** ODP_CIPHER_ALG_DES */ - uint32_t des : 1; + uint32_t des : 1;
/** ODP_CIPHER_ALG_3DES_CBC */ - uint32_t trides_cbc : 1; + uint32_t trides_cbc : 1; + + /** ODP_CIPHER_ALG_AES_CBC */ + uint32_t aes_cbc : 1;
- /** ODP_CIPHER_ALG_AES128_CBC */ - uint32_t aes128_cbc : 1; + /** ODP_CIPHER_ALG_AES_GCM */ + uint32_t aes_gcm : 1;
- /** ODP_CIPHER_ALG_AES128_GCM */ - uint32_t aes128_gcm : 1; + /** @deprecated Use aes_cbc instead */ + uint32_t aes128_cbc : 1; + + /** @deprecated Use aes_gcm instead */ + uint32_t aes128_gcm : 1; } bit;
/** All bits of the bit field structure @@ -125,16 +166,25 @@ typedef union odp_crypto_auth_algos_t { /** Authentication algorithms */ struct { /** ODP_AUTH_ALG_NULL */ - uint32_t null : 1; + uint32_t null : 1; + + /** ODP_AUTH_ALG_MD5_HMAC */ + uint32_t md5_hmac : 1; + + /** ODP_AUTH_ALG_SHA256_HMAC */ + uint32_t sha256_hmac : 1;
- /** ODP_AUTH_ALG_MD5_96 */ - uint32_t md5_96 : 1; + /** ODP_AUTH_ALG_AES_GCM */ + uint32_t aes_gcm : 1;
- /** ODP_AUTH_ALG_SHA256_128 */ - uint32_t sha256_128 : 1; + /** @deprecated Use md5_hmac instead */ + uint32_t md5_96 : 1;
- /** ODP_AUTH_ALG_AES128_GCM */ - uint32_t aes128_gcm : 1; + /** @deprecated Use sha256_hmac instead */ + uint32_t sha256_128 : 1; + + /** @deprecated Use aes_gcm instead */ + uint32_t aes128_gcm : 1; } bit;
/** All bits of the bit field structure @@ -341,6 +391,43 @@ typedef struct odp_crypto_capability_t { } odp_crypto_capability_t;
/** + * Cipher algorithm capabilities + */ +typedef struct odp_crypto_cipher_capability_t { + /** Key length in bytes */ + uint32_t key_len; + + /** IV length in bytes */ + uint32_t iv_len; + +} odp_crypto_cipher_capability_t; + +/** + * Authentication algorithm capabilities + */ +typedef struct odp_crypto_auth_capability_t { + /** Digest length in bytes */ + uint32_t digest_len; + + /** Key length in bytes */ + uint32_t key_len; + + /** Additional Authenticated Data (AAD) lengths */ + struct { + /** Minimum AAD length in bytes */ + uint32_t min; + + /** Maximum AAD length in bytes */ + uint32_t max; + + /** Increment of supported lengths between min and max + * (in bytes) */ + uint32_t inc; + } aad_len; + +} odp_crypto_auth_capability_t; + +/** * Query crypto capabilities * * Outputs crypto capabilities on success. @@ -353,6 +440,45 @@ typedef struct odp_crypto_capability_t { int odp_crypto_capability(odp_crypto_capability_t *capa);
/** + * Query supported cipher algorithm capabilities + * + * Outputs all supported configuration options for the algorithm. Output is + * sorted (from the smallest to the largest) first by key length, then by IV + * length. + * + * @param cipher Cipher algorithm + * @param[out] capa Array of capability structures for output + * @param num Maximum number of capability structures to output + * + * @return Number of capability structures for the algorithm. If this is larger + * than 'num', only 'num' first structures were output and application + * may call the function again with a larger value of 'num'. + * @retval <0 on failure + */ +int odp_crypto_cipher_capability(odp_cipher_alg_t cipher, + odp_crypto_cipher_capability_t capa[], + int num); + +/** + * Query supported authentication algorithm capabilities + * + * Outputs all supported configuration options for the algorithm. Output is + * sorted (from the smallest to the largest) first by digest length, then by key + * length. + * + * @param auth Authentication algorithm + * @param[out] capa Array of capability structures for output + * @param num Maximum number of capability structures to output + * + * @return Number of capability structures for the algorithm. If this is larger + * than 'num', only 'num' first structures were output and application + * may call the function again with a larger value of 'num'. + * @retval <0 on failure + */ +int odp_crypto_auth_capability(odp_auth_alg_t auth, + odp_crypto_auth_capability_t capa[], int num); + +/** * Crypto session creation (synchronous) * * @param param Session parameters
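Since the deprecated names stay in the enums and bit fields, an application built against this revision can simply prefer the new bits; a minimal sketch (helper name illustrative):

    #include <odp_api.h>

    /* Check the new, length-free capability bits before picking algorithms.
     * The deprecated aes128_* / *_96 / *_128 bits remain available for
     * older applications. */
    static int aes_cbc_with_sha256_hmac_supported(void)
    {
        odp_crypto_capability_t capa;

        if (odp_crypto_capability(&capa))
            return 0;

        return capa.ciphers.bit.aes_cbc && capa.auths.bit.sha256_hmac;
    }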
commit f723bcef66945acb0738acc8a40b8ebd5851b84d Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 8 16:05:23 2016 +0200
linux-gen: crypto: rename params to param
Use new _param_t type names instead of _params_t.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Nikhil Agarwal nikhil.agarwal@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/include/odp_crypto_internal.h b/platform/linux-generic/include/odp_crypto_internal.h index 7b104af..7b4eb61 100644 --- a/platform/linux-generic/include/odp_crypto_internal.h +++ b/platform/linux-generic/include/odp_crypto_internal.h @@ -23,7 +23,7 @@ typedef struct odp_crypto_generic_session odp_crypto_generic_session_t; * Algorithm handler function prototype */ typedef -odp_crypto_alg_err_t (*crypto_func_t)(odp_crypto_op_params_t *params, +odp_crypto_alg_err_t (*crypto_func_t)(odp_crypto_op_param_t *param, odp_crypto_generic_session_t *session);
/** diff --git a/platform/linux-generic/odp_crypto.c b/platform/linux-generic/odp_crypto.c index 7e686ff..70d3a97 100644 --- a/platform/linux-generic/odp_crypto.c +++ b/platform/linux-generic/odp_crypto.c @@ -69,24 +69,24 @@ void free_session(odp_crypto_generic_session_t *session) }
static odp_crypto_alg_err_t -null_crypto_routine(odp_crypto_op_params_t *params ODP_UNUSED, +null_crypto_routine(odp_crypto_op_param_t *param ODP_UNUSED, odp_crypto_generic_session_t *session ODP_UNUSED) { return ODP_CRYPTO_ALG_ERR_NONE; }
static -odp_crypto_alg_err_t md5_gen(odp_crypto_op_params_t *params, +odp_crypto_alg_err_t md5_gen(odp_crypto_op_param_t *param, odp_crypto_generic_session_t *session) { - uint8_t *data = odp_packet_data(params->out_pkt); + uint8_t *data = odp_packet_data(param->out_pkt); uint8_t *icv = data; - uint32_t len = params->auth_range.length; + uint32_t len = param->auth_range.length; uint8_t hash[EVP_MAX_MD_SIZE];
/* Adjust pointer for beginning of area to auth */ - data += params->auth_range.offset; - icv += params->hash_result_offset; + data += param->auth_range.offset; + icv += param->hash_result_offset;
/* Hash it */ HMAC(EVP_md5(), @@ -104,19 +104,19 @@ odp_crypto_alg_err_t md5_gen(odp_crypto_op_params_t *params, }
static -odp_crypto_alg_err_t md5_check(odp_crypto_op_params_t *params, +odp_crypto_alg_err_t md5_check(odp_crypto_op_param_t *param, odp_crypto_generic_session_t *session) { - uint8_t *data = odp_packet_data(params->out_pkt); + uint8_t *data = odp_packet_data(param->out_pkt); uint8_t *icv = data; - uint32_t len = params->auth_range.length; + uint32_t len = param->auth_range.length; uint32_t bytes = session->auth.data.md5.bytes; uint8_t hash_in[EVP_MAX_MD_SIZE]; uint8_t hash_out[EVP_MAX_MD_SIZE];
/* Adjust pointer for beginning of area to auth */ - data += params->auth_range.offset; - icv += params->hash_result_offset; + data += param->auth_range.offset; + icv += param->hash_result_offset;
/* Copy current value out and clear it before authentication */ memset(hash_in, 0, sizeof(hash_in)); @@ -142,17 +142,17 @@ odp_crypto_alg_err_t md5_check(odp_crypto_op_params_t *params, }
static -odp_crypto_alg_err_t sha256_gen(odp_crypto_op_params_t *params, +odp_crypto_alg_err_t sha256_gen(odp_crypto_op_param_t *param, odp_crypto_generic_session_t *session) { - uint8_t *data = odp_packet_data(params->out_pkt); + uint8_t *data = odp_packet_data(param->out_pkt); uint8_t *icv = data; - uint32_t len = params->auth_range.length; + uint32_t len = param->auth_range.length; uint8_t hash[EVP_MAX_MD_SIZE];
/* Adjust pointer for beginning of area to auth */ - data += params->auth_range.offset; - icv += params->hash_result_offset; + data += param->auth_range.offset; + icv += param->hash_result_offset;
/* Hash it */ HMAC(EVP_sha256(), @@ -170,19 +170,19 @@ odp_crypto_alg_err_t sha256_gen(odp_crypto_op_params_t *params, }
static -odp_crypto_alg_err_t sha256_check(odp_crypto_op_params_t *params, +odp_crypto_alg_err_t sha256_check(odp_crypto_op_param_t *param, odp_crypto_generic_session_t *session) { - uint8_t *data = odp_packet_data(params->out_pkt); + uint8_t *data = odp_packet_data(param->out_pkt); uint8_t *icv = data; - uint32_t len = params->auth_range.length; + uint32_t len = param->auth_range.length; uint32_t bytes = session->auth.data.sha256.bytes; uint8_t hash_in[EVP_MAX_MD_SIZE]; uint8_t hash_out[EVP_MAX_MD_SIZE];
/* Adjust pointer for beginning of area to auth */ - data += params->auth_range.offset; - icv += params->hash_result_offset; + data += param->auth_range.offset; + icv += param->hash_result_offset;
/* Copy current value out and clear it before authentication */ memset(hash_in, 0, sizeof(hash_in)); @@ -208,16 +208,16 @@ odp_crypto_alg_err_t sha256_check(odp_crypto_op_params_t *params, }
static -odp_crypto_alg_err_t aes_encrypt(odp_crypto_op_params_t *params, +odp_crypto_alg_err_t aes_encrypt(odp_crypto_op_param_t *param, odp_crypto_generic_session_t *session) { - uint8_t *data = odp_packet_data(params->out_pkt); - uint32_t len = params->cipher_range.length; + uint8_t *data = odp_packet_data(param->out_pkt); + uint32_t len = param->cipher_range.length; unsigned char iv_enc[AES_BLOCK_SIZE]; void *iv_ptr;
- if (params->override_iv_ptr) - iv_ptr = params->override_iv_ptr; + if (param->override_iv_ptr) + iv_ptr = param->override_iv_ptr; else if (session->cipher.iv.data) iv_ptr = session->cipher.iv.data; else @@ -231,7 +231,7 @@ odp_crypto_alg_err_t aes_encrypt(odp_crypto_op_params_t *params, memcpy(iv_enc, iv_ptr, AES_BLOCK_SIZE);
/* Adjust pointer for beginning of area to cipher */ - data += params->cipher_range.offset; + data += param->cipher_range.offset; /* Encrypt it */ AES_cbc_encrypt(data, data, len, &session->cipher.data.aes.key, iv_enc, AES_ENCRYPT); @@ -240,16 +240,16 @@ odp_crypto_alg_err_t aes_encrypt(odp_crypto_op_params_t *params, }
static -odp_crypto_alg_err_t aes_decrypt(odp_crypto_op_params_t *params, +odp_crypto_alg_err_t aes_decrypt(odp_crypto_op_param_t *param, odp_crypto_generic_session_t *session) { - uint8_t *data = odp_packet_data(params->out_pkt); - uint32_t len = params->cipher_range.length; + uint8_t *data = odp_packet_data(param->out_pkt); + uint32_t len = param->cipher_range.length; unsigned char iv_enc[AES_BLOCK_SIZE]; void *iv_ptr;
- if (params->override_iv_ptr) - iv_ptr = params->override_iv_ptr; + if (param->override_iv_ptr) + iv_ptr = param->override_iv_ptr; else if (session->cipher.iv.data) iv_ptr = session->cipher.iv.data; else @@ -263,7 +263,7 @@ odp_crypto_alg_err_t aes_decrypt(odp_crypto_op_params_t *params, memcpy(iv_enc, iv_ptr, AES_BLOCK_SIZE);
/* Adjust pointer for beginning of area to cipher */ - data += params->cipher_range.offset; + data += param->cipher_range.offset; /* Encrypt it */ AES_cbc_encrypt(data, data, len, &session->cipher.data.aes.key, iv_enc, AES_DECRYPT); @@ -272,21 +272,21 @@ odp_crypto_alg_err_t aes_decrypt(odp_crypto_op_params_t *params, }
static -int process_aes_params(odp_crypto_generic_session_t *session, - odp_crypto_session_params_t *params) +int process_aes_param(odp_crypto_generic_session_t *session, + odp_crypto_session_param_t *param) { /* Verify IV len is either 0 or 16 */ - if (!((0 == params->iv.length) || (16 == params->iv.length))) + if (!((0 == param->iv.length) || (16 == param->iv.length))) return -1;
/* Set function */ - if (ODP_CRYPTO_OP_ENCODE == params->op) { + if (ODP_CRYPTO_OP_ENCODE == param->op) { session->cipher.func = aes_encrypt; - AES_set_encrypt_key(params->cipher_key.data, 128, + AES_set_encrypt_key(param->cipher_key.data, 128, &session->cipher.data.aes.key); } else { session->cipher.func = aes_decrypt; - AES_set_decrypt_key(params->cipher_key.data, 128, + AES_set_decrypt_key(param->cipher_key.data, 128, &session->cipher.data.aes.key); }
@@ -294,30 +294,30 @@ int process_aes_params(odp_crypto_generic_session_t *session, }
static -odp_crypto_alg_err_t aes_gcm_encrypt(odp_crypto_op_params_t *params, +odp_crypto_alg_err_t aes_gcm_encrypt(odp_crypto_op_param_t *param, odp_crypto_generic_session_t *session) { - uint8_t *data = odp_packet_data(params->out_pkt); - uint32_t plain_len = params->cipher_range.length; - uint8_t *aad_head = data + params->auth_range.offset; - uint8_t *aad_tail = data + params->cipher_range.offset + - params->cipher_range.length; - uint32_t auth_len = params->auth_range.length; + uint8_t *data = odp_packet_data(param->out_pkt); + uint32_t plain_len = param->cipher_range.length; + uint8_t *aad_head = data + param->auth_range.offset; + uint8_t *aad_tail = data + param->cipher_range.offset + + param->cipher_range.length; + uint32_t auth_len = param->auth_range.length; unsigned char iv_enc[AES_BLOCK_SIZE]; void *iv_ptr; - uint8_t *tag = data + params->hash_result_offset; + uint8_t *tag = data + param->hash_result_offset;
- if (params->override_iv_ptr) - iv_ptr = params->override_iv_ptr; + if (param->override_iv_ptr) + iv_ptr = param->override_iv_ptr; else if (session->cipher.iv.data) iv_ptr = session->cipher.iv.data; else return ODP_CRYPTO_ALG_ERR_IV_INVALID;
/* All cipher data must be part of the authentication */ - if (params->auth_range.offset > params->cipher_range.offset || - params->auth_range.offset + auth_len < - params->cipher_range.offset + plain_len) + if (param->auth_range.offset > param->cipher_range.offset || + param->auth_range.offset + auth_len < + param->cipher_range.offset + plain_len) return ODP_CRYPTO_ALG_ERR_DATA_SIZE;
/* @@ -328,7 +328,7 @@ odp_crypto_alg_err_t aes_gcm_encrypt(odp_crypto_op_params_t *params, memcpy(iv_enc, iv_ptr, AES_BLOCK_SIZE);
/* Adjust pointer for beginning of area to cipher/auth */ - uint8_t *plaindata = data + params->cipher_range.offset; + uint8_t *plaindata = data + param->cipher_range.offset;
/* Encrypt it */ EVP_CIPHER_CTX *ctx = session->cipher.data.aes_gcm.ctx; @@ -359,30 +359,30 @@ odp_crypto_alg_err_t aes_gcm_encrypt(odp_crypto_op_params_t *params, }
static -odp_crypto_alg_err_t aes_gcm_decrypt(odp_crypto_op_params_t *params, +odp_crypto_alg_err_t aes_gcm_decrypt(odp_crypto_op_param_t *param, odp_crypto_generic_session_t *session) { - uint8_t *data = odp_packet_data(params->out_pkt); - uint32_t cipher_len = params->cipher_range.length; - uint8_t *aad_head = data + params->auth_range.offset; - uint8_t *aad_tail = data + params->cipher_range.offset + - params->cipher_range.length; - uint32_t auth_len = params->auth_range.length; + uint8_t *data = odp_packet_data(param->out_pkt); + uint32_t cipher_len = param->cipher_range.length; + uint8_t *aad_head = data + param->auth_range.offset; + uint8_t *aad_tail = data + param->cipher_range.offset + + param->cipher_range.length; + uint32_t auth_len = param->auth_range.length; unsigned char iv_enc[AES_BLOCK_SIZE]; void *iv_ptr; - uint8_t *tag = data + params->hash_result_offset; + uint8_t *tag = data + param->hash_result_offset;
- if (params->override_iv_ptr) - iv_ptr = params->override_iv_ptr; + if (param->override_iv_ptr) + iv_ptr = param->override_iv_ptr; else if (session->cipher.iv.data) iv_ptr = session->cipher.iv.data; else return ODP_CRYPTO_ALG_ERR_IV_INVALID;
/* All cipher data must be part of the authentication */ - if (params->auth_range.offset > params->cipher_range.offset || - params->auth_range.offset + auth_len < - params->cipher_range.offset + cipher_len) + if (param->auth_range.offset > param->cipher_range.offset || + param->auth_range.offset + auth_len < + param->cipher_range.offset + cipher_len) return ODP_CRYPTO_ALG_ERR_DATA_SIZE;
/* @@ -393,7 +393,7 @@ odp_crypto_alg_err_t aes_gcm_decrypt(odp_crypto_op_params_t *params, memcpy(iv_enc, iv_ptr, AES_BLOCK_SIZE);
/* Adjust pointer for beginning of area to cipher/auth */ - uint8_t *cipherdata = data + params->cipher_range.offset; + uint8_t *cipherdata = data + param->cipher_range.offset; /* Encrypt it */ EVP_CIPHER_CTX *ctx = session->cipher.data.aes_gcm.ctx; int plain_len = 0; @@ -425,18 +425,18 @@ odp_crypto_alg_err_t aes_gcm_decrypt(odp_crypto_op_params_t *params, }
static -int process_aes_gcm_params(odp_crypto_generic_session_t *session, - odp_crypto_session_params_t *params) +int process_aes_gcm_param(odp_crypto_generic_session_t *session, + odp_crypto_session_param_t *param) { /* Verify Key len is 16 */ - if (params->cipher_key.length != 16) + if (param->cipher_key.length != 16) return -1;
/* Set function */ EVP_CIPHER_CTX *ctx = session->cipher.data.aes_gcm.ctx = EVP_CIPHER_CTX_new();
- if (ODP_CRYPTO_OP_ENCODE == params->op) { + if (ODP_CRYPTO_OP_ENCODE == param->op) { session->cipher.func = aes_gcm_encrypt; EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, NULL, NULL); } else { @@ -445,29 +445,29 @@ int process_aes_gcm_params(odp_crypto_generic_session_t *session, }
EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN, - params->iv.length, NULL); - if (ODP_CRYPTO_OP_ENCODE == params->op) { + param->iv.length, NULL); + if (ODP_CRYPTO_OP_ENCODE == param->op) { EVP_EncryptInit_ex(ctx, NULL, NULL, - params->cipher_key.data, NULL); + param->cipher_key.data, NULL); } else { EVP_DecryptInit_ex(ctx, NULL, NULL, - params->cipher_key.data, NULL); + param->cipher_key.data, NULL); }
return 0; }
static -odp_crypto_alg_err_t des_encrypt(odp_crypto_op_params_t *params, +odp_crypto_alg_err_t des_encrypt(odp_crypto_op_param_t *param, odp_crypto_generic_session_t *session) { - uint8_t *data = odp_packet_data(params->out_pkt); - uint32_t len = params->cipher_range.length; + uint8_t *data = odp_packet_data(param->out_pkt); + uint32_t len = param->cipher_range.length; DES_cblock iv; void *iv_ptr;
- if (params->override_iv_ptr) - iv_ptr = params->override_iv_ptr; + if (param->override_iv_ptr) + iv_ptr = param->override_iv_ptr; else if (session->cipher.iv.data) iv_ptr = session->cipher.iv.data; else @@ -481,7 +481,7 @@ odp_crypto_alg_err_t des_encrypt(odp_crypto_op_params_t *params, memcpy(iv, iv_ptr, sizeof(iv));
/* Adjust pointer for beginning of area to cipher */ - data += params->cipher_range.offset; + data += param->cipher_range.offset; /* Encrypt it */ DES_ede3_cbc_encrypt(data, data, @@ -496,16 +496,16 @@ odp_crypto_alg_err_t des_encrypt(odp_crypto_op_params_t *params, }
static -odp_crypto_alg_err_t des_decrypt(odp_crypto_op_params_t *params, +odp_crypto_alg_err_t des_decrypt(odp_crypto_op_param_t *param, odp_crypto_generic_session_t *session) { - uint8_t *data = odp_packet_data(params->out_pkt); - uint32_t len = params->cipher_range.length; + uint8_t *data = odp_packet_data(param->out_pkt); + uint32_t len = param->cipher_range.length; DES_cblock iv; void *iv_ptr;
- if (params->override_iv_ptr) - iv_ptr = params->override_iv_ptr; + if (param->override_iv_ptr) + iv_ptr = param->override_iv_ptr; else if (session->cipher.iv.data) iv_ptr = session->cipher.iv.data; else @@ -519,7 +519,7 @@ odp_crypto_alg_err_t des_decrypt(odp_crypto_op_params_t *params, memcpy(iv, iv_ptr, sizeof(iv));
/* Adjust pointer for beginning of area to cipher */ - data += params->cipher_range.offset; + data += param->cipher_range.offset;
/* Decrypt it */ DES_ede3_cbc_encrypt(data, @@ -535,37 +535,37 @@ odp_crypto_alg_err_t des_decrypt(odp_crypto_op_params_t *params, }
static -int process_des_params(odp_crypto_generic_session_t *session, - odp_crypto_session_params_t *params) +int process_des_param(odp_crypto_generic_session_t *session, + odp_crypto_session_param_t *param) { /* Verify IV len is either 0 or 8 */ - if (!((0 == params->iv.length) || (8 == params->iv.length))) + if (!((0 == param->iv.length) || (8 == param->iv.length))) return -1;
/* Set function */ - if (ODP_CRYPTO_OP_ENCODE == params->op) + if (ODP_CRYPTO_OP_ENCODE == param->op) session->cipher.func = des_encrypt; else session->cipher.func = des_decrypt;
/* Convert keys */ - DES_set_key((DES_cblock *)¶ms->cipher_key.data[0], + DES_set_key((DES_cblock *)¶m->cipher_key.data[0], &session->cipher.data.des.ks1); - DES_set_key((DES_cblock *)¶ms->cipher_key.data[8], + DES_set_key((DES_cblock *)¶m->cipher_key.data[8], &session->cipher.data.des.ks2); - DES_set_key((DES_cblock *)¶ms->cipher_key.data[16], + DES_set_key((DES_cblock *)¶m->cipher_key.data[16], &session->cipher.data.des.ks3);
return 0; }
static -int process_md5_params(odp_crypto_generic_session_t *session, - odp_crypto_session_params_t *params, - uint32_t bits) +int process_md5_param(odp_crypto_generic_session_t *session, + odp_crypto_session_param_t *param, + uint32_t bits) { /* Set function */ - if (ODP_CRYPTO_OP_ENCODE == params->op) + if (ODP_CRYPTO_OP_ENCODE == param->op) session->auth.func = md5_gen; else session->auth.func = md5_check; @@ -574,18 +574,18 @@ int process_md5_params(odp_crypto_generic_session_t *session, session->auth.data.md5.bytes = bits / 8;
/* Convert keys */ - memcpy(session->auth.data.md5.key, params->auth_key.data, 16); + memcpy(session->auth.data.md5.key, param->auth_key.data, 16);
return 0; }
static -int process_sha256_params(odp_crypto_generic_session_t *session, - odp_crypto_session_params_t *params, - uint32_t bits) +int process_sha256_param(odp_crypto_generic_session_t *session, + odp_crypto_session_param_t *param, + uint32_t bits) { /* Set function */ - if (ODP_CRYPTO_OP_ENCODE == params->op) + if (ODP_CRYPTO_OP_ENCODE == param->op) session->auth.func = sha256_gen; else session->auth.func = sha256_check; @@ -594,7 +594,7 @@ int process_sha256_params(odp_crypto_generic_session_t *session, session->auth.data.sha256.bytes = bits / 8;
/* Convert keys */ - memcpy(session->auth.data.sha256.key, params->auth_key.data, 32); + memcpy(session->auth.data.sha256.key, param->auth_key.data, 32);
return 0; } @@ -624,7 +624,7 @@ int odp_crypto_capability(odp_crypto_capability_t *capa) }
int -odp_crypto_session_create(odp_crypto_session_params_t *params, +odp_crypto_session_create(odp_crypto_session_param_t *param, odp_crypto_session_t *session_out, odp_crypto_ses_create_err_t *status) { @@ -642,41 +642,41 @@ odp_crypto_session_create(odp_crypto_session_params_t *params, }
/* Derive order */ - if (ODP_CRYPTO_OP_ENCODE == params->op) - session->do_cipher_first = params->auth_cipher_text; + if (ODP_CRYPTO_OP_ENCODE == param->op) + session->do_cipher_first = param->auth_cipher_text; else - session->do_cipher_first = !params->auth_cipher_text; + session->do_cipher_first = !param->auth_cipher_text;
/* Copy stuff over */ - session->op = params->op; - session->compl_queue = params->compl_queue; - session->cipher.alg = params->cipher_alg; - session->cipher.iv.data = params->iv.data; - session->cipher.iv.len = params->iv.length; - session->auth.alg = params->auth_alg; - session->output_pool = params->output_pool; + session->op = param->op; + session->compl_queue = param->compl_queue; + session->cipher.alg = param->cipher_alg; + session->cipher.iv.data = param->iv.data; + session->cipher.iv.len = param->iv.length; + session->auth.alg = param->auth_alg; + session->output_pool = param->output_pool;
/* Process based on cipher */ - switch (params->cipher_alg) { + switch (param->cipher_alg) { case ODP_CIPHER_ALG_NULL: session->cipher.func = null_crypto_routine; rc = 0; break; case ODP_CIPHER_ALG_DES: case ODP_CIPHER_ALG_3DES_CBC: - rc = process_des_params(session, params); + rc = process_des_param(session, param); break; case ODP_CIPHER_ALG_AES128_CBC: - rc = process_aes_params(session, params); + rc = process_aes_param(session, param); break; case ODP_CIPHER_ALG_AES128_GCM: /* AES-GCM requires to do both auth and * cipher at the same time */ - if (params->auth_alg != ODP_AUTH_ALG_AES128_GCM) { + if (param->auth_alg != ODP_AUTH_ALG_AES128_GCM) { rc = -1; break; } - rc = process_aes_gcm_params(session, params); + rc = process_aes_gcm_param(session, param); break; default: rc = -1; @@ -689,21 +689,21 @@ odp_crypto_session_create(odp_crypto_session_params_t *params, }
/* Process based on auth */ - switch (params->auth_alg) { + switch (param->auth_alg) { case ODP_AUTH_ALG_NULL: session->auth.func = null_crypto_routine; rc = 0; break; case ODP_AUTH_ALG_MD5_96: - rc = process_md5_params(session, params, 96); + rc = process_md5_param(session, param, 96); break; case ODP_AUTH_ALG_SHA256_128: - rc = process_sha256_params(session, params, 128); + rc = process_sha256_param(session, param, 128); break; case ODP_AUTH_ALG_AES128_GCM: /* AES-GCM requires to do both auth and * cipher at the same time */ - if (params->cipher_alg != ODP_CIPHER_ALG_AES128_GCM) { + if (param->cipher_alg != ODP_CIPHER_ALG_AES128_GCM) { rc = -1; break; } @@ -738,7 +738,7 @@ int odp_crypto_session_destroy(odp_crypto_session_t session) }
int -odp_crypto_operation(odp_crypto_op_params_t *params, +odp_crypto_operation(odp_crypto_op_param_t *param, odp_bool_t *posted, odp_crypto_op_result_t *result) { @@ -747,42 +747,42 @@ odp_crypto_operation(odp_crypto_op_params_t *params, odp_crypto_generic_session_t *session; odp_crypto_op_result_t local_result;
- session = (odp_crypto_generic_session_t *)(intptr_t)params->session; + session = (odp_crypto_generic_session_t *)(intptr_t)param->session;
/* Resolve output buffer */ - if (ODP_PACKET_INVALID == params->out_pkt && + if (ODP_PACKET_INVALID == param->out_pkt && ODP_POOL_INVALID != session->output_pool) - params->out_pkt = odp_packet_alloc(session->output_pool, - odp_packet_len(params->pkt)); + param->out_pkt = odp_packet_alloc(session->output_pool, + odp_packet_len(param->pkt));
- if (odp_unlikely(ODP_PACKET_INVALID == params->out_pkt)) { + if (odp_unlikely(ODP_PACKET_INVALID == param->out_pkt)) { ODP_DBG("Alloc failed.\n"); return -1; }
- if (params->pkt != params->out_pkt) { - (void)odp_packet_copy_from_pkt(params->out_pkt, + if (param->pkt != param->out_pkt) { + (void)odp_packet_copy_from_pkt(param->out_pkt, 0, - params->pkt, + param->pkt, 0, - odp_packet_len(params->pkt)); - _odp_packet_copy_md_to_packet(params->pkt, params->out_pkt); - odp_packet_free(params->pkt); - params->pkt = ODP_PACKET_INVALID; + odp_packet_len(param->pkt)); + _odp_packet_copy_md_to_packet(param->pkt, param->out_pkt); + odp_packet_free(param->pkt); + param->pkt = ODP_PACKET_INVALID; }
/* Invoke the functions */ if (session->do_cipher_first) { - rc_cipher = session->cipher.func(params, session); - rc_auth = session->auth.func(params, session); + rc_cipher = session->cipher.func(param, session); + rc_auth = session->auth.func(param, session); } else { - rc_auth = session->auth.func(params, session); - rc_cipher = session->cipher.func(params, session); + rc_auth = session->auth.func(param, session); + rc_cipher = session->cipher.func(param, session); }
/* Fill in result */ - local_result.ctx = params->ctx; - local_result.pkt = params->out_pkt; + local_result.ctx = param->ctx; + local_result.pkt = param->out_pkt; local_result.cipher_status.alg_err = rc_cipher; local_result.cipher_status.hw_err = ODP_CRYPTO_HW_ERR_NONE; local_result.auth_status.alg_err = rc_auth; @@ -797,7 +797,7 @@ odp_crypto_operation(odp_crypto_op_params_t *params, odp_crypto_generic_op_result_t *op_result;
/* Linux generic will always use packet for completion event */ - completion_event = odp_packet_to_event(params->out_pkt); + completion_event = odp_packet_to_event(param->out_pkt); _odp_buffer_event_type_set( odp_buffer_from_event(completion_event), ODP_EVENT_CRYPTO_COMPL);
commit fa4063b4104784bdc1c20fe3b519716e4413c245 Author: Petri Savolainen petri.savolainen@nokia.com Date: Thu Dec 8 16:25:32 2016 +0200
api: crypto: rename _params_t to _param_t
The common naming convention for parameter types is _param_t (without 's'). Old type names remain for backwards compatibility, but are deprecated.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Balasubramanian Manoharan bala.manoharan@linaro.org Reviewed-by: Nikhil Agarwal nikhil.agarwal@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
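Illustration only: a minimal sketch of how existing application code keeps compiling after this rename, since the old *_params_t names stay available as deprecated typedefs (see the spec diff below). It assumes the usual application header odp_api.h plus string.h; the helper name create_null_session() is made up for the example.

#include <odp_api.h>
#include <string.h>

static odp_crypto_session_t create_null_session(odp_queue_t compl_queue)
{
	odp_crypto_session_param_t param;             /* new name */
	odp_crypto_session_params_t *legacy = &param; /* deprecated alias, same type */
	odp_crypto_session_t session;
	odp_crypto_ses_create_err_t status;

	memset(&param, 0, sizeof(param));
	param.op          = ODP_CRYPTO_OP_ENCODE;
	param.cipher_alg  = ODP_CIPHER_ALG_NULL;
	param.auth_alg    = ODP_AUTH_ALG_NULL;
	param.compl_queue = compl_queue;
	(void)legacy; /* code still written against the old type keeps building */

	if (odp_crypto_session_create(&param, &session, &status))
		return ODP_CRYPTO_SESSION_INVALID;

	return session;
}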
diff --git a/include/odp/api/spec/crypto.h b/include/odp/api/spec/crypto.h index 0cb8814..f24f527 100644 --- a/include/odp/api/spec/crypto.h +++ b/include/odp/api/spec/crypto.h @@ -171,7 +171,7 @@ typedef struct odp_crypto_data_range { /** * Crypto API session creation parameters */ -typedef struct odp_crypto_session_params { +typedef struct odp_crypto_session_param_t { odp_crypto_op_t op; /**< Encode versus decode */ odp_bool_t auth_cipher_text; /**< Authenticate/cipher ordering */ odp_crypto_op_mode_t pref_mode; /**< Preferred sync vs async */ @@ -182,7 +182,10 @@ typedef struct odp_crypto_session_params { odp_crypto_key_t auth_key; /**< Authentication key */ odp_queue_t compl_queue; /**< Async mode completion event queue */ odp_pool_t output_pool; /**< Output buffer pool */ -} odp_crypto_session_params_t; +} odp_crypto_session_param_t; + +/** @deprecated Use odp_crypto_session_param_t instead */ +typedef odp_crypto_session_param_t odp_crypto_session_params_t;
/** * @var odp_crypto_session_params_t::auth_cipher_text @@ -209,7 +212,7 @@ typedef struct odp_crypto_session_params { /** * Crypto API per packet operation parameters */ -typedef struct odp_crypto_op_params { +typedef struct odp_crypto_op_param_t { odp_crypto_session_t session; /**< Session handle from creation */ void *ctx; /**< User context */ odp_packet_t pkt; /**< Input packet buffer */ @@ -218,7 +221,10 @@ typedef struct odp_crypto_op_params { uint32_t hash_result_offset; /**< Offset from start of packet buffer for hash result */ odp_crypto_data_range_t cipher_range; /**< Data range to apply cipher */ odp_crypto_data_range_t auth_range; /**< Data range to authenticate */ -} odp_crypto_op_params_t; +} odp_crypto_op_param_t; + +/** @deprecated Use odp_crypto_op_param_t instead */ +typedef odp_crypto_op_param_t odp_crypto_op_params_t;
/** * @var odp_crypto_op_params_t::pkt @@ -349,14 +355,14 @@ int odp_crypto_capability(odp_crypto_capability_t *capa); /** * Crypto session creation (synchronous) * - * @param params Session parameters + * @param param Session parameters * @param session Created session else ODP_CRYPTO_SESSION_INVALID * @param status Failure code if unsuccessful * * @retval 0 on success * @retval <0 on failure */ -int odp_crypto_session_create(odp_crypto_session_params_t *params, +int odp_crypto_session_create(odp_crypto_session_param_t *param, odp_crypto_session_t *session, odp_crypto_ses_create_err_t *status);
@@ -410,14 +416,14 @@ void odp_crypto_compl_free(odp_crypto_compl_t completion_event); * If "posted" returns TRUE the result will be delivered via the completion * queue specified when the session was created. * - * @param params Operation parameters + * @param param Operation parameters * @param posted Pointer to return posted, TRUE for async operation * @param result Results of operation (when posted returns FALSE) * * @retval 0 on success * @retval <0 on failure */ -int odp_crypto_operation(odp_crypto_op_params_t *params, +int odp_crypto_operation(odp_crypto_op_param_t *param, odp_bool_t *posted, odp_crypto_op_result_t *result);
commit 9c4d778148d514adf8586939123acdcdc022e8e5 Author: Christophe Milard christophe.milard@linaro.org Date: Fri Nov 25 15:39:33 2016 +0100
linux-gen: _fdserver: request sigterm if parent dies
_fdserver now requests delivery of SIGTERM if the parent process (the ODP instantiation process) dies, preventing it from becoming an orphan that gets re-attached to the init process.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-by: Mike Holmes mike.holmes@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
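Illustration only: a standalone sketch of the PR_SET_PDEATHSIG technique applied in the diff below, outside of the _fdserver code. The fork()/pause() scaffolding and the getppid() race check are illustrative additions, not part of the patch.

#include <sys/prctl.h>
#include <signal.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	pid_t child = fork();

	if (child == 0) {
		/* Child: ask the kernel to send SIGTERM when the parent exits,
		 * so the helper never lingers as a child of init. */
		if (prctl(PR_SET_PDEATHSIG, SIGTERM))
			perror("prctl");

		/* Close the race where the parent already died between
		 * fork() and prctl(): init (pid 1) would be the parent now. */
		if (getppid() == 1)
			_exit(EXIT_FAILURE);

		pause(); /* a server loop would run here */
		_exit(EXIT_SUCCESS);
	}

	return child < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}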
diff --git a/platform/linux-generic/_fdserver.c b/platform/linux-generic/_fdserver.c index 41a630b..9aed7a9 100644 --- a/platform/linux-generic/_fdserver.c +++ b/platform/linux-generic/_fdserver.c @@ -41,6 +41,8 @@ #include <odp_internal.h> #include <odp_debug_internal.h> #include <_fdserver_internal.h> +#include <sys/prctl.h> +#include <signal.h>
#include <stdio.h> #include <stdlib.h> @@ -622,6 +624,10 @@ int _odp_fdserver_init_global(void) /* TODO: pin the server on appropriate service cpu mask */ /* when (if) we can agree on the usage of service mask */
+ /* request to be killed if parent dies, hence avoiding */ + /* orphans being "adopted" by the init process... */ + prctl(PR_SET_PDEATHSIG, SIGTERM); + /* allocate the space for the file descriptor<->key table: */ fd_table = malloc(FDSERVER_MAX_ENTRIES * sizeof(fdentry_t)); if (!fd_table) {
commit b06ee329f944aeb7f3d03646aac384f88a00a7a5 Author: Matias Elo matias.elo@nokia.com Date: Fri Dec 2 12:56:28 2016 +0200
linux-gen: sched: new ordered lock implementation
Implement ordered locks using per-lock atomic counters. The counter values are compared to the queue's atomic context to guarantee ordered locking. Compared to the previous implementation, this enables parallel processing of ordered events outside of the lock context.
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
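Illustration only: a condensed sketch of the per-lock sequence counter described above, written with C11 atomics instead of the odp_atomic API; ord_lock_t and the function names are made up for the example.

#include <stdatomic.h>
#include <stdint.h>

typedef struct {
	_Atomic uint64_t seq; /* next ordered context allowed inside the lock */
} ord_lock_t;

/* Spin until the lock counter reaches this thread's ordered context id. */
static void ord_lock_acquire(ord_lock_t *lock, uint64_t my_ctx)
{
	while (atomic_load_explicit(&lock->seq, memory_order_acquire) != my_ctx)
		; /* busy wait; real code would pause the CPU here */
}

/* Let the next context in order enter the lock. */
static void ord_lock_release(ord_lock_t *lock, uint64_t my_ctx)
{
	atomic_store_explicit(&lock->seq, my_ctx + 1, memory_order_release);
}

Releasing the ordered context itself (release_ordered() in the diff) bumps every lock the thread never took, so a context that skips a lock cannot stall the next one.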
diff --git a/platform/linux-generic/include/odp_queue_internal.h b/platform/linux-generic/include/odp_queue_internal.h index b905bd8..8b55de1 100644 --- a/platform/linux-generic/include/odp_queue_internal.h +++ b/platform/linux-generic/include/odp_queue_internal.h @@ -59,6 +59,8 @@ struct queue_entry_s { struct { odp_atomic_u64_t ctx; /**< Current ordered context id */ odp_atomic_u64_t next_ctx; /**< Next unallocated context id */ + /** Array of ordered locks */ + odp_atomic_u64_t lock[CONFIG_QUEUE_MAX_ORD_LOCKS]; } ordered ODP_ALIGNED_CACHE;
enq_func_t enqueue ODP_ALIGNED_CACHE; diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c index 4c7f497..d9cb9f3 100644 --- a/platform/linux-generic/odp_queue.c +++ b/platform/linux-generic/odp_queue.c @@ -77,8 +77,14 @@ static int queue_init(queue_entry_t *queue, const char *name, queue->s.param.deq_mode = ODP_QUEUE_OP_DISABLED;
if (param->sched.sync == ODP_SCHED_SYNC_ORDERED) { + unsigned i; + odp_atomic_init_u64(&queue->s.ordered.ctx, 0); odp_atomic_init_u64(&queue->s.ordered.next_ctx, 0); + + for (i = 0; i < queue->s.param.sched.lock_count; i++) + odp_atomic_init_u64(&queue->s.ordered.lock[i], + 0); } } queue->s.type = queue->s.param.type; diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c index 2ce90aa..645630a 100644 --- a/platform/linux-generic/odp_schedule.c +++ b/platform/linux-generic/odp_schedule.c @@ -126,6 +126,15 @@ typedef struct { int num; } ordered_stash_t;
+/* Ordered lock states */ +typedef union { + uint8_t u8[CONFIG_QUEUE_MAX_ORD_LOCKS]; + uint32_t all; +} lock_called_t; + +ODP_STATIC_ASSERT(sizeof(lock_called_t) == sizeof(uint32_t), + "Lock_called_values_do_not_fit_in_uint32"); + /* Scheduler local data */ typedef struct { int thr; @@ -143,6 +152,7 @@ typedef struct { uint64_t ctx; /**< Ordered context id */ int stash_num; /**< Number of stashed enqueue operations */ uint8_t in_order; /**< Order status */ + lock_called_t lock_called; /**< States of ordered locks */ /** Storage for stashed enqueue operations */ ordered_stash_t stash[MAX_ORDERED_STASH]; } ordered; @@ -553,12 +563,21 @@ static inline void ordered_stash_release(void)
static inline void release_ordered(void) { + unsigned i; queue_entry_t *queue;
queue = sched_local.ordered.src_queue;
wait_for_order(queue);
+ /* Release all ordered locks */ + for (i = 0; i < queue->s.param.sched.lock_count; i++) { + if (!sched_local.ordered.lock_called.u8[i]) + odp_atomic_store_rel_u64(&queue->s.ordered.lock[i], + sched_local.ordered.ctx + 1); + } + + sched_local.ordered.lock_called.all = 0; sched_local.ordered.src_queue = NULL; sched_local.ordered.in_order = 0;
@@ -923,19 +942,46 @@ static void order_unlock(void) { }
-static void schedule_order_lock(unsigned lock_index ODP_UNUSED) +static void schedule_order_lock(unsigned lock_index) { + odp_atomic_u64_t *ord_lock; queue_entry_t *queue;
queue = sched_local.ordered.src_queue;
- ODP_ASSERT(queue && lock_index <= queue->s.param.sched.lock_count); + ODP_ASSERT(queue && lock_index <= queue->s.param.sched.lock_count && + !sched_local.ordered.lock_called.u8[lock_index]);
- wait_for_order(queue); + ord_lock = &queue->s.ordered.lock[lock_index]; + + /* Busy loop to synchronize ordered processing */ + while (1) { + uint64_t lock_seq; + + lock_seq = odp_atomic_load_acq_u64(ord_lock); + + if (lock_seq == sched_local.ordered.ctx) { + sched_local.ordered.lock_called.u8[lock_index] = 1; + return; + } + odp_cpu_pause(); + } }
-static void schedule_order_unlock(unsigned lock_index ODP_UNUSED) +static void schedule_order_unlock(unsigned lock_index) { + odp_atomic_u64_t *ord_lock; + queue_entry_t *queue; + + queue = sched_local.ordered.src_queue; + + ODP_ASSERT(queue && lock_index <= queue->s.param.sched.lock_count); + + ord_lock = &queue->s.ordered.lock[lock_index]; + + ODP_ASSERT(sched_local.ordered.ctx == odp_atomic_load_u64(ord_lock)); + + odp_atomic_store_rel_u64(ord_lock, sched_local.ordered.ctx + 1); }
static void schedule_pause(void)
commit a2676059469f61f1ffd58090b74f4dd975d172ac Author: Matias Elo matias.elo@nokia.com Date: Fri Dec 2 12:56:27 2016 +0200
linux-gen: sched: new ordered queue implementation
Add a new implementation for ordered queues. Compared to the old implementation, it is much simpler and improves performance by roughly 1-4x depending on the test case.
The implementation is based on an atomic ordered context, which only a single thread may possess at a time. Only the thread owning the atomic context may enqueue events scheduled from the ordered queue. All other threads put their enqueued events into a thread-local enqueue stash (ordered_stash_t). All stashed enqueue operations are performed in the original order once the thread acquires the ordered context. If the ordered stash becomes full, the enqueue blocks. At the latest, a thread blocks when its ev_stash is empty and it tries to release the ordered context.
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
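Illustration only: a compressed sketch of the stash-or-enqueue decision described above, with C11 atomics and made-up names (ordered_state_t, enq_fn_t) standing in for the scheduler internals.

#include <stdatomic.h>
#include <stdint.h>

#define MAX_STASH 512

typedef void (*enq_fn_t)(void *ev);

typedef struct {
	_Atomic uint64_t *q_ctx; /* per-queue: context currently in order */
	uint64_t my_ctx;         /* context id given to this thread at schedule */
	int stash_num;
	void *stash[MAX_STASH];
} ordered_state_t;

static int my_turn(const ordered_state_t *t)
{
	return atomic_load_explicit(t->q_ctx, memory_order_acquire) == t->my_ctx;
}

static void flush_stash(ordered_state_t *t, enq_fn_t do_enq)
{
	for (int i = 0; i < t->stash_num; i++)
		do_enq(t->stash[i]);       /* replay in original order */
	t->stash_num = 0;
}

/* Called for every enqueue performed while holding an ordered context. */
static void ordered_enq(ordered_state_t *t, void *ev, enq_fn_t do_enq)
{
	if (my_turn(t)) {
		flush_stash(t, do_enq);    /* in order: empty the stash, then */
		do_enq(ev);                /* enqueue directly */
		return;
	}
	if (t->stash_num < MAX_STASH) {
		t->stash[t->stash_num++] = ev; /* defer until context acquired */
		return;
	}
	while (!my_turn(t))                /* stash full: wait for our turn */
		;
	flush_stash(t, do_enq);
	do_enq(ev);
}

The real code in the diff additionally records an in_order flag once the thread has caught up, so subsequent enqueues from the same context bypass the check entirely.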
diff --git a/platform/linux-generic/include/odp_queue_internal.h b/platform/linux-generic/include/odp_queue_internal.h index df36b76..b905bd8 100644 --- a/platform/linux-generic/include/odp_queue_internal.h +++ b/platform/linux-generic/include/odp_queue_internal.h @@ -56,6 +56,11 @@ struct queue_entry_s { odp_buffer_hdr_t *tail; int status;
+ struct { + odp_atomic_u64_t ctx; /**< Current ordered context id */ + odp_atomic_u64_t next_ctx; /**< Next unallocated context id */ + } ordered ODP_ALIGNED_CACHE; + enq_func_t enqueue ODP_ALIGNED_CACHE; deq_func_t dequeue; enq_multi_func_t enqueue_multi; diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c index 99c91e7..4c7f497 100644 --- a/platform/linux-generic/odp_queue.c +++ b/platform/linux-generic/odp_queue.c @@ -73,9 +73,14 @@ static int queue_init(queue_entry_t *queue, const char *name, if (queue->s.param.sched.lock_count > sched_fn->max_ordered_locks()) return -1;
- if (param->type == ODP_QUEUE_TYPE_SCHED) + if (param->type == ODP_QUEUE_TYPE_SCHED) { queue->s.param.deq_mode = ODP_QUEUE_OP_DISABLED;
+ if (param->sched.sync == ODP_SCHED_SYNC_ORDERED) { + odp_atomic_init_u64(&queue->s.ordered.ctx, 0); + odp_atomic_init_u64(&queue->s.ordered.next_ctx, 0); + } + } queue->s.type = queue->s.param.type;
queue->s.enqueue = queue_enq; @@ -301,6 +306,13 @@ int odp_queue_destroy(odp_queue_t handle) ODP_ERR("queue \"%s\" not empty\n", queue->s.name); return -1; } + if (queue_is_ordered(queue) && + odp_atomic_load_u64(&queue->s.ordered.ctx) != + odp_atomic_load_u64(&queue->s.ordered.next_ctx)) { + UNLOCK(&queue->s.lock); + ODP_ERR("queue \"%s\" reorder incomplete\n", queue->s.name); + return -1; + }
switch (queue->s.status) { case QUEUE_STATUS_READY: diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c index 5bc274f..2ce90aa 100644 --- a/platform/linux-generic/odp_schedule.c +++ b/platform/linux-generic/odp_schedule.c @@ -111,11 +111,21 @@ ODP_STATIC_ASSERT((8 * sizeof(pri_mask_t)) >= QUEUES_PER_PRIO, #define MAX_DEQ CONFIG_BURST_SIZE
/* Maximum number of ordered locks per queue */ -#define MAX_ORDERED_LOCKS_PER_QUEUE 1 +#define MAX_ORDERED_LOCKS_PER_QUEUE 2
ODP_STATIC_ASSERT(MAX_ORDERED_LOCKS_PER_QUEUE <= CONFIG_QUEUE_MAX_ORD_LOCKS, "Too_many_ordered_locks");
+/* Ordered stash size */ +#define MAX_ORDERED_STASH 512 + +/* Storage for stashed enqueue operation arguments */ +typedef struct { + odp_buffer_hdr_t *buf_hdr[QUEUE_MULTI_MAX]; + queue_entry_t *queue; + int num; +} ordered_stash_t; + /* Scheduler local data */ typedef struct { int thr; @@ -128,7 +138,15 @@ typedef struct { uint32_t queue_index; odp_queue_t queue; odp_event_t ev_stash[MAX_DEQ]; - void *queue_entry; + struct { + queue_entry_t *src_queue; /**< Source queue entry */ + uint64_t ctx; /**< Ordered context id */ + int stash_num; /**< Number of stashed enqueue operations */ + uint8_t in_order; /**< Order status */ + /** Storage for stashed enqueue operations */ + ordered_stash_t stash[MAX_ORDERED_STASH]; + } ordered; + } sched_local_t;
/* Priority queue */ @@ -491,17 +509,81 @@ static void schedule_release_atomic(void) } }
+static inline int ordered_own_turn(queue_entry_t *queue) +{ + uint64_t ctx; + + ctx = odp_atomic_load_acq_u64(&queue->s.ordered.ctx); + + return ctx == sched_local.ordered.ctx; +} + +static inline void wait_for_order(queue_entry_t *queue) +{ + /* Busy loop to synchronize ordered processing */ + while (1) { + if (ordered_own_turn(queue)) + break; + odp_cpu_pause(); + } +} + +/** + * Perform stashed enqueue operations + * + * Should be called only when already in order. + */ +static inline void ordered_stash_release(void) +{ + int i; + + for (i = 0; i < sched_local.ordered.stash_num; i++) { + queue_entry_t *queue; + odp_buffer_hdr_t **buf_hdr; + int num; + + queue = sched_local.ordered.stash[i].queue; + buf_hdr = sched_local.ordered.stash[i].buf_hdr; + num = sched_local.ordered.stash[i].num; + + queue_enq_multi(queue, buf_hdr, num); + } + sched_local.ordered.stash_num = 0; +} + +static inline void release_ordered(void) +{ + queue_entry_t *queue; + + queue = sched_local.ordered.src_queue; + + wait_for_order(queue); + + sched_local.ordered.src_queue = NULL; + sched_local.ordered.in_order = 0; + + ordered_stash_release(); + + /* Next thread can continue processing */ + odp_atomic_add_rel_u64(&queue->s.ordered.ctx, 1); +} + static void schedule_release_ordered(void) { - /* Process ordered queue as atomic */ - schedule_release_atomic(); - sched_local.queue_entry = NULL; + queue_entry_t *queue; + + queue = sched_local.ordered.src_queue; + + if (odp_unlikely(!queue || sched_local.num)) + return; + + release_ordered(); }
static inline void schedule_release_context(void) { - if (sched_local.queue_entry != NULL) - schedule_release_ordered(); + if (sched_local.ordered.src_queue != NULL) + release_ordered(); else schedule_release_atomic(); } @@ -524,13 +606,41 @@ static inline int copy_events(odp_event_t out_ev[], unsigned int max) static int schedule_ord_enq_multi(uint32_t queue_index, void *buf_hdr[], int num, int *ret) { - (void)queue_index; - (void)buf_hdr; - (void)num; - (void)ret; + int i; + uint32_t stash_num = sched_local.ordered.stash_num; + queue_entry_t *dst_queue = get_qentry(queue_index); + queue_entry_t *src_queue = sched_local.ordered.src_queue;
- /* didn't consume the events */ - return 0; + if (!sched_local.ordered.src_queue || sched_local.ordered.in_order) + return 0; + + if (ordered_own_turn(src_queue)) { + /* Own turn, so can do enqueue directly. */ + sched_local.ordered.in_order = 1; + ordered_stash_release(); + return 0; + } + + if (odp_unlikely(stash_num >= MAX_ORDERED_STASH)) { + /* If the local stash is full, wait until it is our turn and + * then release the stash and do enqueue directly. */ + wait_for_order(src_queue); + + sched_local.ordered.in_order = 1; + + ordered_stash_release(); + return 0; + } + + sched_local.ordered.stash[stash_num].queue = dst_queue; + sched_local.ordered.stash[stash_num].num = num; + for (i = 0; i < num; i++) + sched_local.ordered.stash[stash_num].buf_hdr[i] = buf_hdr[i]; + + sched_local.ordered.stash_num++; + + *ret = num; + return 1; }
/* @@ -658,9 +768,21 @@ static int do_schedule(odp_queue_t *out_queue, odp_event_t out_ev[], ret = copy_events(out_ev, max_num);
if (ordered) { - /* Operate as atomic */ - sched_local.queue_index = qi; - sched_local.queue_entry = get_qentry(qi); + uint64_t ctx; + queue_entry_t *queue; + odp_atomic_u64_t *next_ctx; + + queue = get_qentry(qi); + next_ctx = &queue->s.ordered.next_ctx; + + ctx = odp_atomic_fetch_inc_u64(next_ctx); + + sched_local.ordered.ctx = ctx; + sched_local.ordered.src_queue = queue; + + /* Continue scheduling ordered queues */ + ring_enq(ring, PRIO_QUEUE_MASK, qi); + } else if (sched_cb_queue_is_atomic(qi)) { /* Hold queue during atomic access */ sched_local.queue_index = qi; @@ -785,8 +907,16 @@ static int schedule_multi(odp_queue_t *out_queue, uint64_t wait, return schedule_loop(out_queue, wait, events, num); }
-static void order_lock(void) +static inline void order_lock(void) { + queue_entry_t *queue; + + queue = sched_local.ordered.src_queue; + + if (!queue) + return; + + wait_for_order(queue); }
static void order_unlock(void) @@ -795,6 +925,13 @@ static void order_unlock(void)
static void schedule_order_lock(unsigned lock_index ODP_UNUSED) { + queue_entry_t *queue; + + queue = sched_local.ordered.src_queue; + + ODP_ASSERT(queue && lock_index <= queue->s.param.sched.lock_count); + + wait_for_order(queue); }
static void schedule_order_unlock(unsigned lock_index ODP_UNUSED)
commit 39acf771084aa4f16b60a6bdf9e5f3bed4f88cd9 Author: Matias Elo matias.elo@nokia.com Date: Fri Dec 2 12:56:26 2016 +0200
linux-gen: sched: add internal API for max number of ordered locks per queue
The number of supported ordered locks may vary between scheduler implementations. Add an internal scheduler API call for fetching the maximum value from the currently active scheduler.
Add an internal definition CONFIG_QUEUE_MAX_ORD_LOCKS for the scheduler-independent maximum value.
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
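Illustration only: a tiny self-contained example of the function-table pattern this change extends; the structure, names and the limits returned here are placeholders, not the actual ODP definitions.

#include <stdio.h>

typedef unsigned (*max_ordered_locks_fn_t)(void);

typedef struct {
	max_ordered_locks_fn_t max_ordered_locks;
	/* ... other scheduler hooks ... */
} sched_fn_example_t;

static unsigned default_max_ordered_locks(void) { return 1; } /* placeholder */
static unsigned sp_max_ordered_locks(void)      { return 1; } /* placeholder */

static const sched_fn_example_t schedule_default = {
	.max_ordered_locks = default_max_ordered_locks
};

static const sched_fn_example_t schedule_sp = {
	.max_ordered_locks = sp_max_ordered_locks
};

int main(void)
{
	/* Common queue code asks whichever scheduler is active, instead of
	 * relying on a compile-time constant shared by all schedulers. */
	const sched_fn_example_t *sched_fn = &schedule_default;

	printf("default scheduler: %u ordered locks per queue\n",
	       sched_fn->max_ordered_locks());

	sched_fn = &schedule_sp;
	printf("sp scheduler:      %u ordered locks per queue\n",
	       sched_fn->max_ordered_locks());

	return 0;
}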
diff --git a/platform/linux-generic/include/odp_config_internal.h b/platform/linux-generic/include/odp_config_internal.h index 9401fa1..06550e6 100644 --- a/platform/linux-generic/include/odp_config_internal.h +++ b/platform/linux-generic/include/odp_config_internal.h @@ -22,6 +22,11 @@ extern "C" { #define ODP_CONFIG_QUEUES 1024
/* + * Maximum number of ordered locks per queue + */ +#define CONFIG_QUEUE_MAX_ORD_LOCKS 4 + +/* * Maximum number of packet IO resources */ #define ODP_CONFIG_PKTIO_ENTRIES 64 diff --git a/platform/linux-generic/include/odp_schedule_if.h b/platform/linux-generic/include/odp_schedule_if.h index 72af01e..6c2b050 100644 --- a/platform/linux-generic/include/odp_schedule_if.h +++ b/platform/linux-generic/include/odp_schedule_if.h @@ -14,12 +14,6 @@ extern "C" { #include <odp/api/queue.h> #include <odp/api/schedule.h>
-/* Constants defined by the scheduler. These should be converted into interface - * functions. */ - -/* Number of ordered locks per queue */ -#define SCHEDULE_ORDERED_LOCKS_PER_QUEUE 2 - typedef void (*schedule_pktio_start_fn_t)(int pktio_index, int num_in_queue, int in_queue_idx[]); typedef int (*schedule_thr_add_fn_t)(odp_schedule_group_t group, int thr); @@ -38,6 +32,7 @@ typedef int (*schedule_init_local_fn_t)(void); typedef int (*schedule_term_local_fn_t)(void); typedef void (*schedule_order_lock_fn_t)(void); typedef void (*schedule_order_unlock_fn_t)(void); +typedef unsigned (*schedule_max_ordered_locks_fn_t)(void);
typedef struct schedule_fn_t { schedule_pktio_start_fn_t pktio_start; @@ -54,6 +49,7 @@ typedef struct schedule_fn_t { schedule_term_local_fn_t term_local; schedule_order_lock_fn_t order_lock; schedule_order_unlock_fn_t order_unlock; + schedule_max_ordered_locks_fn_t max_ordered_locks; } schedule_fn_t;
/* Interface towards the scheduler */ diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c index 74f384d..99c91e7 100644 --- a/platform/linux-generic/odp_queue.c +++ b/platform/linux-generic/odp_queue.c @@ -70,8 +70,7 @@ static int queue_init(queue_entry_t *queue, const char *name, queue->s.name[ODP_QUEUE_NAME_LEN - 1] = 0; } memcpy(&queue->s.param, param, sizeof(odp_queue_param_t)); - if (queue->s.param.sched.lock_count > - SCHEDULE_ORDERED_LOCKS_PER_QUEUE) + if (queue->s.param.sched.lock_count > sched_fn->max_ordered_locks()) return -1;
if (param->type == ODP_QUEUE_TYPE_SCHED) @@ -162,7 +161,7 @@ int odp_queue_capability(odp_queue_capability_t *capa)
/* Reserve some queues for internal use */ capa->max_queues = ODP_CONFIG_QUEUES - NUM_INTERNAL_QUEUES; - capa->max_ordered_locks = SCHEDULE_ORDERED_LOCKS_PER_QUEUE; + capa->max_ordered_locks = sched_fn->max_ordered_locks(); capa->max_sched_groups = sched_fn->num_grps(); capa->sched_prios = odp_schedule_num_prio();
diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c index 50639ff..5bc274f 100644 --- a/platform/linux-generic/odp_schedule.c +++ b/platform/linux-generic/odp_schedule.c @@ -110,6 +110,12 @@ ODP_STATIC_ASSERT((8 * sizeof(pri_mask_t)) >= QUEUES_PER_PRIO, /* Maximum number of dequeues */ #define MAX_DEQ CONFIG_BURST_SIZE
+/* Maximum number of ordered locks per queue */ +#define MAX_ORDERED_LOCKS_PER_QUEUE 1 + +ODP_STATIC_ASSERT(MAX_ORDERED_LOCKS_PER_QUEUE <= CONFIG_QUEUE_MAX_ORD_LOCKS, + "Too_many_ordered_locks"); + /* Scheduler local data */ typedef struct { int thr; @@ -323,6 +329,11 @@ static int schedule_term_local(void) return 0; }
+static unsigned schedule_max_ordered_locks(void) +{ + return MAX_ORDERED_LOCKS_PER_QUEUE; +} + static inline int queue_per_prio(uint32_t queue_index) { return ((QUEUES_PER_PRIO - 1) & queue_index); @@ -1026,7 +1037,8 @@ const schedule_fn_t schedule_default_fn = { .init_local = schedule_init_local, .term_local = schedule_term_local, .order_lock = order_lock, - .order_unlock = order_unlock + .order_unlock = order_unlock, + .max_ordered_locks = schedule_max_ordered_locks };
/* Fill in scheduler API calls */ diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c index 069b8bf..76d1357 100644 --- a/platform/linux-generic/odp_schedule_sp.c +++ b/platform/linux-generic/odp_schedule_sp.c @@ -28,6 +28,10 @@ #define GROUP_ALL ODP_SCHED_GROUP_ALL #define GROUP_WORKER ODP_SCHED_GROUP_WORKER #define GROUP_CONTROL ODP_SCHED_GROUP_CONTROL +#define MAX_ORDERED_LOCKS_PER_QUEUE 1 + +ODP_STATIC_ASSERT(MAX_ORDERED_LOCKS_PER_QUEUE <= CONFIG_QUEUE_MAX_ORD_LOCKS, + "Too_many_ordered_locks");
struct sched_cmd_t;
@@ -162,6 +166,11 @@ static int term_local(void) return 0; }
+static unsigned max_ordered_locks(void) +{ + return MAX_ORDERED_LOCKS_PER_QUEUE; +} + static int thr_add(odp_schedule_group_t group, int thr) { sched_group_t *sched_group = &sched_global.sched_group; @@ -682,7 +691,8 @@ const schedule_fn_t schedule_sp_fn = { .init_local = init_local, .term_local = term_local, .order_lock = order_lock, - .order_unlock = order_unlock + .order_unlock = order_unlock, + .max_ordered_locks = max_ordered_locks };
/* Fill in scheduler API calls */
commit 0d6d0923b2dd4d3097ea992af76408fd4281d84e Author: Matias Elo matias.elo@nokia.com Date: Fri Dec 2 12:56:25 2016 +0200
linux-gen: sched: remove old ordered queue implementation
Remove the old ordered queue code. It is temporarily replaced by atomic handling.
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/Makefile.am b/platform/linux-generic/Makefile.am index ed5088a..434e530 100644 --- a/platform/linux-generic/Makefile.am +++ b/platform/linux-generic/Makefile.am @@ -133,8 +133,6 @@ noinst_HEADERS = \ ${srcdir}/include/odp_queue_internal.h \ ${srcdir}/include/odp_ring_internal.h \ ${srcdir}/include/odp_schedule_if.h \ - ${srcdir}/include/odp_schedule_internal.h \ - ${srcdir}/include/odp_schedule_ordered_internal.h \ ${srcdir}/include/odp_sorted_list_internal.h \ ${srcdir}/include/odp_shm_internal.h \ ${srcdir}/include/odp_timer_internal.h \ @@ -186,7 +184,6 @@ __LIB__libodp_linux_la_SOURCES = \ odp_rwlock_recursive.c \ odp_schedule.c \ odp_schedule_if.c \ - odp_schedule_ordered.c \ odp_schedule_sp.c \ odp_shared_memory.c \ odp_sorted_list.c \ diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h index 4e75908..2064f7c 100644 --- a/platform/linux-generic/include/odp_buffer_internal.h +++ b/platform/linux-generic/include/odp_buffer_internal.h @@ -79,7 +79,6 @@ struct odp_buffer_hdr_t { uint32_t all; struct { uint32_t hdrdata:1; /* Data is in buffer hdr */ - uint32_t sustain:1; /* Sustain order */ }; } flags;
@@ -95,12 +94,6 @@ struct odp_buffer_hdr_t { uint32_t uarea_size; /* size of user area */ uint32_t segcount; /* segment count */ uint32_t segsize; /* segment size */ - uint64_t order; /* sequence for ordered queues */ - queue_entry_t *origin_qe; /* ordered queue origin */ - union { - queue_entry_t *target_qe; /* ordered queue target */ - uint64_t sync[SCHEDULE_ORDERED_LOCKS_PER_QUEUE]; - }; #ifdef _ODP_PKTIO_IPC /* ipc mapped process can not walk over pointers, * offset has to be used */ diff --git a/platform/linux-generic/include/odp_packet_io_queue.h b/platform/linux-generic/include/odp_packet_io_queue.h index 13b79f3..d1d4b22 100644 --- a/platform/linux-generic/include/odp_packet_io_queue.h +++ b/platform/linux-generic/include/odp_packet_io_queue.h @@ -28,11 +28,10 @@ extern "C" { ODP_STATIC_ASSERT(ODP_PKTIN_QUEUE_MAX_BURST >= QUEUE_MULTI_MAX, "ODP_PKTIN_DEQ_MULTI_MAX_ERROR");
-int pktin_enqueue(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, int sustain); +int pktin_enqueue(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr); odp_buffer_hdr_t *pktin_dequeue(queue_entry_t *queue);
-int pktin_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num, - int sustain); +int pktin_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num); int pktin_deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num);
diff --git a/platform/linux-generic/include/odp_queue_internal.h b/platform/linux-generic/include/odp_queue_internal.h index e223d9f..df36b76 100644 --- a/platform/linux-generic/include/odp_queue_internal.h +++ b/platform/linux-generic/include/odp_queue_internal.h @@ -41,11 +41,11 @@ extern "C" { /* forward declaration */ union queue_entry_u;
-typedef int (*enq_func_t)(union queue_entry_u *, odp_buffer_hdr_t *, int); +typedef int (*enq_func_t)(union queue_entry_u *, odp_buffer_hdr_t *); typedef odp_buffer_hdr_t *(*deq_func_t)(union queue_entry_u *);
typedef int (*enq_multi_func_t)(union queue_entry_u *, - odp_buffer_hdr_t **, int, int); + odp_buffer_hdr_t **, int); typedef int (*deq_multi_func_t)(union queue_entry_u *, odp_buffer_hdr_t **, int);
@@ -68,12 +68,6 @@ struct queue_entry_s { odp_pktin_queue_t pktin; odp_pktout_queue_t pktout; char name[ODP_QUEUE_NAME_LEN]; - uint64_t order_in; - uint64_t order_out; - odp_buffer_hdr_t *reorder_head; - odp_buffer_hdr_t *reorder_tail; - odp_atomic_u64_t sync_in[SCHEDULE_ORDERED_LOCKS_PER_QUEUE]; - odp_atomic_u64_t sync_out[SCHEDULE_ORDERED_LOCKS_PER_QUEUE]; };
union queue_entry_u { @@ -84,24 +78,12 @@ union queue_entry_u {
queue_entry_t *get_qentry(uint32_t queue_id);
-int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, int sustain); +int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr); odp_buffer_hdr_t *queue_deq(queue_entry_t *queue);
-int queue_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num, - int sustain); +int queue_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num); int queue_deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num);
-int queue_pktout_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, - int sustain); -int queue_pktout_enq_multi(queue_entry_t *queue, - odp_buffer_hdr_t *buf_hdr[], int num, int sustain); - -int queue_tm_reenq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, - int sustain); -int queue_tm_reenq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], - int num, int sustain); -int queue_tm_reorder(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr); - void queue_lock(queue_entry_t *queue); void queue_unlock(queue_entry_t *queue);
diff --git a/platform/linux-generic/include/odp_schedule_if.h b/platform/linux-generic/include/odp_schedule_if.h index 37f88a4..72af01e 100644 --- a/platform/linux-generic/include/odp_schedule_if.h +++ b/platform/linux-generic/include/odp_schedule_if.h @@ -31,8 +31,7 @@ typedef int (*schedule_init_queue_fn_t)(uint32_t queue_index, typedef void (*schedule_destroy_queue_fn_t)(uint32_t queue_index); typedef int (*schedule_sched_queue_fn_t)(uint32_t queue_index); typedef int (*schedule_ord_enq_multi_fn_t)(uint32_t queue_index, - void *buf_hdr[], int num, - int sustain, int *ret); + void *buf_hdr[], int num, int *ret); typedef int (*schedule_init_global_fn_t)(void); typedef int (*schedule_term_global_fn_t)(void); typedef int (*schedule_init_local_fn_t)(void); diff --git a/platform/linux-generic/include/odp_schedule_internal.h b/platform/linux-generic/include/odp_schedule_internal.h deleted file mode 100644 index 02637c2..0000000 --- a/platform/linux-generic/include/odp_schedule_internal.h +++ /dev/null @@ -1,50 +0,0 @@ -/* Copyright (c) 2016, Linaro Limited - * All rights reserved. - * - * SPDX-License-Identifier: BSD-3-Clause - */ - -#ifndef ODP_SCHEDULE_INTERNAL_H_ -#define ODP_SCHEDULE_INTERNAL_H_ - -#ifdef __cplusplus -extern "C" { -#endif - -/* Maximum number of dequeues */ -#define MAX_DEQ CONFIG_BURST_SIZE - -typedef struct { - int thr; - int num; - int index; - int pause; - uint16_t round; - uint16_t prefer_offset; - uint16_t pktin_polls; - uint32_t queue_index; - odp_queue_t queue; - odp_event_t ev_stash[MAX_DEQ]; - void *origin_qe; - uint64_t order; - uint64_t sync[SCHEDULE_ORDERED_LOCKS_PER_QUEUE]; - odp_pool_t pool; - int enq_called; - int ignore_ordered_context; -} sched_local_t; - -extern __thread sched_local_t sched_local; - -void cache_order_info(uint32_t queue_index); -int release_order(void *origin_qe, uint64_t order, - odp_pool_t pool, int enq_called); - -/* API functions implemented in odp_schedule_ordered.c */ -void schedule_order_lock(unsigned lock_index); -void schedule_order_unlock(unsigned lock_index); - -#ifdef __cplusplus -} -#endif - -#endif diff --git a/platform/linux-generic/include/odp_schedule_ordered_internal.h b/platform/linux-generic/include/odp_schedule_ordered_internal.h deleted file mode 100644 index 0ffbe3a..0000000 --- a/platform/linux-generic/include/odp_schedule_ordered_internal.h +++ /dev/null @@ -1,25 +0,0 @@ -/* Copyright (c) 2016, Linaro Limited - * All rights reserved. - * - * SPDX-License-Identifier: BSD-3-Clause - */ - -#ifndef ODP_SCHEDULE_ORDERED_INTERNAL_H_ -#define ODP_SCHEDULE_ORDERED_INTERNAL_H_ - -#ifdef __cplusplus -extern "C" { -#endif - -#define SUSTAIN_ORDER 1 - -int schedule_ordered_queue_enq(uint32_t queue_index, void *p_buf_hdr, - int sustain, int *ret); -int schedule_ordered_queue_enq_multi(uint32_t queue_index, void *p_buf_hdr[], - int num, int sustain, int *ret); - -#ifdef __cplusplus -} -#endif - -#endif diff --git a/platform/linux-generic/odp_packet_io.c b/platform/linux-generic/odp_packet_io.c index 7566789..98460a5 100644 --- a/platform/linux-generic/odp_packet_io.c +++ b/platform/linux-generic/odp_packet_io.c @@ -570,7 +570,7 @@ static inline int pktin_recv_buf(odp_pktin_queue_t queue, int ret;
dst_queue = queue_to_qentry(pkt_hdr->dst_queue); - ret = queue_enq(dst_queue, buf_hdr, 0); + ret = queue_enq(dst_queue, buf_hdr); if (ret < 0) odp_packet_free(pkt); continue; @@ -619,7 +619,7 @@ int pktout_deq_multi(queue_entry_t *qentry ODP_UNUSED, }
int pktin_enqueue(queue_entry_t *qentry ODP_UNUSED, - odp_buffer_hdr_t *buf_hdr ODP_UNUSED, int sustain ODP_UNUSED) + odp_buffer_hdr_t *buf_hdr ODP_UNUSED) { ODP_ABORT("attempted enqueue to a pktin queue"); return -1; @@ -641,14 +641,13 @@ odp_buffer_hdr_t *pktin_dequeue(queue_entry_t *qentry) return NULL;
if (pkts > 1) - queue_enq_multi(qentry, &hdr_tbl[1], pkts - 1, 0); + queue_enq_multi(qentry, &hdr_tbl[1], pkts - 1); buf_hdr = hdr_tbl[0]; return buf_hdr; }
int pktin_enq_multi(queue_entry_t *qentry ODP_UNUSED, - odp_buffer_hdr_t *buf_hdr[] ODP_UNUSED, - int num ODP_UNUSED, int sustain ODP_UNUSED) + odp_buffer_hdr_t *buf_hdr[] ODP_UNUSED, int num ODP_UNUSED) { ODP_ABORT("attempted enqueue to a pktin queue"); return 0; @@ -682,7 +681,7 @@ int pktin_deq_multi(queue_entry_t *qentry, odp_buffer_hdr_t *buf_hdr[], int num) hdr_tbl[j] = hdr_tbl[i];
if (j) - queue_enq_multi(qentry, hdr_tbl, j, 0); + queue_enq_multi(qentry, hdr_tbl, j); return nbr; }
@@ -720,7 +719,7 @@ int sched_cb_pktin_poll(int pktio_index, int num_queue, int index[])
queue = entry->s.in_queue[index[idx]].queue; qentry = queue_to_qentry(queue); - queue_enq_multi(qentry, hdr_tbl, num, 0); + queue_enq_multi(qentry, hdr_tbl, num); }
return 0; @@ -1386,9 +1385,9 @@ int odp_pktout_queue_config(odp_pktio_t pktio, qentry->s.pktout.pktio = pktio;
/* Override default enqueue / dequeue functions */ - qentry->s.enqueue = queue_pktout_enq; + qentry->s.enqueue = pktout_enqueue; qentry->s.dequeue = pktout_dequeue; - qentry->s.enqueue_multi = queue_pktout_enq_multi; + qentry->s.enqueue_multi = pktout_enq_multi; qentry->s.dequeue_multi = pktout_deq_multi;
entry->s.out_queue[i].queue = queue; diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index 8c38c93..4be3827 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -588,7 +588,6 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], uint32_t mask, i; pool_cache_t *cache; uint32_t cache_num, num_ch, num_deq, burst; - odp_buffer_hdr_t *hdr;
ring = &pool->ring.hdr; mask = pool->ring_mask; @@ -609,13 +608,8 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], }
/* Get buffers from the cache */ - for (i = 0; i < num_ch; i++) { + for (i = 0; i < num_ch; i++) buf[i] = cache->buf[cache_num - num_ch + i]; - hdr = buf_hdl_to_hdr(buf[i]); - hdr->origin_qe = NULL; - if (buf_hdr) - buf_hdr[i] = hdr; - }
/* If needed, get more from the global pool */ if (odp_unlikely(num_deq)) { @@ -635,11 +629,9 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], uint32_t idx = num_ch + i;
buf[idx] = (odp_buffer_t)(uintptr_t)data[i]; - hdr = buf_hdl_to_hdr(buf[idx]); - hdr->origin_qe = NULL;
if (buf_hdr) { - buf_hdr[idx] = hdr; + buf_hdr[idx] = buf_hdl_to_hdr(buf[idx]); /* Prefetch newly allocated and soon to be used * buffer headers. */ odp_prefetch(buf_hdr[idx]); @@ -656,6 +648,11 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], cache->num = cache_num - num_ch; }
+ if (buf_hdr) { + for (i = 0; i < num_ch; i++) + buf_hdr[i] = buf_hdl_to_hdr(buf[i]); + } + return num_ch + num_deq; }
diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c index 43e212a..74f384d 100644 --- a/platform/linux-generic/odp_queue.c +++ b/platform/linux-generic/odp_queue.c @@ -23,7 +23,6 @@ #include <odp/api/hints.h> #include <odp/api/sync.h> #include <odp/api/traffic_mngr.h> -#include <odp_schedule_ordered_internal.h>
#define NUM_INTERNAL_QUEUES 64
@@ -90,16 +89,13 @@ static int queue_init(queue_entry_t *queue, const char *name, queue->s.head = NULL; queue->s.tail = NULL;
- queue->s.reorder_head = NULL; - queue->s.reorder_tail = NULL; - return 0; }
int odp_queue_init_global(void) { - uint32_t i, j; + uint32_t i; odp_shm_t shm;
ODP_DBG("Queue init ... "); @@ -119,10 +115,6 @@ int odp_queue_init_global(void) /* init locks */ queue_entry_t *queue = get_qentry(i); LOCK_INIT(&queue->s.lock); - for (j = 0; j < SCHEDULE_ORDERED_LOCKS_PER_QUEUE; j++) { - odp_atomic_init_u64(&queue->s.sync_in[j], 0); - odp_atomic_init_u64(&queue->s.sync_out[j], 0); - } queue->s.index = i; queue->s.handle = queue_from_id(i); } @@ -310,12 +302,6 @@ int odp_queue_destroy(odp_queue_t handle) ODP_ERR("queue "%s" not empty\n", queue->s.name); return -1; } - if (queue_is_ordered(queue) && queue->s.reorder_head) { - UNLOCK(&queue->s.lock); - ODP_ERR("queue "%s" reorder queue not empty\n", - queue->s.name); - return -1; - }
switch (queue->s.status) { case QUEUE_STATUS_READY: @@ -379,15 +365,14 @@ odp_queue_t odp_queue_lookup(const char *name) }
static inline int enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], - int num, int sustain) + int num) { int sched = 0; int i, ret; odp_buffer_hdr_t *hdr, *tail, *next_hdr;
- /* Ordered queues do not use bursts */ if (sched_fn->ord_enq_multi(queue->s.index, (void **)buf_hdr, num, - sustain, &ret)) + &ret)) return ret;
/* Optimize the common case of single enqueue */ @@ -395,12 +380,14 @@ static inline int enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], tail = buf_hdr[0]; hdr = tail; hdr->burst_num = 0; + hdr->next = NULL; } else { int next;
/* Start from the last buffer header */ tail = buf_hdr[num - 1]; hdr = tail; + hdr->next = NULL; next = num - 2;
while (1) { @@ -453,17 +440,16 @@ static inline int enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], return num; /* All events enqueued */ }
-int queue_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num, - int sustain) +int queue_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], int num) { - return enq_multi(queue, buf_hdr, num, sustain); + return enq_multi(queue, buf_hdr, num); }
-int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, int sustain) +int queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr) { int ret;
- ret = enq_multi(queue, &buf_hdr, 1, sustain); + ret = enq_multi(queue, &buf_hdr, 1);
if (ret == 1) return 0; @@ -486,7 +472,7 @@ int odp_queue_enq_multi(odp_queue_t handle, const odp_event_t ev[], int num) buf_hdr[i] = buf_hdl_to_hdr(odp_buffer_from_event(ev[i]));
return num == 0 ? 0 : queue->s.enqueue_multi(queue, buf_hdr, - num, SUSTAIN_ORDER); + num); }
int odp_queue_enq(odp_queue_t handle, odp_event_t ev) @@ -500,7 +486,7 @@ int odp_queue_enq(odp_queue_t handle, odp_event_t ev) /* No chains via this entry */ buf_hdr->link = NULL;
- return queue->s.enqueue(queue, buf_hdr, SUSTAIN_ORDER); + return queue->s.enqueue(queue, buf_hdr); }
static inline int deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], @@ -557,22 +543,6 @@ static inline int deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], i++; }
- /* Ordered queue book keeping inside the lock */ - if (queue_is_ordered(queue)) { - for (j = 0; j < i; j++) { - uint32_t k; - - buf_hdr[j]->origin_qe = queue; - buf_hdr[j]->order = queue->s.order_in++; - for (k = 0; k < queue->s.param.sched.lock_count; k++) { - buf_hdr[j]->sync[k] = - odp_atomic_fetch_inc_u64 - (&queue->s.sync_in[k]); - } - buf_hdr[j]->flags.sustain = SUSTAIN_ORDER; - } - } - /* Write head only if updated */ if (updated) queue->s.head = hdr; @@ -583,11 +553,6 @@ static inline int deq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[],
UNLOCK(&queue->s.lock);
- /* Init origin_qe for non-ordered queues */ - if (!queue_is_ordered(queue)) - for (j = 0; j < i; j++) - buf_hdr[j]->origin_qe = NULL; - return i; }
diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c index cab68a3..50639ff 100644 --- a/platform/linux-generic/odp_schedule.c +++ b/platform/linux-generic/odp_schedule.c @@ -19,10 +19,9 @@ #include <odp/api/thrmask.h> #include <odp_config_internal.h> #include <odp_align_internal.h> -#include <odp_schedule_internal.h> -#include <odp_schedule_ordered_internal.h> #include <odp/api/sync.h> #include <odp_ring_internal.h> +#include <odp_queue_internal.h>
/* Number of priority levels */ #define NUM_PRIO 8 @@ -108,6 +107,24 @@ ODP_STATIC_ASSERT((8 * sizeof(pri_mask_t)) >= QUEUES_PER_PRIO, /* Start of named groups in group mask arrays */ #define SCHED_GROUP_NAMED (ODP_SCHED_GROUP_CONTROL + 1)
+/* Maximum number of dequeues */ +#define MAX_DEQ CONFIG_BURST_SIZE + +/* Scheduler local data */ +typedef struct { + int thr; + int num; + int index; + int pause; + uint16_t round; + uint16_t prefer_offset; + uint16_t pktin_polls; + uint32_t queue_index; + odp_queue_t queue; + odp_event_t ev_stash[MAX_DEQ]; + void *queue_entry; +} sched_local_t; + /* Priority queue */ typedef struct { /* Ring header */ @@ -465,23 +482,16 @@ static void schedule_release_atomic(void)
static void schedule_release_ordered(void) { - if (sched_local.origin_qe) { - int rc = release_order(sched_local.origin_qe, - sched_local.order, - sched_local.pool, - sched_local.enq_called); - if (rc == 0) - sched_local.origin_qe = NULL; - } + /* Process ordered queue as atomic */ + schedule_release_atomic(); + sched_local.queue_entry = NULL; }
static inline void schedule_release_context(void) { - if (sched_local.origin_qe != NULL) { - release_order(sched_local.origin_qe, sched_local.order, - sched_local.pool, sched_local.enq_called); - sched_local.origin_qe = NULL; - } else + if (sched_local.queue_entry != NULL) + schedule_release_ordered(); + else schedule_release_atomic(); }
@@ -500,6 +510,18 @@ static inline int copy_events(odp_event_t out_ev[], unsigned int max) return i; }
+static int schedule_ord_enq_multi(uint32_t queue_index, void *buf_hdr[], + int num, int *ret) +{ + (void)queue_index; + (void)buf_hdr; + (void)num; + (void)ret; + + /* didn't consume the events */ + return 0; +} + /* * Schedule queues */ @@ -596,12 +618,11 @@ static int do_schedule(odp_queue_t *out_queue, odp_event_t out_ev[],
ordered = sched_cb_queue_is_ordered(qi);
- /* For ordered queues we want consecutive events to - * be dispatched to separate threads, so do not cache - * them locally. - */ - if (ordered) - max_deq = 1; + /* Do not cache ordered events locally to improve + * parallelism. Ordered context can only be released + * when the local cache is empty. */ + if (ordered && max_num < MAX_DEQ) + max_deq = max_num;
num = sched_cb_queue_deq_multi(qi, sched_local.ev_stash, max_deq); @@ -626,11 +647,9 @@ static int do_schedule(odp_queue_t *out_queue, odp_event_t out_ev[], ret = copy_events(out_ev, max_num);
if (ordered) { - /* Continue scheduling ordered queues */ - ring_enq(ring, PRIO_QUEUE_MASK, qi); - - /* Cache order info about this event */ - cache_order_info(qi); + /* Operate as atomic */ + sched_local.queue_index = qi; + sched_local.queue_entry = get_qentry(qi); } else if (sched_cb_queue_is_atomic(qi)) { /* Hold queue during atomic access */ sched_local.queue_index = qi; @@ -763,6 +782,14 @@ static void order_unlock(void) { }
+static void schedule_order_lock(unsigned lock_index ODP_UNUSED) +{ +} + +static void schedule_order_unlock(unsigned lock_index ODP_UNUSED) +{ +} + static void schedule_pause(void) { sched_local.pause = 1; @@ -975,8 +1002,6 @@ static int schedule_sched_queue(uint32_t queue_index) int queue_per_prio = sched->queue[queue_index].queue_per_prio; ring_t *ring = &sched->prio_q[prio][queue_per_prio].ring;
- sched_local.ignore_ordered_context = 1; - ring_enq(ring, PRIO_QUEUE_MASK, queue_index); return 0; } @@ -995,7 +1020,7 @@ const schedule_fn_t schedule_default_fn = { .init_queue = schedule_init_queue, .destroy_queue = schedule_destroy_queue, .sched_queue = schedule_sched_queue, - .ord_enq_multi = schedule_ordered_queue_enq_multi, + .ord_enq_multi = schedule_ord_enq_multi, .init_global = schedule_init_global, .term_global = schedule_term_global, .init_local = schedule_init_local, diff --git a/platform/linux-generic/odp_schedule_ordered.c b/platform/linux-generic/odp_schedule_ordered.c deleted file mode 100644 index 5574faf..0000000 --- a/platform/linux-generic/odp_schedule_ordered.c +++ /dev/null @@ -1,818 +0,0 @@ -/* Copyright (c) 2016, Linaro Limited - * All rights reserved. - * - * SPDX-License-Identifier: BSD-3-Clause - */ - -#include <odp_packet_io_queue.h> -#include <odp_queue_internal.h> -#include <odp_schedule_if.h> -#include <odp_schedule_ordered_internal.h> -#include <odp_traffic_mngr_internal.h> -#include <odp_schedule_internal.h> - -#define RESOLVE_ORDER 0 -#define NOAPPEND 0 -#define APPEND 1 - -static inline void sched_enq_called(void) -{ - sched_local.enq_called = 1; -} - -static inline void get_sched_order(queue_entry_t **origin_qe, uint64_t *order) -{ - if (sched_local.ignore_ordered_context) { - sched_local.ignore_ordered_context = 0; - *origin_qe = NULL; - } else { - *origin_qe = sched_local.origin_qe; - *order = sched_local.order; - } -} - -static inline void sched_order_resolved(odp_buffer_hdr_t *buf_hdr) -{ - if (buf_hdr) - buf_hdr->origin_qe = NULL; - sched_local.origin_qe = NULL; -} - -static inline void get_qe_locks(queue_entry_t *qe1, queue_entry_t *qe2) -{ - /* Special case: enq to self */ - if (qe1 == qe2) { - queue_lock(qe1); - return; - } - - /* Since any queue can be either a source or target, queues do not have - * a natural locking hierarchy. Create one by using the qentry address - * as the ordering mechanism. - */ - - if (qe1 < qe2) { - queue_lock(qe1); - queue_lock(qe2); - } else { - queue_lock(qe2); - queue_lock(qe1); - } -} - -static inline void free_qe_locks(queue_entry_t *qe1, queue_entry_t *qe2) -{ - queue_unlock(qe1); - if (qe1 != qe2) - queue_unlock(qe2); -} - -static inline odp_buffer_hdr_t *get_buf_tail(odp_buffer_hdr_t *buf_hdr) -{ - odp_buffer_hdr_t *buf_tail = buf_hdr->link ? 
buf_hdr->link : buf_hdr; - - buf_hdr->next = buf_hdr->link; - buf_hdr->link = NULL; - - while (buf_tail->next) - buf_tail = buf_tail->next; - - return buf_tail; -} - -static inline void queue_add_list(queue_entry_t *queue, - odp_buffer_hdr_t *buf_head, - odp_buffer_hdr_t *buf_tail) -{ - if (queue->s.head) - queue->s.tail->next = buf_head; - else - queue->s.head = buf_head; - - queue->s.tail = buf_tail; -} - -static inline void queue_add_chain(queue_entry_t *queue, - odp_buffer_hdr_t *buf_hdr) -{ - queue_add_list(queue, buf_hdr, get_buf_tail(buf_hdr)); -} - -static inline void reorder_enq(queue_entry_t *queue, - uint64_t order, - queue_entry_t *origin_qe, - odp_buffer_hdr_t *buf_hdr, - int sustain) -{ - odp_buffer_hdr_t *reorder_buf = origin_qe->s.reorder_head; - odp_buffer_hdr_t *reorder_prev = NULL; - - while (reorder_buf && order >= reorder_buf->order) { - reorder_prev = reorder_buf; - reorder_buf = reorder_buf->next; - } - - buf_hdr->next = reorder_buf; - - if (reorder_prev) - reorder_prev->next = buf_hdr; - else - origin_qe->s.reorder_head = buf_hdr; - - if (!reorder_buf) - origin_qe->s.reorder_tail = buf_hdr; - - buf_hdr->origin_qe = origin_qe; - buf_hdr->target_qe = queue; - buf_hdr->order = order; - buf_hdr->flags.sustain = sustain; -} - -static inline void order_release(queue_entry_t *origin_qe, int count) -{ - uint64_t sync; - uint32_t i; - - origin_qe->s.order_out += count; - - for (i = 0; i < origin_qe->s.param.sched.lock_count; i++) { - sync = odp_atomic_load_u64(&origin_qe->s.sync_out[i]); - if (sync < origin_qe->s.order_out) - odp_atomic_fetch_add_u64(&origin_qe->s.sync_out[i], - origin_qe->s.order_out - sync); - } -} - -static inline int reorder_deq(queue_entry_t *queue, - queue_entry_t *origin_qe, - odp_buffer_hdr_t **reorder_tail_return, - odp_buffer_hdr_t **placeholder_buf_return, - int *release_count_return, - int *placeholder_count_return) -{ - odp_buffer_hdr_t *reorder_buf = origin_qe->s.reorder_head; - odp_buffer_hdr_t *reorder_tail = NULL; - odp_buffer_hdr_t *placeholder_buf = NULL; - odp_buffer_hdr_t *next_buf; - int deq_count = 0; - int release_count = 0; - int placeholder_count = 0; - - while (reorder_buf && - reorder_buf->order <= origin_qe->s.order_out + - release_count + placeholder_count) { - /* - * Elements on the reorder list fall into one of - * three categories: - * - * 1. Those destined for the same queue. These - * can be enq'd now if they were waiting to - * be unblocked by this enq. - * - * 2. Those representing placeholders for events - * whose ordering was released by a prior - * odp_schedule_release_ordered() call. These - * can now just be freed. - * - * 3. Those representing events destined for another - * queue. These cannot be consolidated with this - * enq since they have a different target. - * - * Detecting an element with an order sequence gap, an - * element in category 3, or running out of elements - * stops the scan. 
- */ - next_buf = reorder_buf->next; - - if (odp_likely(reorder_buf->target_qe == queue)) { - /* promote any chain */ - odp_buffer_hdr_t *reorder_link = - reorder_buf->link; - - if (reorder_link) { - reorder_buf->next = reorder_link; - reorder_buf->link = NULL; - while (reorder_link->next) - reorder_link = reorder_link->next; - reorder_link->next = next_buf; - reorder_tail = reorder_link; - } else { - reorder_tail = reorder_buf; - } - - deq_count++; - if (!reorder_buf->flags.sustain) - release_count++; - reorder_buf = next_buf; - } else if (!reorder_buf->target_qe) { - if (reorder_tail) - reorder_tail->next = next_buf; - else - origin_qe->s.reorder_head = next_buf; - - reorder_buf->next = placeholder_buf; - placeholder_buf = reorder_buf; - - reorder_buf = next_buf; - placeholder_count++; - } else { - break; - } - } - - *reorder_tail_return = reorder_tail; - *placeholder_buf_return = placeholder_buf; - *release_count_return = release_count; - *placeholder_count_return = placeholder_count; - - return deq_count; -} - -static inline void reorder_complete(queue_entry_t *origin_qe, - odp_buffer_hdr_t **reorder_buf_return, - odp_buffer_hdr_t **placeholder_buf, - int placeholder_append) -{ - odp_buffer_hdr_t *reorder_buf = origin_qe->s.reorder_head; - odp_buffer_hdr_t *next_buf; - - *reorder_buf_return = NULL; - if (!placeholder_append) - *placeholder_buf = NULL; - - while (reorder_buf && - reorder_buf->order <= origin_qe->s.order_out) { - next_buf = reorder_buf->next; - - if (!reorder_buf->target_qe) { - origin_qe->s.reorder_head = next_buf; - reorder_buf->next = *placeholder_buf; - *placeholder_buf = reorder_buf; - - reorder_buf = next_buf; - order_release(origin_qe, 1); - } else if (reorder_buf->flags.sustain) { - reorder_buf = next_buf; - } else { - *reorder_buf_return = origin_qe->s.reorder_head; - origin_qe->s.reorder_head = - origin_qe->s.reorder_head->next; - break; - } - } -} - -static inline void get_queue_order(queue_entry_t **origin_qe, uint64_t *order, - odp_buffer_hdr_t *buf_hdr) -{ - if (buf_hdr && buf_hdr->origin_qe) { - *origin_qe = buf_hdr->origin_qe; - *order = buf_hdr->order; - } else { - get_sched_order(origin_qe, order); - } -} - -int queue_tm_reenq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, - int sustain ODP_UNUSED) -{ - odp_tm_queue_t tm_queue = MAKE_ODP_TM_QUEUE((uint8_t *)queue - - offsetof(tm_queue_obj_t, - tm_qentry)); - odp_packet_t pkt = (odp_packet_t)buf_hdr->handle.handle; - - return odp_tm_enq(tm_queue, pkt); -} - -int queue_tm_reenq_multi(queue_entry_t *queue ODP_UNUSED, - odp_buffer_hdr_t *buf[] ODP_UNUSED, - int num ODP_UNUSED, - int sustain ODP_UNUSED) -{ - ODP_ABORT("Invalid call to queue_tm_reenq_multi()\n"); - return 0; -} - -int queue_tm_reorder(queue_entry_t *queue, - odp_buffer_hdr_t *buf_hdr) -{ - queue_entry_t *origin_qe; - uint64_t order; - - get_queue_order(&origin_qe, &order, buf_hdr); - - if (!origin_qe) - return 0; - - /* Check if we're in order */ - queue_lock(origin_qe); - if (odp_unlikely(origin_qe->s.status < QUEUE_STATUS_READY)) { - queue_unlock(origin_qe); - ODP_ERR("Bad origin queue status\n"); - return 0; - } - - sched_enq_called(); - - /* Wait if it's not our turn */ - if (order > origin_qe->s.order_out) { - reorder_enq(queue, order, origin_qe, buf_hdr, SUSTAIN_ORDER); - queue_unlock(origin_qe); - return 1; - } - - /* Back to TM to handle enqueue - * - * Note: Order will be resolved by a subsequent call to - * odp_schedule_release_ordered() or odp_schedule() as odp_tm_enq() - * calls never resolve order by themselves. 
- */ - queue_unlock(origin_qe); - return 0; -} - -static int queue_enq_internal(odp_buffer_hdr_t *buf_hdr) -{ - return buf_hdr->target_qe->s.enqueue(buf_hdr->target_qe, buf_hdr, - buf_hdr->flags.sustain); -} - -static int ordered_queue_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, - int sustain, queue_entry_t *origin_qe, - uint64_t order) -{ - odp_buffer_hdr_t *reorder_buf; - odp_buffer_hdr_t *next_buf; - odp_buffer_hdr_t *reorder_tail; - odp_buffer_hdr_t *placeholder_buf = NULL; - int release_count, placeholder_count; - int sched = 0; - - /* Need two locks for enq operations from ordered queues */ - get_qe_locks(origin_qe, queue); - - if (odp_unlikely(origin_qe->s.status < QUEUE_STATUS_READY || - queue->s.status < QUEUE_STATUS_READY)) { - free_qe_locks(queue, origin_qe); - ODP_ERR("Bad queue status\n"); - ODP_ERR("queue = %s, origin q = %s, buf = %p\n", - queue->s.name, origin_qe->s.name, buf_hdr); - return -1; - } - - /* Remember that enq was called for this order */ - sched_enq_called(); - - /* We can only complete this enq if we're in order */ - if (order > origin_qe->s.order_out) { - reorder_enq(queue, order, origin_qe, buf_hdr, sustain); - - /* This enq can't complete until order is restored, so - * we're done here. - */ - free_qe_locks(queue, origin_qe); - return 0; - } - - /* Resolve order if requested */ - if (!sustain) { - order_release(origin_qe, 1); - sched_order_resolved(buf_hdr); - } - - /* Update queue status */ - if (queue->s.status == QUEUE_STATUS_NOTSCHED) { - queue->s.status = QUEUE_STATUS_SCHED; - sched = 1; - } - - /* We're in order, however the reorder queue may have other buffers - * sharing this order on it and this buffer must not be enqueued ahead - * of them. If the reorder queue is empty we can short-cut and - * simply add to the target queue directly. - */ - - if (!origin_qe->s.reorder_head) { - queue_add_chain(queue, buf_hdr); - free_qe_locks(queue, origin_qe); - - /* Add queue to scheduling */ - if (sched && sched_fn->sched_queue(queue->s.index)) - ODP_ABORT("schedule_queue failed\n"); - return 0; - } - - /* The reorder_queue is non-empty, so sort this buffer into it. Note - * that we force the sustain bit on here because we'll be removing - * this immediately and we already accounted for this order earlier. - */ - reorder_enq(queue, order, origin_qe, buf_hdr, 1); - - /* Pick up this element, and all others resolved by this enq, - * and add them to the target queue. - */ - reorder_deq(queue, origin_qe, &reorder_tail, &placeholder_buf, - &release_count, &placeholder_count); - - /* Move the list from the reorder queue to the target queue */ - if (queue->s.head) - queue->s.tail->next = origin_qe->s.reorder_head; - else - queue->s.head = origin_qe->s.reorder_head; - queue->s.tail = reorder_tail; - origin_qe->s.reorder_head = reorder_tail->next; - reorder_tail->next = NULL; - - /* Reflect resolved orders in the output sequence */ - order_release(origin_qe, release_count + placeholder_count); - - /* Now handle any resolved orders for events destined for other - * queues, appending placeholder bufs as needed. 
- */ - if (origin_qe != queue) - queue_unlock(queue); - - /* Add queue to scheduling */ - if (sched && sched_fn->sched_queue(queue->s.index)) - ODP_ABORT("schedule_queue failed\n"); - - reorder_complete(origin_qe, &reorder_buf, &placeholder_buf, APPEND); - queue_unlock(origin_qe); - - if (reorder_buf) - queue_enq_internal(reorder_buf); - - /* Free all placeholder bufs that are now released */ - while (placeholder_buf) { - next_buf = placeholder_buf->next; - odp_buffer_free(placeholder_buf->handle.handle); - placeholder_buf = next_buf; - } - - return 0; -} - -int schedule_ordered_queue_enq_multi(uint32_t queue_index, void *p_buf_hdr[], - int num, int sustain, int *ret) -{ - queue_entry_t *origin_qe; - uint64_t order; - int i, rc; - queue_entry_t *qe = get_qentry(queue_index); - odp_buffer_hdr_t *first_hdr = p_buf_hdr[0]; - odp_buffer_hdr_t **buf_hdr = (odp_buffer_hdr_t **)p_buf_hdr; - - /* Chain input buffers together */ - for (i = 0; i < num - 1; i++) { - buf_hdr[i]->next = buf_hdr[i + 1]; - buf_hdr[i]->burst_num = 0; - } - - buf_hdr[num - 1]->next = NULL; - - /* Handle ordered enqueues commonly via links */ - get_queue_order(&origin_qe, &order, first_hdr); - if (origin_qe) { - first_hdr->link = first_hdr->next; - rc = ordered_queue_enq(qe, first_hdr, sustain, - origin_qe, order); - *ret = rc == 0 ? num : rc; - return 1; - } - - return 0; -} - -int queue_pktout_enq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr, - int sustain) -{ - queue_entry_t *origin_qe; - uint64_t order; - int rc; - - /* Special processing needed only if we came from an ordered queue */ - get_queue_order(&origin_qe, &order, buf_hdr); - if (!origin_qe) - return pktout_enqueue(queue, buf_hdr); - - /* Must lock origin_qe for ordered processing */ - queue_lock(origin_qe); - if (odp_unlikely(origin_qe->s.status < QUEUE_STATUS_READY)) { - queue_unlock(origin_qe); - ODP_ERR("Bad origin queue status\n"); - return -1; - } - - /* We can only complete the enq if we're in order */ - sched_enq_called(); - if (order > origin_qe->s.order_out) { - reorder_enq(queue, order, origin_qe, buf_hdr, sustain); - - /* This enq can't complete until order is restored, so - * we're done here. - */ - queue_unlock(origin_qe); - return 0; - } - - /* Perform our enq since we're in order. - * Note: Don't hold the origin_qe lock across an I/O operation! - */ - queue_unlock(origin_qe); - - /* Handle any chained buffers (internal calls) */ - if (buf_hdr->link) { - odp_buffer_hdr_t *buf_hdrs[QUEUE_MULTI_MAX]; - odp_buffer_hdr_t *next_buf; - int num = 0; - - next_buf = buf_hdr->link; - buf_hdr->link = NULL; - - while (next_buf) { - buf_hdrs[num++] = next_buf; - next_buf = next_buf->next; - } - - rc = pktout_enq_multi(queue, buf_hdrs, num); - if (rc < num) - return -1; - } else { - rc = pktout_enqueue(queue, buf_hdr); - if (rc) - return rc; - } - - /* Reacquire the lock following the I/O send. Note that we're still - * guaranteed to be in order here since we haven't released - * order yet. - */ - queue_lock(origin_qe); - if (odp_unlikely(origin_qe->s.status < QUEUE_STATUS_READY)) { - queue_unlock(origin_qe); - ODP_ERR("Bad origin queue status\n"); - return -1; - } - - /* Account for this ordered enq */ - if (!sustain) { - order_release(origin_qe, 1); - sched_order_resolved(NULL); - } - - /* Now check to see if our successful enq has unblocked other buffers - * in the origin's reorder queue. 
- */ - odp_buffer_hdr_t *reorder_buf; - odp_buffer_hdr_t *next_buf; - odp_buffer_hdr_t *reorder_tail; - odp_buffer_hdr_t *xmit_buf; - odp_buffer_hdr_t *placeholder_buf; - int release_count, placeholder_count; - - /* Send released buffers as well */ - if (reorder_deq(queue, origin_qe, &reorder_tail, &placeholder_buf, - &release_count, &placeholder_count)) { - xmit_buf = origin_qe->s.reorder_head; - origin_qe->s.reorder_head = reorder_tail->next; - reorder_tail->next = NULL; - queue_unlock(origin_qe); - - do { - next_buf = xmit_buf->next; - pktout_enqueue(queue, xmit_buf); - xmit_buf = next_buf; - } while (xmit_buf); - - /* Reacquire the origin_qe lock to continue */ - queue_lock(origin_qe); - if (odp_unlikely(origin_qe->s.status < QUEUE_STATUS_READY)) { - queue_unlock(origin_qe); - ODP_ERR("Bad origin queue status\n"); - return -1; - } - } - - /* Update the order sequence to reflect the deq'd elements */ - order_release(origin_qe, release_count + placeholder_count); - - /* Now handle sends to other queues that are ready to go */ - reorder_complete(origin_qe, &reorder_buf, &placeholder_buf, APPEND); - - /* We're fully done with the origin_qe at last */ - queue_unlock(origin_qe); - - /* Now send the next buffer to its target queue */ - if (reorder_buf) - queue_enq_internal(reorder_buf); - - /* Free all placeholder bufs that are now released */ - while (placeholder_buf) { - next_buf = placeholder_buf->next; - odp_buffer_free(placeholder_buf->handle.handle); - placeholder_buf = next_buf; - } - - return 0; -} - -int queue_pktout_enq_multi(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr[], - int num, int sustain) -{ - int i, rc; - queue_entry_t *origin_qe; - uint64_t order; - - /* If we're not ordered, handle directly */ - get_queue_order(&origin_qe, &order, buf_hdr[0]); - if (!origin_qe) - return pktout_enq_multi(queue, buf_hdr, num); - - /* Chain input buffers together */ - for (i = 0; i < num - 1; i++) - buf_hdr[i]->next = buf_hdr[i + 1]; - - buf_hdr[num - 1]->next = NULL; - - /* Handle commonly via links */ - buf_hdr[0]->link = buf_hdr[0]->next; - rc = queue_pktout_enq(queue, buf_hdr[0], sustain); - return rc == 0 ? num : rc; -} - -/* These routines exists here rather than in odp_schedule - * because they operate on queue interenal structures - */ -int release_order(void *origin_qe_ptr, uint64_t order, - odp_pool_t pool, int enq_called) -{ - odp_buffer_t placeholder_buf; - odp_buffer_hdr_t *placeholder_buf_hdr, *reorder_buf, *next_buf; - queue_entry_t *origin_qe = origin_qe_ptr; - - /* Must lock the origin queue to process the release */ - queue_lock(origin_qe); - - /* If we are in order we can release immediately since there can be no - * confusion about intermediate elements - */ - if (order <= origin_qe->s.order_out) { - reorder_buf = origin_qe->s.reorder_head; - - /* We're in order, however there may be one or more events on - * the reorder queue that are part of this order. If that is - * the case, remove them and let ordered_queue_enq() handle - * them and resolve the order for us. 
- */ - if (reorder_buf && reorder_buf->order == order) { - odp_buffer_hdr_t *reorder_head = reorder_buf; - - next_buf = reorder_buf->next; - - while (next_buf && next_buf->order == order) { - reorder_buf = next_buf; - next_buf = next_buf->next; - } - - origin_qe->s.reorder_head = reorder_buf->next; - reorder_buf->next = NULL; - - queue_unlock(origin_qe); - reorder_head->link = reorder_buf->next; - return ordered_queue_enq(reorder_head->target_qe, - reorder_head, RESOLVE_ORDER, - origin_qe, order); - } - - /* Reorder queue has no elements for this order, so it's safe - * to resolve order here - */ - order_release(origin_qe, 1); - - /* Check if this release allows us to unblock waiters. At the - * point of this call, the reorder list may contain zero or - * more placeholders that need to be freed, followed by zero - * or one complete reorder buffer chain. Note that since we - * are releasing order, we know no further enqs for this order - * can occur, so ignore the sustain bit to clear out our - * element(s) on the reorder queue - */ - reorder_complete(origin_qe, &reorder_buf, - &placeholder_buf_hdr, NOAPPEND); - - /* Now safe to unlock */ - queue_unlock(origin_qe); - - /* If reorder_buf has a target, do the enq now */ - if (reorder_buf) - queue_enq_internal(reorder_buf); - - while (placeholder_buf_hdr) { - odp_buffer_hdr_t *placeholder_next = - placeholder_buf_hdr->next; - - odp_buffer_free(placeholder_buf_hdr->handle.handle); - placeholder_buf_hdr = placeholder_next; - } - - return 0; - } - - /* If we are not in order we need a placeholder to represent our - * "place in line" unless we have issued enqs, in which case we - * already have a place in the reorder queue. If we need a - * placeholder, use an element from the same pool we were scheduled - * with is from, otherwise just ensure that the final element for our - * order is not marked sustain. - */ - if (enq_called) { - reorder_buf = NULL; - next_buf = origin_qe->s.reorder_head; - - while (next_buf && next_buf->order <= order) { - reorder_buf = next_buf; - next_buf = next_buf->next; - } - - if (reorder_buf && reorder_buf->order == order) { - reorder_buf->flags.sustain = 0; - queue_unlock(origin_qe); - return 0; - } - } - - placeholder_buf = odp_buffer_alloc(pool); - - /* Can't release if no placeholder is available */ - if (odp_unlikely(placeholder_buf == ODP_BUFFER_INVALID)) { - queue_unlock(origin_qe); - return -1; - } - - placeholder_buf_hdr = buf_hdl_to_hdr(placeholder_buf); - - /* Copy info to placeholder and add it to the reorder queue */ - placeholder_buf_hdr->origin_qe = origin_qe; - placeholder_buf_hdr->order = order; - placeholder_buf_hdr->flags.sustain = 0; - - reorder_enq(NULL, order, origin_qe, placeholder_buf_hdr, 0); - - queue_unlock(origin_qe); - return 0; -} - -void schedule_order_lock(unsigned lock_index) -{ - queue_entry_t *origin_qe; - uint64_t sync, sync_out; - - origin_qe = sched_local.origin_qe; - if (!origin_qe || lock_index >= origin_qe->s.param.sched.lock_count) - return; - - sync = sched_local.sync[lock_index]; - sync_out = odp_atomic_load_u64(&origin_qe->s.sync_out[lock_index]); - ODP_ASSERT(sync >= sync_out); - - /* Wait until we are in order. Note that sync_out will be incremented - * both by unlocks as well as order resolution, so we're OK if only - * some events in the ordered flow need to lock. 
- */ - while (sync != sync_out) { - odp_cpu_pause(); - sync_out = - odp_atomic_load_u64(&origin_qe->s.sync_out[lock_index]); - } -} - -void schedule_order_unlock(unsigned lock_index) -{ - queue_entry_t *origin_qe; - - origin_qe = sched_local.origin_qe; - if (!origin_qe || lock_index >= origin_qe->s.param.sched.lock_count) - return; - ODP_ASSERT(sched_local.sync[lock_index] == - odp_atomic_load_u64(&origin_qe->s.sync_out[lock_index])); - - /* Release the ordered lock */ - odp_atomic_fetch_inc_u64(&origin_qe->s.sync_out[lock_index]); -} - -void cache_order_info(uint32_t queue_index) -{ - uint32_t i; - queue_entry_t *qe = get_qentry(queue_index); - odp_event_t ev = sched_local.ev_stash[0]; - odp_buffer_hdr_t *buf_hdr = buf_hdl_to_hdr(odp_buffer_from_event(ev)); - - sched_local.origin_qe = qe; - sched_local.order = buf_hdr->order; - sched_local.pool = buf_hdr->pool_hdl; - - for (i = 0; i < qe->s.param.sched.lock_count; i++) - sched_local.sync[i] = buf_hdr->sync[i]; - - sched_local.enq_called = 0; -} diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c index 5090a5c..069b8bf 100644 --- a/platform/linux-generic/odp_schedule_sp.c +++ b/platform/linux-generic/odp_schedule_sp.c @@ -299,12 +299,11 @@ static int sched_queue(uint32_t qi) }
static int ord_enq_multi(uint32_t queue_index, void *buf_hdr[], int num, - int sustain, int *ret) + int *ret) { (void)queue_index; (void)buf_hdr; (void)num; - (void)sustain; (void)ret;
/* didn't consume the events */ diff --git a/platform/linux-generic/odp_traffic_mngr.c b/platform/linux-generic/odp_traffic_mngr.c index 62e5c63..9dc3a86 100644 --- a/platform/linux-generic/odp_traffic_mngr.c +++ b/platform/linux-generic/odp_traffic_mngr.c @@ -99,6 +99,24 @@ static odp_bool_t tm_demote_pkt_desc(tm_system_t *tm_system, tm_shaper_obj_t *timer_shaper, pkt_desc_t *demoted_pkt_desc);
+static int queue_tm_reenq(queue_entry_t *queue, odp_buffer_hdr_t *buf_hdr) +{ + odp_tm_queue_t tm_queue = MAKE_ODP_TM_QUEUE((uint8_t *)queue - + offsetof(tm_queue_obj_t, + tm_qentry)); + odp_packet_t pkt = (odp_packet_t)buf_hdr->handle.handle; + + return odp_tm_enq(tm_queue, pkt); +} + +static int queue_tm_reenq_multi(queue_entry_t *queue ODP_UNUSED, + odp_buffer_hdr_t *buf[] ODP_UNUSED, + int num ODP_UNUSED) +{ + ODP_ABORT("Invalid call to queue_tm_reenq_multi()\n"); + return 0; +} + static tm_queue_obj_t *get_tm_queue_obj(tm_system_t *tm_system, pkt_desc_t *pkt_desc) { @@ -1861,13 +1879,6 @@ static int tm_enqueue(tm_system_t *tm_system, odp_bool_t drop_eligible, drop; uint32_t frame_len, pkt_depth; int rc; - odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); - - /* If we're from an ordered queue and not in order - * record the event and wait until order is resolved - */ - if (queue_tm_reorder(&tm_queue_obj->tm_qentry, &pkt_hdr->buf_hdr)) - return 0;
tm_group = GET_TM_GROUP(tm_system->odp_tm_group); if (tm_group->first_enq == 0) { @@ -1888,7 +1899,10 @@ static int tm_enqueue(tm_system_t *tm_system,
work_item.queue_num = tm_queue_obj->queue_num; work_item.pkt = pkt; + sched_fn->order_lock(); rc = input_work_queue_append(tm_system, &work_item); + sched_fn->order_unlock(); + if (rc < 0) { ODP_DBG("%s work queue full\n", __func__); return rc; diff --git a/platform/linux-generic/pktio/loop.c b/platform/linux-generic/pktio/loop.c index 28dd404..7096283 100644 --- a/platform/linux-generic/pktio/loop.c +++ b/platform/linux-generic/pktio/loop.c @@ -169,7 +169,7 @@ static int loopback_send(pktio_entry_t *pktio_entry, int index ODP_UNUSED, odp_ticketlock_lock(&pktio_entry->s.txl);
qentry = queue_to_qentry(pktio_entry->s.pkt_loop.loopq); - ret = queue_enq_multi(qentry, hdr_tbl, len, 0); + ret = queue_enq_multi(qentry, hdr_tbl, len);
if (ret > 0) { pktio_entry->s.stats.out_ucast_pkts += ret;
commit 2997a78f270cdb34c82f805f8103f660ed1bcdf2 Author: Matias Elo matias.elo@nokia.com Date: Fri Dec 2 12:56:24 2016 +0200
linux-gen: sched: add internal APIs for locking/unlocking ordered processing
The internal ordered processing locking functions can be more streamlined than the public API functions.
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
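For context, a minimal sketch of how an internal component is expected to call the new hooks; the traffic manager does exactly this around input_work_queue_append() elsewhere in this push. The append_work() helper and its argument are hypothetical placeholders; sched_fn, order_lock and order_unlock come from this patch, and both the default and sp schedulers currently implement them as no-ops.

#include <odp_schedule_if.h>

/* Hypothetical non-reentrant work queue append, for illustration only */
int append_work(void *work);

static int ordered_append(void *work)
{
	int rc;

	/* Internal, streamlined counterparts of the public ordered lock
	 * API: serialize the critical section while holding the ordered
	 * context. No-ops in schedulers that do not need them. */
	sched_fn->order_lock();
	rc = append_work(work);
	sched_fn->order_unlock();

	return rc;
}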
diff --git a/platform/linux-generic/include/odp_schedule_if.h b/platform/linux-generic/include/odp_schedule_if.h index df73e70..37f88a4 100644 --- a/platform/linux-generic/include/odp_schedule_if.h +++ b/platform/linux-generic/include/odp_schedule_if.h @@ -37,6 +37,8 @@ typedef int (*schedule_init_global_fn_t)(void); typedef int (*schedule_term_global_fn_t)(void); typedef int (*schedule_init_local_fn_t)(void); typedef int (*schedule_term_local_fn_t)(void); +typedef void (*schedule_order_lock_fn_t)(void); +typedef void (*schedule_order_unlock_fn_t)(void);
typedef struct schedule_fn_t { schedule_pktio_start_fn_t pktio_start; @@ -51,6 +53,8 @@ typedef struct schedule_fn_t { schedule_term_global_fn_t term_global; schedule_init_local_fn_t init_local; schedule_term_local_fn_t term_local; + schedule_order_lock_fn_t order_lock; + schedule_order_unlock_fn_t order_unlock; } schedule_fn_t;
/* Interface towards the scheduler */ diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c index dfc9555..cab68a3 100644 --- a/platform/linux-generic/odp_schedule.c +++ b/platform/linux-generic/odp_schedule.c @@ -755,6 +755,14 @@ static int schedule_multi(odp_queue_t *out_queue, uint64_t wait, return schedule_loop(out_queue, wait, events, num); }
+static void order_lock(void) +{ +} + +static void order_unlock(void) +{ +} + static void schedule_pause(void) { sched_local.pause = 1; @@ -991,7 +999,9 @@ const schedule_fn_t schedule_default_fn = { .init_global = schedule_init_global, .term_global = schedule_term_global, .init_local = schedule_init_local, - .term_local = schedule_term_local + .term_local = schedule_term_local, + .order_lock = order_lock, + .order_unlock = order_unlock };
/* Fill in scheduler API calls */ diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c index 8b355da..5090a5c 100644 --- a/platform/linux-generic/odp_schedule_sp.c +++ b/platform/linux-generic/odp_schedule_sp.c @@ -660,6 +660,14 @@ static void schedule_order_unlock(unsigned lock_index) (void)lock_index; }
+static void order_lock(void) +{ +} + +static void order_unlock(void) +{ +} + /* Fill in scheduler interface */ const schedule_fn_t schedule_sp_fn = { .pktio_start = pktio_start, @@ -673,7 +681,9 @@ const schedule_fn_t schedule_sp_fn = { .init_global = init_global, .term_global = term_global, .init_local = init_local, - .term_local = term_local + .term_local = term_local, + .order_lock = order_lock, + .order_unlock = order_unlock };
/* Fill in scheduler API calls */
commit bacd73a34768ce859f8136f29bda70bbccbdb45e Author: Bill Fischofer bill.fischofer@linaro.org Date: Wed Nov 30 17:08:48 2016 -0600
linux-generic: pool: reset origin_qe on buffer allocation
Resolve bug https://bugs.linaro.org/show_bug.cgi?id=2622 by re-initializing origin_qe to NULL when a buffer is allocated. This step was omitted in the switch to ring pool allocation introduced in commit ID c8cf1d87783d4b4c628f219803b78731b8d4ade4
Signed-off-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-and-tested-by: Yi He yi.he@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
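The essence of the change, pulled out of the diff below as a minimal sketch (odp_buffer_hdr_t, buf_hdl_to_hdr() and the origin_qe field are the internal pool/buffer definitions used there and are assumed to be in scope):

static inline void reset_origin(odp_buffer_t buf)
{
	odp_buffer_hdr_t *hdr = buf_hdl_to_hdr(buf);

	/* A freshly allocated buffer must not carry the ordered-queue
	 * context of its previous owner. */
	hdr->origin_qe = NULL;
}

In the patch this reset is done inline in buffer_alloc_multi(), on both the local-cache path and the global-ring path.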
diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index 4be3827..8c38c93 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -588,6 +588,7 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], uint32_t mask, i; pool_cache_t *cache; uint32_t cache_num, num_ch, num_deq, burst; + odp_buffer_hdr_t *hdr;
ring = &pool->ring.hdr; mask = pool->ring_mask; @@ -608,8 +609,13 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], }
/* Get buffers from the cache */ - for (i = 0; i < num_ch; i++) + for (i = 0; i < num_ch; i++) { buf[i] = cache->buf[cache_num - num_ch + i]; + hdr = buf_hdl_to_hdr(buf[i]); + hdr->origin_qe = NULL; + if (buf_hdr) + buf_hdr[i] = hdr; + }
/* If needed, get more from the global pool */ if (odp_unlikely(num_deq)) { @@ -629,9 +635,11 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], uint32_t idx = num_ch + i;
buf[idx] = (odp_buffer_t)(uintptr_t)data[i]; + hdr = buf_hdl_to_hdr(buf[idx]); + hdr->origin_qe = NULL;
if (buf_hdr) { - buf_hdr[idx] = buf_hdl_to_hdr(buf[idx]); + buf_hdr[idx] = hdr; /* Prefetch newly allocated and soon to be used * buffer headers. */ odp_prefetch(buf_hdr[idx]); @@ -648,11 +656,6 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], cache->num = cache_num - num_ch; }
- if (buf_hdr) { - for (i = 0; i < num_ch; i++) - buf_hdr[i] = buf_hdl_to_hdr(buf[i]); - } - return num_ch + num_deq; }
commit 63ef9b3714c9410dd1b5a55e3bd50de49f23dfcb Author: Christophe Milard christophe.milard@linaro.org Date: Tue Nov 8 10:49:30 2016 +0100
doc: shm: defining behaviour when blocks have same name
Defining the reserve and lookup behaviour when multiple blocks are reserved using the same name.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
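A short illustration of the documented behaviour (block name, size and alignment are arbitrary): both reserves succeed, and a lookup by the shared name may return either handle.

#include <odp_api.h>

static void same_name_example(void)
{
	odp_shm_t a = odp_shm_reserve("my_block", 4096, 128, 0);
	odp_shm_t b = odp_shm_reserve("my_block", 4096, 128, 0);

	/* Returns the handle of any of the blocks reserved as "my_block" */
	odp_shm_t found = odp_shm_lookup("my_block");

	(void)found;
	odp_shm_free(a);
	odp_shm_free(b);
}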
diff --git a/doc/users-guide/users-guide.adoc b/doc/users-guide/users-guide.adoc index 078dd7c..9a427fa 100755 --- a/doc/users-guide/users-guide.adoc +++ b/doc/users-guide/users-guide.adoc @@ -594,7 +594,9 @@ resource. Blocks of shared memory can be created using the `odp_shm_reserve()` API call. The call expects a shared memory block name, a block size, an alignment requirement, and optional flags as parameters. It returns a `odp_shm_t` -handle. The size and alignment requirement are given in bytes. +handle. The size and alignment requirement are given in bytes. The provided +name does not have to be unique, i.e. a given name can be used multiple times, +when reserving different blocks.
.creating a block of shared memory [source,c] @@ -670,7 +672,9 @@ block is to use the `odp_shm_lookup()` API function call. This nevertheless requires the calling ODP thread to provide the name of the shared memory block: `odp_shm_lookup()` will return `ODP_SHM_INVALID` if no shared memory block -with the provided name is known by ODP. +with the provided name is known by ODP. When multiple blocks were reserved +using the same name, the lookup function will return the handle of any +of these blocks.
.retrieving a block handle and address from another ODP task [source,c]
commit 4ee154864b47712a45cfdb23ea6c22b46bfb1abf Author: Christophe Milard christophe.milard@linaro.org Date: Tue Nov 8 10:49:29 2016 +0100
test: api: shm: test using the same block name multiple times
Make sure that many memory blocks can be created with the same name.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/test/common_plat/validation/api/shmem/shmem.c b/test/common_plat/validation/api/shmem/shmem.c index 6ea92d9..0e757a7 100644 --- a/test/common_plat/validation/api/shmem/shmem.c +++ b/test/common_plat/validation/api/shmem/shmem.c @@ -111,6 +111,7 @@ void shmem_test_basic(void) { pthrd_arg thrdarg; odp_shm_t shm; + odp_shm_t shm2; shared_test_data_t *shared_test_data; odp_cpumask_t unused;
@@ -120,7 +121,15 @@ void shmem_test_basic(void) CU_ASSERT(odp_shm_to_u64(shm) != odp_shm_to_u64(ODP_SHM_INVALID));
+ /* also check that another reserve with same name is accepted: */ + shm2 = odp_shm_reserve(MEM_NAME, + sizeof(shared_test_data_t), ALIGN_SIZE, 0); + CU_ASSERT(ODP_SHM_INVALID != shm2); + CU_ASSERT(odp_shm_to_u64(shm2) != + odp_shm_to_u64(ODP_SHM_INVALID)); + CU_ASSERT(0 == odp_shm_free(shm)); + CU_ASSERT(0 == odp_shm_free(shm2)); CU_ASSERT(ODP_SHM_INVALID == odp_shm_lookup(MEM_NAME));
shm = odp_shm_reserve(MEM_NAME,
commit 552e46339939933ee7ed305f1dda82ead362ece9 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:33 2016 +0100
doc: updating docs for the shm interface extension
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
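To make the ODP_SHM_SINGLE_VA paragraph concrete, a small sketch (block name and size are arbitrary): with the flag set, odp_shm_addr() returns the same value in every ODP thread that maps the block, whether or not the threads share a virtual address space.

#include <odp_api.h>

static void *reserve_shared_state(void)
{
	odp_shm_t shm = odp_shm_reserve("shared_state", 1 << 16, 0,
					ODP_SHM_SINGLE_VA);

	/* Identical value in all ODP threads that look this block up */
	return odp_shm_addr(shm);
}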
diff --git a/doc/users-guide/users-guide.adoc b/doc/users-guide/users-guide.adoc index 62f5833..078dd7c 100755 --- a/doc/users-guide/users-guide.adoc +++ b/doc/users-guide/users-guide.adoc @@ -649,13 +649,19 @@ mapping the shared memory block. There is no fragmentation. By default ODP threads are assumed to behave as cache coherent systems: Any change performed on a shared memory block is guaranteed to eventually become visible to other ODP threads sharing this memory block. -(this behaviour may be altered by flags to `odp_shm_reserve()` in the future). Nevertheless, there is no implicit memory barrier associated with any action on shared memories: *When* a change performed by an ODP thread becomes visible to another ODP thread is not known: An application using shared memory blocks has to use some memory barrier provided by ODP to guarantee shared data validity between ODP threads.
+The virtual address at which a given memory block is mapped in different ODP +threads may differ from ODP thread to ODP thread, if ODP threads have separate +virtual spaces (for instance if ODP threads are implemented as processes). +However, the ODP_SHM_SINGLE_VA flag can be used at `odp_shm_reserve()` time +to guarantee address uniqueness in all ODP threads, regardless of their +implementation or creation time. + === Lookup by name As mentioned, shared memory handles can be sent from ODP threads to ODP threads using any IPC mechanism, and then the block address retrieved. @@ -698,9 +704,49 @@ if (odp_shm_free(shm) != 0) { } ----
+=== sharing memory with the external world +ODP provides ways of sharing memory with entities located outside +ODP instances: + +Sharing a block of memory with an external (non ODP) thread is achieved +by setting the ODP_SHM_PROC flag at `odp_shm_reserve()` time. +How the memory block is retrieved on the Operating System side is +implementation and Operating System dependent. + +Sharing a block of memory with an external ODP instance (running +on the same Operating System) is achieved +by setting the ODP_SHM_EXPORT flag at `odp_shm_reserve()` time. +A block of memory created with this flag in an ODP instance A, can be "mapped" +into a remote ODP instance B (on the same OS) by using the +`odp_shm_import()`, on ODP instance B: + +.sharing memory between ODP instances: instance A +[source,c] +---- +odp_shm_t shmA; +shmA = odp_shm_reserve("memoryA", size, 0, ODP_SHM_EXPORT); +---- + +.sharing memory between ODP instances: instance B +[source,c] +---- +odp_shm_t shmB; +odp_instance_t odpA; + +/* get ODP A instance handle by some OS method */ +odpA = ... + +/* get the shared memory exported by A: +shmB = odp_shm_import("memoryA", odpA, "memoryB", 0, 0); +---- + +Note that the handles shmA and shmB are scoped by each ODP instance +(you can not use them outside the ODP instance they belong to). +Also note that both ODP instances have to call `odp_shm_free()` when done. + === Memory creation flags The last argument to odp_shm_reserve() is a set of ORed flags. -Two flags are supported: +The following flags are supported:
==== ODP_SHM_PROC When this flag is given, the allocated shared memory will become visible @@ -710,6 +756,12 @@ will be able to access the memory using native (non ODP) OS calls such as Each ODP implementation should provide a description on exactly how this mapping should be done on that specific platform.
+==== ODP_SHM_EXPORT +When this flag is given, the allocated shared memory will become visible +to other ODP instances running on the same OS. +Other ODP instances willing to see this exported memory should use the +`odp_shm_import()` ODP function. + ==== ODP_SHM_SW_ONLY This flag tells ODP that the shared memory will be used by the ODP application software only: no HW (such as DMA, or other accelerator) will ever @@ -719,6 +771,18 @@ implementation), except for `odp_shm_lookup()` and `odp_shm_free()`. ODP implementations may use this flag as a hint for performance optimization, or may as well ignore this flag.
+==== ODP_SHM_SINGLE_VA +This flag is used to guarantee the uniqueness of the address at which +the shared memory is mapped: without this flag, a given memory block may be +mapped at different virtual addresses (assuming the target have virtual +addresses) by different ODP threads. This means that the value returned by +`odp_shm_addr()` would be different in different threads, in this case. +Setting this flag guarantees that all ODP threads sharing this memory +block will see it at the same address (`odp_shm_addr()` would return the +same value on all ODP threads, for a given memory block, in this case) +Note that ODP implementations may have restrictions of the amount of memory +which can be allocated with this flag. + == Queues Queues are the fundamental event sequencing mechanism provided by ODP and all ODP applications make use of them either explicitly or implicitly. Queues are
commit 1d61093f4ea7a9f62cc69e6fdb6fb82b246af817 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:27 2016 +0100
test: api: shmem: new proper tests for shm API
The shmem "sunnydays" tests for the north interface API are replaced with proper tests, testing memory allocation at different times (before and after ODP thread creation, i.e. the tests make sure shmem behaves the same regardless of fork time). The tests also include stress testing that tries to provoke race conditions. The new shmem tests no longer assume pthreads and are runnable in process mode.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
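A condensed view of the stress loop added below, to show the shape of the race provocation. allocate_and_fill() and check_and_free() are hypothetical stand-ins for the reserve/verify/free branches; as in the real test, they move the slot back to STRESS_ALLOC/STRESS_FREE under the same spinlock, and the real code marks a slot busy only in those two branches.

#include <odp_api.h>

/* shared_test_data_t, stress_state_t, STRESS_* come from the test below */
void allocate_and_fill(shared_test_data_t *glob, uint32_t index);
void check_and_free(shared_test_data_t *glob, uint32_t index);

static void stress_loop(shared_test_data_t *glob)
{
	uint32_t iter, index;
	stress_state_t state;
	uint8_t rnd;

	for (iter = 0; iter < STRESS_ITERATION; iter++) {
		odp_random_data(&rnd, 1, 0);
		index = rnd & (STRESS_SIZE - 1);

		odp_spinlock_lock(&glob->stress_lock);
		state = glob->stress[index].state;
		glob->stress[index].state = STRESS_BUSY;
		odp_spinlock_unlock(&glob->stress_lock);

		if (state == STRESS_FREE)
			allocate_and_fill(glob, index);	/* reserve + pattern */
		else if (state == STRESS_ALLOC)
			check_and_free(glob, index);	/* verify + free */
		/* STRESS_BUSY: another thread owns this slot, skip it */
	}
}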
diff --git a/test/common_plat/validation/api/shmem/shmem.c b/test/common_plat/validation/api/shmem/shmem.c index cbff673..6ea92d9 100644 --- a/test/common_plat/validation/api/shmem/shmem.c +++ b/test/common_plat/validation/api/shmem/shmem.c @@ -7,82 +7,703 @@ #include <odp_api.h> #include <odp_cunit_common.h> #include "shmem.h" +#include <stdlib.h>
-#define ALIGE_SIZE (128) -#define TESTNAME "cunit_test_shared_data" +#define ALIGN_SIZE (128) +#define MEM_NAME "test_shmem" +#define NAME_LEN (sizeof(MEM_NAME) + 20) #define TEST_SHARE_FOO (0xf0f0f0f0) #define TEST_SHARE_BAR (0xf0f0f0f) +#define SMALL_MEM 10 +#define MEDIUM_MEM 4096 +#define BIG_MEM 65536 +#define STRESS_SIZE 32 /* power of 2 and <=256 */ +#define STRESS_RANDOM_SZ 5 +#define STRESS_ITERATION 5000
-static odp_barrier_t test_barrier; +typedef enum { + STRESS_FREE, /* entry is free and can be allocated */ + STRESS_BUSY, /* entry is being processed: don't touch */ + STRESS_ALLOC /* entry is allocated and can be freed */ +} stress_state_t;
-static int run_shm_thread(void *arg ODP_UNUSED) +typedef struct { + stress_state_t state; + odp_shm_t shm; + char name[NAME_LEN]; + void *address; + uint32_t flags; + uint32_t size; + uint64_t align; + uint8_t data_val; +} stress_data_t; + +typedef struct { + odp_barrier_t test_barrier1; + odp_barrier_t test_barrier2; + odp_barrier_t test_barrier3; + odp_barrier_t test_barrier4; + uint32_t foo; + uint32_t bar; + odp_atomic_u32_t index; + uint32_t nb_threads; + odp_shm_t shm[MAX_WORKERS]; + void *address[MAX_WORKERS]; + char name[MAX_WORKERS][NAME_LEN]; + odp_spinlock_t stress_lock; + stress_data_t stress[STRESS_SIZE]; +} shared_test_data_t; + +/* memory stuff expected to fit in a single page */ +typedef struct { + int data[SMALL_MEM]; +} shared_test_data_small_t; + +/* memory stuff expected to fit in a huge page */ +typedef struct { + int data[MEDIUM_MEM]; +} shared_test_data_medium_t; + +/* memory stuff expected to fit in many huge pages */ +typedef struct { + int data[BIG_MEM]; +} shared_test_data_big_t; + +/* + * thread part for the shmem_test_basic test + */ +static int run_test_basic_thread(void *arg ODP_UNUSED) { odp_shm_info_t info; odp_shm_t shm; - test_shared_data_t *test_shared_data; + shared_test_data_t *shared_test_data; int thr;
- odp_barrier_wait(&test_barrier); thr = odp_thread_id(); printf("Thread %i starts\n", thr);
- shm = odp_shm_lookup(TESTNAME); + shm = odp_shm_lookup(MEM_NAME); CU_ASSERT(ODP_SHM_INVALID != shm); - test_shared_data = odp_shm_addr(shm); - CU_ASSERT(TEST_SHARE_FOO == test_shared_data->foo); - CU_ASSERT(TEST_SHARE_BAR == test_shared_data->bar); + shared_test_data = odp_shm_addr(shm); + CU_ASSERT(NULL != shared_test_data); + + odp_barrier_wait(&shared_test_data->test_barrier1); + odp_shm_print_all(); + CU_ASSERT(TEST_SHARE_FOO == shared_test_data->foo); + CU_ASSERT(TEST_SHARE_BAR == shared_test_data->bar); CU_ASSERT(0 == odp_shm_info(shm, &info)); - CU_ASSERT(0 == strcmp(TESTNAME, info.name)); + CU_ASSERT(0 == strcmp(MEM_NAME, info.name)); CU_ASSERT(0 == info.flags); - CU_ASSERT(test_shared_data == info.addr); - CU_ASSERT(sizeof(test_shared_data_t) <= info.size); -#ifdef MAP_HUGETLB - CU_ASSERT(odp_sys_huge_page_size() == info.page_size); -#else - CU_ASSERT(odp_sys_page_size() == info.page_size); -#endif + CU_ASSERT(shared_test_data == info.addr); + CU_ASSERT(sizeof(shared_test_data_t) <= info.size); + CU_ASSERT((info.page_size == odp_sys_huge_page_size()) || + (info.page_size == odp_sys_page_size())) odp_shm_print_all();
fflush(stdout); return CU_get_number_of_failures(); }
-void shmem_test_odp_shm_sunnyday(void) +/* + * test basic things: shmem creation, info, share, and free + */ +void shmem_test_basic(void) { pthrd_arg thrdarg; odp_shm_t shm; - test_shared_data_t *test_shared_data; + shared_test_data_t *shared_test_data; odp_cpumask_t unused;
- shm = odp_shm_reserve(TESTNAME, - sizeof(test_shared_data_t), ALIGE_SIZE, 0); + shm = odp_shm_reserve(MEM_NAME, + sizeof(shared_test_data_t), ALIGN_SIZE, 0); CU_ASSERT(ODP_SHM_INVALID != shm); - CU_ASSERT(odp_shm_to_u64(shm) != odp_shm_to_u64(ODP_SHM_INVALID)); + CU_ASSERT(odp_shm_to_u64(shm) != + odp_shm_to_u64(ODP_SHM_INVALID));
CU_ASSERT(0 == odp_shm_free(shm)); - CU_ASSERT(ODP_SHM_INVALID == odp_shm_lookup(TESTNAME)); + CU_ASSERT(ODP_SHM_INVALID == odp_shm_lookup(MEM_NAME));
- shm = odp_shm_reserve(TESTNAME, - sizeof(test_shared_data_t), ALIGE_SIZE, 0); + shm = odp_shm_reserve(MEM_NAME, + sizeof(shared_test_data_t), ALIGN_SIZE, 0); CU_ASSERT(ODP_SHM_INVALID != shm);
- test_shared_data = odp_shm_addr(shm); - CU_ASSERT_FATAL(NULL != test_shared_data); - test_shared_data->foo = TEST_SHARE_FOO; - test_shared_data->bar = TEST_SHARE_BAR; + shared_test_data = odp_shm_addr(shm); + CU_ASSERT_FATAL(NULL != shared_test_data); + shared_test_data->foo = TEST_SHARE_FOO; + shared_test_data->bar = TEST_SHARE_BAR;
thrdarg.numthrds = odp_cpumask_default_worker(&unused, 0);
if (thrdarg.numthrds > MAX_WORKERS) thrdarg.numthrds = MAX_WORKERS;
- odp_barrier_init(&test_barrier, thrdarg.numthrds); - odp_cunit_thread_create(run_shm_thread, &thrdarg); + odp_barrier_init(&shared_test_data->test_barrier1, thrdarg.numthrds); + odp_cunit_thread_create(run_test_basic_thread, &thrdarg); CU_ASSERT(odp_cunit_thread_exit(&thrdarg) >= 0); + + CU_ASSERT(0 == odp_shm_free(shm)); +} + +/* + * thread part for the shmem_test_reserve_after_fork + */ +static int run_test_reserve_after_fork(void *arg ODP_UNUSED) +{ + odp_shm_t shm; + shared_test_data_t *glob_data; + int thr; + int thr_index; + int size; + shared_test_data_small_t *pattern_small; + shared_test_data_medium_t *pattern_medium; + shared_test_data_big_t *pattern_big; + int i; + + thr = odp_thread_id(); + printf("Thread %i starts\n", thr); + + shm = odp_shm_lookup(MEM_NAME); + glob_data = odp_shm_addr(shm); + + /* + * odp_thread_id are not guaranteed to be consecutive, so we create + * a consecutive ID + */ + thr_index = odp_atomic_fetch_inc_u32(&glob_data->index); + + /* allocate some memory (of different sizes) and fill with pattern */ + snprintf(glob_data->name[thr_index], NAME_LEN, "%s-%09d", + MEM_NAME, thr_index); + switch (thr_index % 3) { + case 0: + size = sizeof(shared_test_data_small_t); + shm = odp_shm_reserve(glob_data->name[thr_index], size, 0, 0); + CU_ASSERT(ODP_SHM_INVALID != shm); + glob_data->shm[thr_index] = shm; + pattern_small = odp_shm_addr(shm); + CU_ASSERT_PTR_NOT_NULL(pattern_small); + for (i = 0; i < SMALL_MEM; i++) + pattern_small->data[i] = i; + break; + case 1: + size = sizeof(shared_test_data_medium_t); + shm = odp_shm_reserve(glob_data->name[thr_index], size, 0, 0); + CU_ASSERT(ODP_SHM_INVALID != shm); + glob_data->shm[thr_index] = shm; + pattern_medium = odp_shm_addr(shm); + CU_ASSERT_PTR_NOT_NULL(pattern_medium); + for (i = 0; i < MEDIUM_MEM; i++) + pattern_medium->data[i] = (i << 2); + break; + case 2: + size = sizeof(shared_test_data_big_t); + shm = odp_shm_reserve(glob_data->name[thr_index], size, 0, 0); + CU_ASSERT(ODP_SHM_INVALID != shm); + glob_data->shm[thr_index] = shm; + pattern_big = odp_shm_addr(shm); + CU_ASSERT_PTR_NOT_NULL(pattern_big); + for (i = 0; i < BIG_MEM; i++) + pattern_big->data[i] = (i >> 2); + break; + } + + /* print block address */ + printf("In thread: Block index: %d mapped at %lx\n", + thr_index, (long int)odp_shm_addr(shm)); + + odp_barrier_wait(&glob_data->test_barrier1); + odp_barrier_wait(&glob_data->test_barrier2); + + fflush(stdout); + return CU_get_number_of_failures(); +} + +/* + * test sharing memory reserved after odp_thread creation (e.g. 
fork()): + */ +void shmem_test_reserve_after_fork(void) +{ + pthrd_arg thrdarg; + odp_shm_t shm; + odp_shm_t thr_shm; + shared_test_data_t *glob_data; + odp_cpumask_t unused; + int thr_index; + int i; + void *address; + shared_test_data_small_t *pattern_small; + shared_test_data_medium_t *pattern_medium; + shared_test_data_big_t *pattern_big; + + shm = odp_shm_reserve(MEM_NAME, sizeof(shared_test_data_t), 0, 0); + CU_ASSERT(ODP_SHM_INVALID != shm); + glob_data = odp_shm_addr(shm); + CU_ASSERT_PTR_NOT_NULL(glob_data); + + thrdarg.numthrds = odp_cpumask_default_worker(&unused, 0); + if (thrdarg.numthrds > MAX_WORKERS) + thrdarg.numthrds = MAX_WORKERS; + + odp_barrier_init(&glob_data->test_barrier1, thrdarg.numthrds + 1); + odp_barrier_init(&glob_data->test_barrier2, thrdarg.numthrds + 1); + odp_atomic_store_u32(&glob_data->index, 0); + + odp_cunit_thread_create(run_test_reserve_after_fork, &thrdarg); + + /* wait until all threads have made their shm_reserve: */ + odp_barrier_wait(&glob_data->test_barrier1); + + /* perform a lookup of all memories: */ + for (thr_index = 0; thr_index < thrdarg.numthrds; thr_index++) { + thr_shm = odp_shm_lookup(glob_data->name[thr_index]); + CU_ASSERT(thr_shm == glob_data->shm[thr_index]); + } + + /* check that the patterns are correct: */ + for (thr_index = 0; thr_index < thrdarg.numthrds; thr_index++) { + switch (thr_index % 3) { + case 0: + pattern_small = + odp_shm_addr(glob_data->shm[thr_index]); + CU_ASSERT_PTR_NOT_NULL(pattern_small); + for (i = 0; i < SMALL_MEM; i++) + CU_ASSERT(pattern_small->data[i] == i); + break; + case 1: + pattern_medium = + odp_shm_addr(glob_data->shm[thr_index]); + CU_ASSERT_PTR_NOT_NULL(pattern_medium); + for (i = 0; i < MEDIUM_MEM; i++) + CU_ASSERT(pattern_medium->data[i] == (i << 2)); + break; + case 2: + pattern_big = + odp_shm_addr(glob_data->shm[thr_index]); + CU_ASSERT_PTR_NOT_NULL(pattern_big); + for (i = 0; i < BIG_MEM; i++) + CU_ASSERT(pattern_big->data[i] == (i >> 2)); + break; + } + } + + /* + * print the mapping address of the blocks + */ + for (thr_index = 0; thr_index < thrdarg.numthrds; thr_index++) { + address = odp_shm_addr(glob_data->shm[thr_index]); + printf("In main Block index: %d mapped at %lx\n", + thr_index, (long int)address); + } + + /* unblock the threads and let them terminate (no free is done): */ + odp_barrier_wait(&glob_data->test_barrier2); + + /* at the same time, (race),free of all memories: */ + for (thr_index = 0; thr_index < thrdarg.numthrds; thr_index++) { + thr_shm = glob_data->shm[thr_index]; + CU_ASSERT(odp_shm_free(thr_shm) == 0); + } + + /* wait for all thread endings: */ + CU_ASSERT(odp_cunit_thread_exit(&thrdarg) >= 0); + + /* just glob_data should remain: */ + + CU_ASSERT(0 == odp_shm_free(shm)); +} + +/* + * thread part for the shmem_test_singleva_after_fork + */ +static int run_test_singleva_after_fork(void *arg ODP_UNUSED) +{ + odp_shm_t shm; + shared_test_data_t *glob_data; + int thr; + int thr_index; + int size; + shared_test_data_small_t *pattern_small; + shared_test_data_medium_t *pattern_medium; + shared_test_data_big_t *pattern_big; + uint32_t i; + int ret; + + thr = odp_thread_id(); + printf("Thread %i starts\n", thr); + + shm = odp_shm_lookup(MEM_NAME); + glob_data = odp_shm_addr(shm); + + /* + * odp_thread_id are not guaranteed to be consecutive, so we create + * a consecutive ID + */ + thr_index = odp_atomic_fetch_inc_u32(&glob_data->index); + + /* allocate some memory (of different sizes) and fill with pattern */ + snprintf(glob_data->name[thr_index], NAME_LEN, 
"%s-%09d", + MEM_NAME, thr_index); + switch (thr_index % 3) { + case 0: + size = sizeof(shared_test_data_small_t); + shm = odp_shm_reserve(glob_data->name[thr_index], size, + 0, ODP_SHM_SINGLE_VA); + CU_ASSERT(ODP_SHM_INVALID != shm); + glob_data->shm[thr_index] = shm; + pattern_small = odp_shm_addr(shm); + CU_ASSERT_PTR_NOT_NULL(pattern_small); + glob_data->address[thr_index] = (void *)pattern_small; + for (i = 0; i < SMALL_MEM; i++) + pattern_small->data[i] = i; + break; + case 1: + size = sizeof(shared_test_data_medium_t); + shm = odp_shm_reserve(glob_data->name[thr_index], size, + 0, ODP_SHM_SINGLE_VA); + CU_ASSERT(ODP_SHM_INVALID != shm); + glob_data->shm[thr_index] = shm; + pattern_medium = odp_shm_addr(shm); + CU_ASSERT_PTR_NOT_NULL(pattern_medium); + glob_data->address[thr_index] = (void *)pattern_medium; + for (i = 0; i < MEDIUM_MEM; i++) + pattern_medium->data[i] = (i << 2); + break; + case 2: + size = sizeof(shared_test_data_big_t); + shm = odp_shm_reserve(glob_data->name[thr_index], size, + 0, ODP_SHM_SINGLE_VA); + CU_ASSERT(ODP_SHM_INVALID != shm); + glob_data->shm[thr_index] = shm; + pattern_big = odp_shm_addr(shm); + CU_ASSERT_PTR_NOT_NULL(pattern_big); + glob_data->address[thr_index] = (void *)pattern_big; + for (i = 0; i < BIG_MEM; i++) + pattern_big->data[i] = (i >> 2); + break; + } + + /* print block address */ + printf("In thread: Block index: %d mapped at %lx\n", + thr_index, (long int)odp_shm_addr(shm)); + + odp_barrier_wait(&glob_data->test_barrier1); + odp_barrier_wait(&glob_data->test_barrier2); + + /* map each-other block, checking common address: */ + for (i = 0; i < glob_data->nb_threads; i++) { + shm = odp_shm_lookup(glob_data->name[i]); + CU_ASSERT(shm == glob_data->shm[i]); + CU_ASSERT(odp_shm_addr(shm) == glob_data->address[i]); + } + + /* wait for main control task and free the allocated block */ + odp_barrier_wait(&glob_data->test_barrier3); + odp_barrier_wait(&glob_data->test_barrier4); + ret = odp_shm_free(glob_data->shm[thr_index]); + CU_ASSERT(ret == 0); + + fflush(stdout); + return CU_get_number_of_failures(); +} + +/* + * test sharing memory reserved after odp_thread creation (e.g. fork()): + * with single VA flag. 
+ */ +void shmem_test_singleva_after_fork(void) +{ + pthrd_arg thrdarg; + odp_shm_t shm; + odp_shm_t thr_shm; + shared_test_data_t *glob_data; + odp_cpumask_t unused; + int thr_index; + int i; + void *address; + shared_test_data_small_t *pattern_small; + shared_test_data_medium_t *pattern_medium; + shared_test_data_big_t *pattern_big; + + shm = odp_shm_reserve(MEM_NAME, sizeof(shared_test_data_t), + 0, 0); + CU_ASSERT(ODP_SHM_INVALID != shm); + glob_data = odp_shm_addr(shm); + CU_ASSERT_PTR_NOT_NULL(glob_data); + + thrdarg.numthrds = odp_cpumask_default_worker(&unused, 0); + if (thrdarg.numthrds > MAX_WORKERS) + thrdarg.numthrds = MAX_WORKERS; + + glob_data->nb_threads = thrdarg.numthrds; + odp_barrier_init(&glob_data->test_barrier1, thrdarg.numthrds + 1); + odp_barrier_init(&glob_data->test_barrier2, thrdarg.numthrds + 1); + odp_barrier_init(&glob_data->test_barrier3, thrdarg.numthrds + 1); + odp_barrier_init(&glob_data->test_barrier4, thrdarg.numthrds + 1); + odp_atomic_store_u32(&glob_data->index, 0); + + odp_cunit_thread_create(run_test_singleva_after_fork, &thrdarg); + + /* wait until all threads have made their shm_reserve: */ + odp_barrier_wait(&glob_data->test_barrier1); + + /* perform a lookup of all memories: */ + for (thr_index = 0; thr_index < thrdarg.numthrds; thr_index++) { + thr_shm = odp_shm_lookup(glob_data->name[thr_index]); + CU_ASSERT(thr_shm == glob_data->shm[thr_index]); + } + + /* check that the patterns are correct: */ + for (thr_index = 0; thr_index < thrdarg.numthrds; thr_index++) { + switch (thr_index % 3) { + case 0: + pattern_small = + odp_shm_addr(glob_data->shm[thr_index]); + CU_ASSERT_PTR_NOT_NULL(pattern_small); + for (i = 0; i < SMALL_MEM; i++) + CU_ASSERT(pattern_small->data[i] == i); + break; + case 1: + pattern_medium = + odp_shm_addr(glob_data->shm[thr_index]); + CU_ASSERT_PTR_NOT_NULL(pattern_medium); + for (i = 0; i < MEDIUM_MEM; i++) + CU_ASSERT(pattern_medium->data[i] == (i << 2)); + break; + case 2: + pattern_big = + odp_shm_addr(glob_data->shm[thr_index]); + CU_ASSERT_PTR_NOT_NULL(pattern_big); + for (i = 0; i < BIG_MEM; i++) + CU_ASSERT(pattern_big->data[i] == (i >> 2)); + break; + } + } + + /* + * check that the mapping address is common to all (SINGLE_VA): + */ + for (thr_index = 0; thr_index < thrdarg.numthrds; thr_index++) { + address = odp_shm_addr(glob_data->shm[thr_index]); + CU_ASSERT(glob_data->address[thr_index] == address); + } + + /* unblock the threads and let them map each-other blocks: */ + odp_barrier_wait(&glob_data->test_barrier2); + + /* then check mem status */ + odp_barrier_wait(&glob_data->test_barrier3); + + /* unblock the threads and let them free all thread blocks: */ + odp_barrier_wait(&glob_data->test_barrier4); + + /* wait for all thread endings: */ + CU_ASSERT(odp_cunit_thread_exit(&thrdarg) >= 0); + + /* just glob_data should remain: */ + + CU_ASSERT(0 == odp_shm_free(shm)); +} + +/* + * thread part for the shmem_test_stress + */ +static int run_test_stress(void *arg ODP_UNUSED) +{ + odp_shm_t shm; + uint8_t *address; + shared_test_data_t *glob_data; + uint8_t random_bytes[STRESS_RANDOM_SZ]; + uint32_t index; + uint32_t size; + uint64_t align; + uint32_t flags; + uint8_t data; + uint32_t iter; + uint32_t i; + + shm = odp_shm_lookup(MEM_NAME); + glob_data = odp_shm_addr(shm); + CU_ASSERT_PTR_NOT_NULL(glob_data); + + /* wait for general GO! 
*/ + odp_barrier_wait(&glob_data->test_barrier1); + + /* + * at each iteration: pick up a random index for + * glob_data->stress[index]: If the entry is free, allocated mem + * randomly. If it is already allocated, make checks and free it: + * Note that different tread can allocate or free a given block + */ + for (iter = 0; iter < STRESS_ITERATION; iter++) { + /* get 4 random bytes from which index, size ,align, flags + * and data will be derived: + */ + odp_random_data(random_bytes, STRESS_RANDOM_SZ, 0); + index = random_bytes[0] & (STRESS_SIZE - 1); + + odp_spinlock_lock(&glob_data->stress_lock); + + switch (glob_data->stress[index].state) { + case STRESS_FREE: + /* allocated a new block for this entry */ + + glob_data->stress[index].state = STRESS_BUSY; + odp_spinlock_unlock(&glob_data->stress_lock); + + size = (random_bytes[1] + 1) << 6; /* up to 16Kb */ + /* we just play with the VA flag. randomly setting + * the mlock flag may exceed user ulimit -l + */ + flags = random_bytes[2] & ODP_SHM_SINGLE_VA; + align = (random_bytes[3] + 1) << 6;/* up to 16Kb */ + data = random_bytes[4]; + + snprintf(glob_data->stress[index].name, NAME_LEN, + "%s-%09d", MEM_NAME, index); + shm = odp_shm_reserve(glob_data->stress[index].name, + size, align, flags); + glob_data->stress[index].shm = shm; + if (shm == ODP_SHM_INVALID) { /* out of mem ? */ + odp_spinlock_lock(&glob_data->stress_lock); + glob_data->stress[index].state = STRESS_ALLOC; + odp_spinlock_unlock(&glob_data->stress_lock); + continue; + } + + address = odp_shm_addr(shm); + CU_ASSERT_PTR_NOT_NULL(address); + glob_data->stress[index].address = address; + glob_data->stress[index].flags = flags; + glob_data->stress[index].size = size; + glob_data->stress[index].align = align; + glob_data->stress[index].data_val = data; + + /* write some data: writing each byte would be a + * waste of time: just make sure each page is reached */ + for (i = 0; i < size; i += 256) + address[i] = (data++) & 0xFF; + odp_spinlock_lock(&glob_data->stress_lock); + glob_data->stress[index].state = STRESS_ALLOC; + odp_spinlock_unlock(&glob_data->stress_lock); + + break; + + case STRESS_ALLOC: + /* free the block for this entry */ + + glob_data->stress[index].state = STRESS_BUSY; + odp_spinlock_unlock(&glob_data->stress_lock); + shm = glob_data->stress[index].shm; + + if (shm == ODP_SHM_INVALID) { /* out of mem ? 
*/ + odp_spinlock_lock(&glob_data->stress_lock); + glob_data->stress[index].state = STRESS_FREE; + odp_spinlock_unlock(&glob_data->stress_lock); + continue; + } + + CU_ASSERT(odp_shm_lookup(glob_data->stress[index].name) + != 0); + + address = odp_shm_addr(shm); + CU_ASSERT_PTR_NOT_NULL(address); + + align = glob_data->stress[index].align; + if (align) { + align = glob_data->stress[index].align; + CU_ASSERT(((uintptr_t)address & (align - 1)) + == 0) + } + + flags = glob_data->stress[index].flags; + if (flags & ODP_SHM_SINGLE_VA) + CU_ASSERT(glob_data->stress[index].address == + address) + + /* check that data is reachable and correct: */ + data = glob_data->stress[index].data_val; + size = glob_data->stress[index].size; + for (i = 0; i < size; i += 256) { + CU_ASSERT(address[i] == (data & 0xFF)); + data++; + } + + CU_ASSERT(!odp_shm_free(glob_data->stress[index].shm)); + + odp_spinlock_lock(&glob_data->stress_lock); + glob_data->stress[index].state = STRESS_FREE; + odp_spinlock_unlock(&glob_data->stress_lock); + + break; + + case STRESS_BUSY: + default: + odp_spinlock_unlock(&glob_data->stress_lock); + break; + } + } + + fflush(stdout); + return CU_get_number_of_failures(); +} + +/* + * stress tests + */ +void shmem_test_stress(void) +{ + pthrd_arg thrdarg; + odp_shm_t shm; + odp_shm_t globshm; + shared_test_data_t *glob_data; + odp_cpumask_t unused; + uint32_t i; + + globshm = odp_shm_reserve(MEM_NAME, sizeof(shared_test_data_t), + 0, 0); + CU_ASSERT(ODP_SHM_INVALID != globshm); + glob_data = odp_shm_addr(globshm); + CU_ASSERT_PTR_NOT_NULL(glob_data); + + thrdarg.numthrds = odp_cpumask_default_worker(&unused, 0); + if (thrdarg.numthrds > MAX_WORKERS) + thrdarg.numthrds = MAX_WORKERS; + + glob_data->nb_threads = thrdarg.numthrds; + odp_barrier_init(&glob_data->test_barrier1, thrdarg.numthrds); + odp_spinlock_init(&glob_data->stress_lock); + + /* before starting the threads, mark all entries as free: */ + for (i = 0; i < STRESS_SIZE; i++) + glob_data->stress[i].state = STRESS_FREE; + + /* create threads */ + odp_cunit_thread_create(run_test_stress, &thrdarg); + + /* wait for all thread endings: */ + CU_ASSERT(odp_cunit_thread_exit(&thrdarg) >= 0); + + /* release left overs: */ + for (i = 0; i < STRESS_SIZE; i++) { + shm = glob_data->stress[i].shm; + if ((glob_data->stress[i].state == STRESS_ALLOC) && + (glob_data->stress[i].shm != ODP_SHM_INVALID)) { + CU_ASSERT(odp_shm_lookup(glob_data->stress[i].name) == + shm); + CU_ASSERT(!odp_shm_free(shm)); + } + } + + CU_ASSERT(0 == odp_shm_free(globshm)); + + /* check that no memory is left over: */ }
odp_testinfo_t shmem_suite[] = { - ODP_TEST_INFO(shmem_test_odp_shm_sunnyday), + ODP_TEST_INFO(shmem_test_basic), + ODP_TEST_INFO(shmem_test_reserve_after_fork), + ODP_TEST_INFO(shmem_test_singleva_after_fork), + ODP_TEST_INFO(shmem_test_stress), ODP_TEST_INFO_NULL, };
diff --git a/test/common_plat/validation/api/shmem/shmem.h b/test/common_plat/validation/api/shmem/shmem.h index a5893d9..092aa80 100644 --- a/test/common_plat/validation/api/shmem/shmem.h +++ b/test/common_plat/validation/api/shmem/shmem.h @@ -10,7 +10,10 @@ #include <odp_cunit_common.h>
/* test functions: */ -void shmem_test_odp_shm_sunnyday(void); +void shmem_test_basic(void); +void shmem_test_reserve_after_fork(void); +void shmem_test_singleva_after_fork(void); +void shmem_test_stress(void);
/* test arrays: */ extern odp_testinfo_t shmem_suite[];
commit 6d78d33df6d33ebe1b933383c4858df5e9f7f33b Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:25 2016 +0100
api: shm: add flags to shm_reserve and function to find external mem
The ODP_SHM_SINGLE_VA flag is created: when set (at odp_shm_reserve()), this flag guarantees that all ODP threads sharing this memory block will see the block at the same address (regardless of ODP thread type, pthread vs. process, or fork time).
The flag ODP_SHM_EXPORT is added: when passed at odp_shm_reserve() time, the memory block becomes visible to other ODP instances. The function odp_shm_import() is added: it creates a handle for accessing memory blocks exported by other ODP instances (with the ODP_SHM_EXPORT flag).
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/shared_memory.h b/include/odp/api/spec/shared_memory.h index 8c76807..885751d 100644 --- a/include/odp/api/spec/shared_memory.h +++ b/include/odp/api/spec/shared_memory.h @@ -14,6 +14,7 @@ #ifndef ODP_API_SHARED_MEMORY_H_ #define ODP_API_SHARED_MEMORY_H_ #include <odp/visibility_begin.h> +#include <odp/api/init.h>
#ifdef __cplusplus extern "C" { @@ -43,12 +44,25 @@ extern "C" { #define ODP_SHM_NAME_LEN 32
/* - * Shared memory flags + * Shared memory flags: */ - -/* Share level */ -#define ODP_SHM_SW_ONLY 0x1 /**< Application SW only, no HW access */ -#define ODP_SHM_PROC 0x2 /**< Share with external processes */ +#define ODP_SHM_SW_ONLY 0x1 /**< Application SW only, no HW access */ +#define ODP_SHM_PROC 0x2 /**< Share with external processes */ +/** + * Single virtual address + * + * When set, this flag guarantees that all ODP threads sharing this + * memory block will see the block at the same address - regardless + * of ODP thread type (e.g. pthread vs. process (or fork process time)). + */ +#define ODP_SHM_SINGLE_VA 0x4 +/** + * Export memory + * + * When set, the memory block becomes visible to other ODP instances + * through odp_shm_import(). + */ +#define ODP_SHM_EXPORT 0x08
/** * Shared memory block info @@ -135,6 +149,28 @@ int odp_shm_free(odp_shm_t shm); */ odp_shm_t odp_shm_lookup(const char *name);
+/** + * Import a block of shared memory, exported by another ODP instance + * + * This call creates a new handle for accessing a shared memory block created + * (with ODP_SHM_EXPORT flag) by another ODP instance. An instance may have + * only a single handle to the same block. Application must not access the + * block after freeing the handle. When an imported handle is freed, only + * the calling instance is affected. The exported block may be freed only + * after all other instances have stopped accessing the block. + * + * @param remote_name Name of the block, in the remote ODP instance + * @param odp_inst Remote ODP instance, as returned by odp_init_global() + * @param local_name Name given to the block, in the local ODP instance + * May be NULL, if the application doesn't need a name + * (for a lookup). + * + * @return A handle to access a block exported by another ODP instance. + * @retval ODP_SHM_INVALID on failure + */ +odp_shm_t odp_shm_import(const char *remote_name, + odp_instance_t odp_inst, + const char *local_name);
/** * Shared memory block address
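For clarity, a minimal usage sketch of the export/import pair introduced above. It assumes both ODP instances are already initialized and that the exporter's odp_instance_t (remote_inst below) is communicated to the importer out of band; block names and sizes are illustrative only, not part of the patch.

    #include <odp_api.h>

    /* Exporting instance: reserve a 64 kB block visible to other instances */
    static odp_shm_t export_block(void)
    {
            return odp_shm_reserve("lookup_table", 64 * 1024, 0, ODP_SHM_EXPORT);
    }

    /* Importing instance: remote_inst is the exporter's odp_instance_t,
     * obtained out of band (hypothetical mechanism, not defined by ODP). */
    static void *import_block(odp_instance_t remote_inst)
    {
            odp_shm_t shm;

            shm = odp_shm_import("lookup_table", remote_inst, "lookup_table_local");
            if (shm == ODP_SHM_INVALID)
                    return NULL;

            /* Freeing this handle later affects only the importing instance */
            return odp_shm_addr(shm);
    }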
commit 4cf84d158adc7e84bed69ceac34bbbb3dee9587e Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:24 2016 +0100
linux-gen: push internal flag definition
File platform/linux-generic/include/odp_shm_internal.h exposes shm internals used by IPC. The bits used by the internal flags are moved to make room for more "official" values. This file should really be removed once _ishm is used, but as long as we have the current IPC, removing it would break compilation.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/include/odp_shm_internal.h b/platform/linux-generic/include/odp_shm_internal.h index 30e60f7..8bd105d 100644 --- a/platform/linux-generic/include/odp_shm_internal.h +++ b/platform/linux-generic/include/odp_shm_internal.h @@ -16,8 +16,8 @@ extern "C" { #define SHM_DEVNAME_MAXLEN (ODP_SHM_NAME_LEN + 16) #define SHM_DEVNAME_FORMAT "/odp-%d-%s" /* /dev/shm/odp-<pid>-<name> */
-#define _ODP_SHM_PROC_NOCREAT 0x4 /**< Do not create shm if not exist */ -#define _ODP_SHM_O_EXCL 0x8 /**< Do not create shm if exist */ +#define _ODP_SHM_PROC_NOCREAT 0x40 /**< Do not create shm if not exist */ +#define _ODP_SHM_O_EXCL 0x80 /**< Do not create shm if exist */
#ifdef __cplusplus }
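Concretely, the move is needed because the new public flags from the shared_memory.h patch above now occupy bits 0x4 and 0x08. The resulting flag space, with values collected from the diffs in this email, looks like this:

    /* Public flags (include/odp/api/spec/shared_memory.h) */
    #define ODP_SHM_SW_ONLY       0x1   /* Application SW only, no HW access */
    #define ODP_SHM_PROC          0x2   /* Share with external processes */
    #define ODP_SHM_SINGLE_VA     0x4   /* Same VA in all ODP threads */
    #define ODP_SHM_EXPORT        0x08  /* Visible to other ODP instances */

    /* Internal flags (odp_shm_internal.h), pushed out of the public range */
    #define _ODP_SHM_PROC_NOCREAT 0x40  /* Do not create shm if not exist */
    #define _ODP_SHM_O_EXCL       0x80  /* Do not create shm if exist */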
commit 67abee1a4548878ccc93b57fbd84c3fe68147bf6 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Nov 24 17:22:20 2016 +0100
linux-gen: init: removing possible obsolete ODP files at startup
When an ODP program is killed, some odp files may remain in /tmp and in the huge page mount point. As signal KILL cannot be caught, there is not much one can do to prevent that. But when a new odp session is started, all files carrying the odp prefix ("odp-<PID>-") can be safely removed, as the PID is unique and therefore there cannot be another ODP instance with the same PID. This patch does this cleanup at startup.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_init.c b/platform/linux-generic/odp_init.c index 1129779..fb85cc1 100644 --- a/platform/linux-generic/odp_init.c +++ b/platform/linux-generic/odp_init.c @@ -10,15 +10,71 @@ #include <odp_internal.h> #include <odp_schedule_if.h> #include <string.h> +#include <stdio.h> +#include <linux/limits.h> +#include <dirent.h> +#include <unistd.h> +#include <string.h> +#include <stdlib.h> +#include <errno.h> + +#define _ODP_FILES_FMT "odp-%d-" +#define _ODP_TMPDIR "/tmp"
struct odp_global_data_s odp_global_data;
+/* remove all files staring with "odp-<pid>" from a directory "dir" */ +static int cleanup_files(const char *dirpath, int odp_pid) +{ + struct dirent *e; + DIR *dir; + char prefix[PATH_MAX]; + char *fullpath; + int d_len = strlen(dirpath); + int p_len; + int f_len; + + dir = opendir(dirpath); + if (!dir) { + /* ok if the dir does not exist. no much to delete then! */ + ODP_DBG("opendir failed for %s: %s\n", + dirpath, strerror(errno)); + return 0; + } + snprintf(prefix, PATH_MAX, _ODP_FILES_FMT, odp_pid); + p_len = strlen(prefix); + while ((e = readdir(dir)) != NULL) { + if (strncmp(e->d_name, prefix, p_len) == 0) { + f_len = strlen(e->d_name); + fullpath = malloc(d_len + f_len + 2); + if (fullpath == NULL) { + closedir(dir); + return -1; + } + snprintf(fullpath, PATH_MAX, "%s/%s", + dirpath, e->d_name); + ODP_DBG("deleting obsolete file: %s\n", fullpath); + if (unlink(fullpath)) + ODP_ERR("unlink failed for %s: %s\n", + fullpath, strerror(errno)); + free(fullpath); + } + } + closedir(dir); + + return 0; +} + int odp_init_global(odp_instance_t *instance, const odp_init_t *params, const odp_platform_init_t *platform_params) { + char *hpdir; + memset(&odp_global_data, 0, sizeof(struct odp_global_data_s)); odp_global_data.main_pid = getpid(); + cleanup_files(_ODP_TMPDIR, odp_global_data.main_pid); + if (platform_params) odp_global_data.ipc_ns = platform_params->ipc_ns;
@@ -49,6 +105,10 @@ int odp_init_global(odp_instance_t *instance, ODP_ERR("ODP system_info init failed.\n"); goto init_failed; } + hpdir = odp_global_data.hugepage_info.default_huge_page_dir; + /* cleanup obsolete huge page files, if any */ + if (hpdir) + cleanup_files(hpdir, odp_global_data.main_pid); stage = SYSINFO_INIT;
if (_odp_fdserver_init_global()) {
commit afb2ecf5e45e10b0e1258c85fc1b80f8ce447646 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:45 2016 +0200
linux-gen: packet: enable multi-segment packets
Enable segmentation support with CONFIG_PACKET_MAX_SEGS configuration option.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/include/odp_config_internal.h b/platform/linux-generic/include/odp_config_internal.h index ee51c7f..9401fa1 100644 --- a/platform/linux-generic/include/odp_config_internal.h +++ b/platform/linux-generic/include/odp_config_internal.h @@ -70,12 +70,12 @@ extern "C" { /* * Maximum number of segments per packet */ -#define CONFIG_PACKET_MAX_SEGS 1 +#define CONFIG_PACKET_MAX_SEGS 2
/* * Maximum packet segment size including head- and tailrooms */ -#define CONFIG_PACKET_SEG_SIZE (64 * 1024) +#define CONFIG_PACKET_SEG_SIZE (8 * 1024)
/* Maximum data length in a segment *
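For a rough idea of what the new defaults imply (using the CONFIG_PACKET_* values from this diff and from the "added support for segmented packets" patch further down; the exact per-pool maximum also depends on pool parameters, so this is only an estimate):

    /* CONFIG_PACKET_SEG_SIZE   = 8 * 1024 = 8192 bytes (incl. head/tailroom)
     * CONFIG_PACKET_HEADROOM   = 66, CONFIG_PACKET_TAILROOM = 0
     * CONFIG_PACKET_MAX_SEG_LEN = 8192 - 66 - 0 = 8126 data bytes/segment
     *
     * With CONFIG_PACKET_MAX_SEGS = 2, a packet can hold up to about
     * 2 * 8126 = 16252 bytes, so a ~9 kB jumbo frame now spans two segments
     * instead of fitting into one 64 kB segment as before.
     */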
commit fe9c6cc8e5e88e068ee9f1f4dc29b7f32411f4d7 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:44 2016 +0200
linux-gen: pool: check pool parameters
Check pool parameters against maximum capabilities. Also define a limit for maximum buffer and user area sizes. 10 MB was chosen as the limit since it is small enough to be available on all Linux systems and should be more than enough for normal pool usage.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index 7c462e5..4be3827 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -29,6 +29,9 @@ #define CACHE_BURST 32 #define RING_SIZE_MIN (2 * CACHE_BURST)
+/* Define a practical limit for contiguous memory allocations */ +#define MAX_SIZE (10 * 1024 * 1024) + ODP_STATIC_ASSERT(CONFIG_POOL_CACHE_SIZE > (2 * CACHE_BURST), "cache_burst_size_too_large_compared_to_cache_size");
@@ -426,6 +429,71 @@ error: return ODP_POOL_INVALID; }
+static int check_params(odp_pool_param_t *params) +{ + odp_pool_capability_t capa; + + odp_pool_capability(&capa); + + switch (params->type) { + case ODP_POOL_BUFFER: + if (params->buf.num > capa.buf.max_num) { + printf("buf.num too large %u\n", params->buf.num); + return -1; + } + + if (params->buf.size > capa.buf.max_size) { + printf("buf.size too large %u\n", params->buf.size); + return -1; + } + + if (params->buf.align > capa.buf.max_align) { + printf("buf.align too large %u\n", params->buf.align); + return -1; + } + + break; + + case ODP_POOL_PACKET: + if (params->pkt.len > capa.pkt.max_len) { + printf("pkt.len too large %u\n", params->pkt.len); + return -1; + } + + if (params->pkt.max_len > capa.pkt.max_len) { + printf("pkt.max_len too large %u\n", + params->pkt.max_len); + return -1; + } + + if (params->pkt.seg_len > capa.pkt.max_seg_len) { + printf("pkt.seg_len too large %u\n", + params->pkt.seg_len); + return -1; + } + + if (params->pkt.uarea_size > capa.pkt.max_uarea_size) { + printf("pkt.uarea_size too large %u\n", + params->pkt.uarea_size); + return -1; + } + + break; + + case ODP_POOL_TIMEOUT: + if (params->tmo.num > capa.tmo.max_num) { + printf("tmo.num too large %u\n", params->tmo.num); + return -1; + } + break; + + default: + printf("bad pool type %i\n", params->type); + return -1; + } + + return 0; +}
odp_pool_t odp_pool_create(const char *name, odp_pool_param_t *params) { @@ -433,6 +501,9 @@ odp_pool_t odp_pool_create(const char *name, odp_pool_param_t *params) if (params && (params->type == ODP_POOL_PACKET)) return pool_create(name, params, ODP_SHM_PROC); #endif + if (check_params(params)) + return ODP_POOL_INVALID; + return pool_create(name, params, 0); }
@@ -718,7 +789,7 @@ int odp_pool_capability(odp_pool_capability_t *capa) /* Buffer pools */ capa->buf.max_pools = ODP_CONFIG_POOLS; capa->buf.max_align = ODP_CONFIG_BUFFER_ALIGN_MAX; - capa->buf.max_size = 0; + capa->buf.max_size = MAX_SIZE; capa->buf.max_num = CONFIG_POOL_MAX_NUM;
/* Packet pools */ @@ -730,7 +801,7 @@ int odp_pool_capability(odp_pool_capability_t *capa) capa->pkt.max_segs_per_pkt = CONFIG_PACKET_MAX_SEGS; capa->pkt.min_seg_len = max_seg_len; capa->pkt.max_seg_len = max_seg_len; - capa->pkt.max_uarea_size = 0; + capa->pkt.max_uarea_size = MAX_SIZE;
/* Timeout pools */ capa->tmo.max_pools = ODP_CONFIG_POOLS;
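From the application side the matching pattern, now that odp_pool_create() rejects out-of-range parameters, is to query capabilities first and clamp the request. A sketch, not part of the patch; pool name and sizes are illustrative:

    #include <odp_api.h>

    static odp_pool_t create_pkt_pool(uint32_t pkt_len, uint32_t num)
    {
            odp_pool_capability_t capa;
            odp_pool_param_t params;

            if (odp_pool_capability(&capa) != 0)
                    return ODP_POOL_INVALID;

            odp_pool_param_init(&params);
            params.type    = ODP_POOL_PACKET;
            params.pkt.num = num;
            params.pkt.len = pkt_len;

            /* Stay within the limits now enforced by check_params() */
            if (capa.pkt.max_len && params.pkt.len > capa.pkt.max_len)
                    params.pkt.len = capa.pkt.max_len;

            return odp_pool_create("pkt_pool", &params);
    }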
commit 6b5b78245c2ebbc0c907dba9809e9002c7214959 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:43 2016 +0200
validation: pktio: honour pool capability limits
Check pool capability limits for packet length and segment length, and do not exceed those.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/test/common_plat/validation/api/pktio/pktio.c b/test/common_plat/validation/api/pktio/pktio.c index edabd01..c23e2cc 100644 --- a/test/common_plat/validation/api/pktio/pktio.c +++ b/test/common_plat/validation/api/pktio/pktio.c @@ -122,8 +122,12 @@ static inline void _pktio_wait_linkup(odp_pktio_t pktio) } }
-static void set_pool_len(odp_pool_param_t *params) +static void set_pool_len(odp_pool_param_t *params, odp_pool_capability_t *capa) { + uint32_t seg_len; + + seg_len = capa->pkt.max_seg_len; + switch (pool_segmentation) { case PKT_POOL_SEGMENTED: /* Force segment to minimum size */ @@ -132,7 +136,7 @@ static void set_pool_len(odp_pool_param_t *params) break; case PKT_POOL_UNSEGMENTED: default: - params->pkt.seg_len = PKT_BUF_SIZE; + params->pkt.seg_len = seg_len; params->pkt.len = PKT_BUF_SIZE; break; } @@ -312,13 +316,17 @@ static int pktio_fixup_checksums(odp_packet_t pkt) static int default_pool_create(void) { odp_pool_param_t params; + odp_pool_capability_t pool_capa; char pool_name[ODP_POOL_NAME_LEN];
+ if (odp_pool_capability(&pool_capa) != 0) + return -1; + if (default_pkt_pool != ODP_POOL_INVALID) return -1;
odp_pool_param_init(¶ms); - set_pool_len(¶ms); + set_pool_len(¶ms, &pool_capa); params.pkt.num = PKT_BUF_NUM; params.type = ODP_POOL_PACKET;
@@ -601,6 +609,7 @@ static void pktio_txrx_multi(pktio_info_t *pktio_a, pktio_info_t *pktio_b, int i, ret, num_rx;
if (packet_len == USE_MTU) { + odp_pool_capability_t pool_capa; uint32_t mtu;
mtu = odp_pktio_mtu(pktio_a->id); @@ -610,6 +619,11 @@ static void pktio_txrx_multi(pktio_info_t *pktio_a, pktio_info_t *pktio_b, packet_len = mtu; if (packet_len > PKT_LEN_MAX) packet_len = PKT_LEN_MAX; + + CU_ASSERT_FATAL(odp_pool_capability(&pool_capa) == 0); + + if (packet_len > pool_capa.pkt.max_len) + packet_len = pool_capa.pkt.max_len; }
/* generate test packets to send */ @@ -2011,9 +2025,13 @@ static int create_pool(const char *iface, int num) { char pool_name[ODP_POOL_NAME_LEN]; odp_pool_param_t params; + odp_pool_capability_t pool_capa; + + if (odp_pool_capability(&pool_capa) != 0) + return -1;
odp_pool_param_init(¶ms); - set_pool_len(¶ms); + set_pool_len(¶ms, &pool_capa); params.pkt.num = PKT_BUF_NUM; params.type = ODP_POOL_PACKET;
commit ba23aa731a85709a84ea0137a918b07cf4811fc2 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:42 2016 +0200
validation: crypto: honour pool capability limits
Reduced oversized packet length and segment length requirements from 32 kB to 1 kB (only tens of bytes are actually used). Also check that the requested lengths are not larger than the corresponding pool capabilities.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/test/common_plat/validation/api/crypto/crypto.c b/test/common_plat/validation/api/crypto/crypto.c index 9c9a00d..2089016 100644 --- a/test/common_plat/validation/api/crypto/crypto.c +++ b/test/common_plat/validation/api/crypto/crypto.c @@ -9,11 +9,8 @@ #include "odp_crypto_test_inp.h" #include "crypto.h"
-#define SHM_PKT_POOL_SIZE (512 * 2048 * 2) -#define SHM_PKT_POOL_BUF_SIZE (1024 * 32) - -#define SHM_COMPL_POOL_SIZE (128 * 1024) -#define SHM_COMPL_POOL_BUF_SIZE 128 +#define PKT_POOL_NUM 64 +#define PKT_POOL_LEN (1 * 1024)
odp_suiteinfo_t crypto_suites[] = { {ODP_CRYPTO_SYNC_INP, crypto_suite_sync_init, NULL, crypto_suite}, @@ -44,13 +41,20 @@ int crypto_init(odp_instance_t *inst) }
odp_pool_param_init(¶ms); - params.pkt.seg_len = SHM_PKT_POOL_BUF_SIZE; - params.pkt.len = SHM_PKT_POOL_BUF_SIZE; - params.pkt.num = SHM_PKT_POOL_SIZE / SHM_PKT_POOL_BUF_SIZE; + params.pkt.seg_len = PKT_POOL_LEN; + params.pkt.len = PKT_POOL_LEN; + params.pkt.num = PKT_POOL_NUM; params.type = ODP_POOL_PACKET;
- if (SHM_PKT_POOL_BUF_SIZE > pool_capa.pkt.max_len) - params.pkt.len = pool_capa.pkt.max_len; + if (PKT_POOL_LEN > pool_capa.pkt.max_seg_len) { + fprintf(stderr, "Warning: small packet segment length\n"); + params.pkt.seg_len = pool_capa.pkt.max_seg_len; + } + + if (PKT_POOL_LEN > pool_capa.pkt.max_len) { + fprintf(stderr, "Pool max packet length too small\n"); + return -1; + }
pool = odp_pool_create("packet_pool", ¶ms);
commit 4b7e6b82f14ef2d09b91b1c197e84dbfe0e8a09b Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:41 2016 +0200
linux-gen: socket: use trunc instead of pull tail
This is a bug correction for multi-segment packet handling. Packet pull tail cannot decrement the packet length by more than the amount of data in the last segment. Trunc tail must be used instead.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/pktio/socket.c b/platform/linux-generic/pktio/socket.c index 9fe4a7e..7d23968 100644 --- a/platform/linux-generic/pktio/socket.c +++ b/platform/linux-generic/pktio/socket.c @@ -674,6 +674,7 @@ static int sock_mmsg_recv(pktio_entry_t *pktio_entry, int index ODP_UNUSED, if (cls_classify_packet(pktio_entry, base, pkt_len, pkt_len, &pool, &parsed_hdr)) continue; + num = packet_alloc_multi(pool, pkt_len, &pkt, 1); if (num != 1) continue; @@ -700,6 +701,7 @@ static int sock_mmsg_recv(pktio_entry_t *pktio_entry, int index ODP_UNUSED,
num = packet_alloc_multi(pkt_sock->pool, pkt_sock->mtu, &pkt_table[i], 1); + if (odp_unlikely(num != 1)) { pkt_table[i] = ODP_PACKET_INVALID; break; @@ -724,23 +726,34 @@ static int sock_mmsg_recv(pktio_entry_t *pktio_entry, int index ODP_UNUSED, void *base = msgvec[i].msg_hdr.msg_iov->iov_base; struct ethhdr *eth_hdr = base; odp_packet_hdr_t *pkt_hdr; + odp_packet_t pkt; + int ret; + + pkt = pkt_table[i];
/* Don't receive packets sent by ourselves */ if (odp_unlikely(ethaddrs_equal(pkt_sock->if_mac, eth_hdr->h_source))) { - odp_packet_free(pkt_table[i]); + odp_packet_free(pkt); continue; } - pkt_hdr = odp_packet_hdr(pkt_table[i]); + /* Parse and set packet header data */ - odp_packet_pull_tail(pkt_table[i], - odp_packet_len(pkt_table[i]) - - msgvec[i].msg_len); + ret = odp_packet_trunc_tail(&pkt, odp_packet_len(pkt) - + msgvec[i].msg_len, + NULL, NULL); + if (ret < 0) { + ODP_ERR("trunk_tail failed"); + odp_packet_free(pkt); + continue; + } + + pkt_hdr = odp_packet_hdr(pkt); packet_parse_l2(&pkt_hdr->p, pkt_hdr->frame_len); packet_set_ts(pkt_hdr, ts); pkt_hdr->input = pktio_entry->s.handle;
- pkt_table[nb_rx] = pkt_table[i]; + pkt_table[nb_rx] = pkt; nb_rx++; }
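The same rule applies to any caller, not only the socket pktio: odp_packet_pull_tail() only operates within the last segment and returns NULL otherwise, while odp_packet_trunc_tail() can remove data across segment boundaries (it takes a pointer to the handle because the handle is allowed to change). A hedged sketch of the safe pattern:

    #include <odp_api.h>

    /* Trim 'drop' bytes from the end of a possibly segmented packet.
     * Sketch only: pull tail would fail once 'drop' exceeds the data
     * available in the last segment, so trunc tail is used instead. */
    static int trim_tail(odp_packet_t *pkt, uint32_t drop)
    {
            if (drop > odp_packet_len(*pkt))
                    return -1;

            return odp_packet_trunc_tail(pkt, drop, NULL, NULL);
    }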
commit 683a975963d150e5dae12649cbd1abc003ee2c39 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:40 2016 +0200
linux-gen: packet: remove zero len support from alloc
Remove support for zero length allocations which were never required by the API specification or tested by the validation suite.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index a5c6ff4..0d3fd05 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -478,7 +478,6 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t len) pool_t *pool = pool_entry_from_hdl(pool_hdl); odp_packet_t pkt; int num, num_seg; - int zero_len = 0;
if (odp_unlikely(pool->params.type != ODP_POOL_PACKET)) { __odp_errno = EINVAL; @@ -488,23 +487,12 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t len) if (odp_unlikely(len > pool->max_len)) return ODP_PACKET_INVALID;
- if (odp_unlikely(len == 0)) { - len = pool->data_size; - zero_len = 1; - } - num_seg = num_segments(len); num = packet_alloc(pool, len, 1, num_seg, &pkt, 0);
if (odp_unlikely(num == 0)) return ODP_PACKET_INVALID;
- if (odp_unlikely(zero_len)) { - odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); - - pull_tail(pkt_hdr, len); - } - return pkt; }
@@ -513,7 +501,6 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, { pool_t *pool = pool_entry_from_hdl(pool_hdl); int num, num_seg; - int zero_len = 0;
if (odp_unlikely(pool->params.type != ODP_POOL_PACKET)) { __odp_errno = EINVAL; @@ -523,24 +510,9 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, if (odp_unlikely(len > pool->max_len)) return -1;
- if (odp_unlikely(len == 0)) { - len = pool->data_size; - zero_len = 1; - } - num_seg = num_segments(len); num = packet_alloc(pool, len, max_num, num_seg, pkt, 0);
- if (odp_unlikely(zero_len)) { - int i; - - for (i = 0; i < num; i++) { - odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt[i]); - - pull_tail(pkt_hdr, len); - } - } - return num; }
commit dcffa7faecf3bf4a66e6f5ce745bcde3a4f0b7ed Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:39 2016 +0200
api: packet: added limits for packet len on alloc
There is no use case for an application to allocate zero-length packets. An application should always have some knowledge of the new packet's data length before allocation. Implementations are also more efficient when a check for zero length can be avoided.
Also added a pool parameter to specify the maximum packet length to be allocated from the pool. Implementations may use this information to optimize e.g. memory usage. Applications must not exceed the max_len parameter value in alloc calls. Pool capabilities already define max_len.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/packet.h b/include/odp/api/spec/packet.h index 4a14f2d..faf62e2 100644 --- a/include/odp/api/spec/packet.h +++ b/include/odp/api/spec/packet.h @@ -82,13 +82,14 @@ extern "C" { * Allocate a packet from a packet pool * * Allocates a packet of the requested length from the specified packet pool. - * Pool must have been created with ODP_POOL_PACKET type. The + * The pool must have been created with ODP_POOL_PACKET type. The * packet is initialized with data pointers and lengths set according to the * specified len, and the default headroom and tailroom length settings. All - * other packet metadata are set to their default values. + * other packet metadata are set to their default values. Packet length must + * be greater than zero and not exceed packet pool parameter 'max_len' value. * * @param pool Pool handle - * @param len Packet data length + * @param len Packet data length (1 ... pool max_len) * * @return Handle of allocated packet * @retval ODP_PACKET_INVALID Packet could not be allocated @@ -105,7 +106,7 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool, uint32_t len); * packets from a pool. * * @param pool Pool handle - * @param len Packet data length + * @param len Packet data length (1 ... pool max_len) * @param[out] pkt Array of packet handles for output * @param num Maximum number of packets to allocate * diff --git a/include/odp/api/spec/pool.h b/include/odp/api/spec/pool.h index a1331e3..041f4af 100644 --- a/include/odp/api/spec/pool.h +++ b/include/odp/api/spec/pool.h @@ -192,6 +192,12 @@ typedef struct odp_pool_param_t { pkt.max_len. Use 0 for default. */ uint32_t len;
+ /** Maximum packet length that will be allocated from + the pool. The maximum value is defined by pool + capability pkt.max_len. Use 0 for default (the + pool maximum). */ + uint32_t max_len; + /** Minimum number of packet data bytes that are stored in the first segment of a packet. The maximum value is defined by pool capability pkt.max_seg_len.
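A short sketch of how the new parameter is meant to be used; the numbers are illustrative and assume an already initialized ODP instance:

    #include <odp_api.h>

    static odp_packet_t alloc_largest(void)
    {
            odp_pool_param_t params;
            odp_pool_t pool;

            odp_pool_param_init(&params);
            params.type        = ODP_POOL_PACKET;
            params.pkt.num     = 1024;
            params.pkt.len     = 1518;  /* typical packet length */
            params.pkt.max_len = 9000;  /* largest length ever allocated */

            pool = odp_pool_create("pkt_pool", &params);
            if (pool == ODP_POOL_INVALID)
                    return ODP_PACKET_INVALID;

            /* len must be 1 ... max_len; zero-length alloc is no longer
             * supported (see the alloc commit above). */
            return odp_packet_alloc(pool, 9000);
    }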
commit c247659448bfd45c1bb648d7508f7db0b225b7b8 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:38 2016 +0200
test: validation: packet: improved multi-segment alloc test
Added test cases to allocate and free multiple multi-segment packets.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/test/common_plat/validation/api/packet/packet.c b/test/common_plat/validation/api/packet/packet.c index b082add..3ad00ed 100644 --- a/test/common_plat/validation/api/packet/packet.c +++ b/test/common_plat/validation/api/packet/packet.c @@ -295,23 +295,86 @@ void packet_test_alloc_free_multi(void)
void packet_test_alloc_segmented(void) { + const int num = 5; + odp_packet_t pkts[num]; odp_packet_t pkt; - uint32_t len; + uint32_t max_len; + odp_pool_t pool; + odp_pool_param_t params; odp_pool_capability_t capa; + int ret, i, num_alloc;
CU_ASSERT_FATAL(odp_pool_capability(&capa) == 0);
if (capa.pkt.max_len) - len = capa.pkt.max_len; + max_len = capa.pkt.max_len; else - len = capa.pkt.min_seg_len * capa.pkt.max_segs_per_pkt; + max_len = capa.pkt.min_seg_len * capa.pkt.max_segs_per_pkt; + + odp_pool_param_init(¶ms); + + params.type = ODP_POOL_PACKET; + params.pkt.seg_len = capa.pkt.min_seg_len; + params.pkt.len = max_len; + + /* Ensure that 'num' segmented packets can be allocated */ + params.pkt.num = num * capa.pkt.max_segs_per_pkt; + + pool = odp_pool_create("pool_alloc_segmented", ¶ms); + CU_ASSERT_FATAL(pool != ODP_POOL_INVALID); + + /* Less than max len allocs */ + pkt = odp_packet_alloc(pool, max_len / 2); + CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID); + CU_ASSERT(odp_packet_len(pkt) == max_len / 2); + + odp_packet_free(pkt); + + num_alloc = 0; + for (i = 0; i < num; i++) { + ret = odp_packet_alloc_multi(pool, max_len / 2, + &pkts[num_alloc], num - num_alloc); + CU_ASSERT_FATAL(ret >= 0); + num_alloc += ret; + if (num_alloc >= num) + break; + } + + CU_ASSERT(num_alloc == num); + + for (i = 0; i < num_alloc; i++) + CU_ASSERT(odp_packet_len(pkts[i]) == max_len / 2);
- pkt = odp_packet_alloc(packet_pool, len); + odp_packet_free_multi(pkts, num_alloc); + + /* Max len allocs */ + pkt = odp_packet_alloc(pool, max_len); CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID); - CU_ASSERT(odp_packet_len(pkt) == len); + CU_ASSERT(odp_packet_len(pkt) == max_len); + if (segmentation_supported) CU_ASSERT(odp_packet_is_segmented(pkt) == 1); + odp_packet_free(pkt); + + num_alloc = 0; + for (i = 0; i < num; i++) { + ret = odp_packet_alloc_multi(pool, max_len, + &pkts[num_alloc], num - num_alloc); + CU_ASSERT_FATAL(ret >= 0); + num_alloc += ret; + if (num_alloc >= num) + break; + } + + CU_ASSERT(num_alloc == num); + + for (i = 0; i < num_alloc; i++) + CU_ASSERT(odp_packet_len(pkts[i]) == max_len); + + odp_packet_free_multi(pkts, num_alloc); + + CU_ASSERT(odp_pool_destroy(pool) == 0); }
void packet_test_event_conversion(void)
commit 95918aa496a22794af654c582830e2a2d8b914a7 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:37 2016 +0200
linux-gen: packet: added support for segmented packets
Added support for multi-segmented packets. The first segment is the packet descriptor, which contains all metadata and pointers to the other segments.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
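Since contiguous access is now only guaranteed within one segment, applications walk packet data in segment-sized chunks. A sketch using only odp_packet_len() and odp_packet_offset() as shown in the diff below (assuming, as for the other accessors, that unused output pointers may be NULL):

    #include <odp_api.h>

    /* Visit all data bytes of a possibly segmented packet, one contiguous
     * chunk at a time. Sketch only. */
    static void walk_packet(odp_packet_t pkt)
    {
            uint32_t offset = 0;
            uint32_t pkt_len = odp_packet_len(pkt);

            while (offset < pkt_len) {
                    uint32_t seg_len;
                    uint8_t *data;

                    /* Address at 'offset' and contiguous bytes from there */
                    data = odp_packet_offset(pkt, offset, &seg_len, NULL);
                    if (data == NULL)
                            break;

                    /* ... process 'seg_len' bytes at 'data' ... */
                    offset += seg_len;
            }
    }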
diff --git a/platform/linux-generic/include/odp/api/plat/packet_types.h b/platform/linux-generic/include/odp/api/plat/packet_types.h index b5345ed..864494d 100644 --- a/platform/linux-generic/include/odp/api/plat/packet_types.h +++ b/platform/linux-generic/include/odp/api/plat/packet_types.h @@ -32,9 +32,11 @@ typedef ODP_HANDLE_T(odp_packet_t);
#define ODP_PACKET_OFFSET_INVALID (0x0fffffff)
-typedef ODP_HANDLE_T(odp_packet_seg_t); +/* A packet segment handle stores a small index. Strong type handles are + * pointers, which would be wasteful in this case. */ +typedef uint8_t odp_packet_seg_t;
-#define ODP_PACKET_SEG_INVALID _odp_cast_scalar(odp_packet_seg_t, 0xffffffff) +#define ODP_PACKET_SEG_INVALID ((odp_packet_seg_t)-1)
/** odp_packet_color_t assigns names to the various pkt "colors" */ typedef enum { diff --git a/platform/linux-generic/include/odp_buffer_inlines.h b/platform/linux-generic/include/odp_buffer_inlines.h index f8688f6..cf817d9 100644 --- a/platform/linux-generic/include/odp_buffer_inlines.h +++ b/platform/linux-generic/include/odp_buffer_inlines.h @@ -23,22 +23,11 @@ odp_event_type_t _odp_buffer_event_type(odp_buffer_t buf); void _odp_buffer_event_type_set(odp_buffer_t buf, int ev); int odp_buffer_snprint(char *str, uint32_t n, odp_buffer_t buf);
-void *buffer_map(odp_buffer_hdr_t *buf, uint32_t offset, uint32_t *seglen, - uint32_t limit); - static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr) { return hdr->handle.handle; }
-static inline uint32_t pool_id_from_buf(odp_buffer_t buf) -{ - odp_buffer_bits_t handle; - - handle.handle = buf; - return handle.pool_id; -} - #ifdef __cplusplus } #endif diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h index 0ca13f8..4e75908 100644 --- a/platform/linux-generic/include/odp_buffer_internal.h +++ b/platform/linux-generic/include/odp_buffer_internal.h @@ -33,10 +33,6 @@ extern "C" { #include <odp_schedule_if.h> #include <stddef.h>
-ODP_STATIC_ASSERT(ODP_CONFIG_PACKET_SEG_LEN_MIN >= 256, - "ODP Segment size must be a minimum of 256 bytes"); - - typedef union odp_buffer_bits_t { odp_buffer_t handle;
@@ -65,6 +61,20 @@ struct odp_buffer_hdr_t { int burst_first; struct odp_buffer_hdr_t *burst[BUFFER_BURST_SIZE];
+ struct { + void *hdr; + uint8_t *data; + uint32_t len; + } seg[CONFIG_PACKET_MAX_SEGS]; + + /* max data size */ + uint32_t size; + + /* Initial buffer data pointer and length */ + void *base_data; + uint32_t base_len; + uint8_t *buf_end; + union { uint32_t all; struct { @@ -75,7 +85,6 @@ struct odp_buffer_hdr_t {
int8_t type; /* buffer type */ odp_event_type_t event_type; /* for reuse as event */ - uint32_t size; /* max data size */ odp_pool_t pool_hdl; /* buffer pool handle */ union { uint64_t buf_u64; /* user u64 */ @@ -86,8 +95,6 @@ struct odp_buffer_hdr_t { uint32_t uarea_size; /* size of user area */ uint32_t segcount; /* segment count */ uint32_t segsize; /* segment size */ - /* block addrs */ - void *addr[ODP_CONFIG_PACKET_MAX_SEGS]; uint64_t order; /* sequence for ordered queues */ queue_entry_t *origin_qe; /* ordered queue origin */ union { @@ -105,8 +112,6 @@ struct odp_buffer_hdr_t { };
/* Forward declarations */ -int seg_alloc_head(odp_buffer_hdr_t *buf_hdr, int segcount); -void seg_free_head(odp_buffer_hdr_t *buf_hdr, int segcount); int seg_alloc_tail(odp_buffer_hdr_t *buf_hdr, int segcount); void seg_free_tail(odp_buffer_hdr_t *buf_hdr, int segcount);
diff --git a/platform/linux-generic/include/odp_config_internal.h b/platform/linux-generic/include/odp_config_internal.h index 3fd1c93..ee51c7f 100644 --- a/platform/linux-generic/include/odp_config_internal.h +++ b/platform/linux-generic/include/odp_config_internal.h @@ -54,7 +54,7 @@ extern "C" { * The default value (66) allows a 1500-byte packet to be received into a single * segment with Ethernet offset alignment and room for some header expansion. */ -#define ODP_CONFIG_PACKET_HEADROOM 66 +#define CONFIG_PACKET_HEADROOM 66
/* * Default packet tailroom @@ -65,21 +65,26 @@ extern "C" { * without restriction. Note that most implementations will automatically * consider any unused portion of the last segment of a packet as tailroom */ -#define ODP_CONFIG_PACKET_TAILROOM 0 +#define CONFIG_PACKET_TAILROOM 0
/* * Maximum number of segments per packet */ -#define ODP_CONFIG_PACKET_MAX_SEGS 1 +#define CONFIG_PACKET_MAX_SEGS 1
/* - * Maximum packet segment length - * - * This defines the maximum packet segment buffer length in bytes. The user - * defined segment length (seg_len in odp_pool_param_t) must not be larger than - * this. + * Maximum packet segment size including head- and tailrooms */ -#define ODP_CONFIG_PACKET_SEG_LEN_MAX (64 * 1024) +#define CONFIG_PACKET_SEG_SIZE (64 * 1024) + +/* Maximum data length in a segment + * + * The user defined segment length (seg_len in odp_pool_param_t) must not + * be larger than this. +*/ +#define CONFIG_PACKET_MAX_SEG_LEN (CONFIG_PACKET_SEG_SIZE - \ + CONFIG_PACKET_HEADROOM - \ + CONFIG_PACKET_TAILROOM)
/* * Minimum packet segment length @@ -88,21 +93,7 @@ extern "C" { * defined segment length (seg_len in odp_pool_param_t) will be rounded up into * this value. */ -#define ODP_CONFIG_PACKET_SEG_LEN_MIN ODP_CONFIG_PACKET_SEG_LEN_MAX - -/* - * Maximum packet buffer length - * - * This defines the maximum number of bytes that can be stored into a packet - * (maximum return value of odp_packet_buf_len(void)). Attempts to allocate - * (including default head- and tailrooms) or extend packets to sizes larger - * than this limit will fail. - * - * @internal In odp-linux implementation: - * - The value MUST be an integral number of segments - * - The value SHOULD be large enough to accommodate jumbo packets (9K) - */ -#define ODP_CONFIG_PACKET_BUF_LEN_MAX ODP_CONFIG_PACKET_SEG_LEN_MAX +#define CONFIG_PACKET_SEG_LEN_MIN CONFIG_PACKET_MAX_SEG_LEN
/* Maximum number of shared memory blocks. * diff --git a/platform/linux-generic/include/odp_packet_internal.h b/platform/linux-generic/include/odp_packet_internal.h index 0cdd5ca..d09231e 100644 --- a/platform/linux-generic/include/odp_packet_internal.h +++ b/platform/linux-generic/include/odp_packet_internal.h @@ -27,8 +27,6 @@ extern "C" { #include <odp/api/crypto.h> #include <odp_crypto_internal.h>
-#define PACKET_JUMBO_LEN (9 * 1024) - /** Minimum segment length expected by packet_parse_common() */ #define PACKET_PARSE_SEG_LEN 96
@@ -218,85 +216,13 @@ static inline void copy_packet_cls_metadata(odp_packet_hdr_t *src_hdr, dst_hdr->op_result = src_hdr->op_result; }
-static inline void *packet_map(odp_packet_hdr_t *pkt_hdr, - uint32_t offset, uint32_t *seglen) -{ - if (offset > pkt_hdr->frame_len) - return NULL; - - return buffer_map(&pkt_hdr->buf_hdr, - pkt_hdr->headroom + offset, seglen, - pkt_hdr->headroom + pkt_hdr->frame_len); -} - -static inline void push_head(odp_packet_hdr_t *pkt_hdr, size_t len) -{ - pkt_hdr->headroom -= len; - pkt_hdr->frame_len += len; -} - -static inline void pull_head(odp_packet_hdr_t *pkt_hdr, size_t len) -{ - pkt_hdr->headroom += len; - pkt_hdr->frame_len -= len; -} - -static inline int push_head_seg(odp_packet_hdr_t *pkt_hdr, size_t len) -{ - uint32_t extrasegs = - (len - pkt_hdr->headroom + pkt_hdr->buf_hdr.segsize - 1) / - pkt_hdr->buf_hdr.segsize; - - if (pkt_hdr->buf_hdr.segcount + extrasegs > - ODP_CONFIG_PACKET_MAX_SEGS || - seg_alloc_head(&pkt_hdr->buf_hdr, extrasegs)) - return -1; - - pkt_hdr->headroom += extrasegs * pkt_hdr->buf_hdr.segsize; - return 0; -} - -static inline void pull_head_seg(odp_packet_hdr_t *pkt_hdr) -{ - uint32_t extrasegs = (pkt_hdr->headroom - 1) / pkt_hdr->buf_hdr.segsize; - - seg_free_head(&pkt_hdr->buf_hdr, extrasegs); - pkt_hdr->headroom -= extrasegs * pkt_hdr->buf_hdr.segsize; -} - -static inline void push_tail(odp_packet_hdr_t *pkt_hdr, size_t len) -{ - pkt_hdr->tailroom -= len; - pkt_hdr->frame_len += len; -} - -static inline int push_tail_seg(odp_packet_hdr_t *pkt_hdr, size_t len) +static inline void pull_tail(odp_packet_hdr_t *pkt_hdr, uint32_t len) { - uint32_t extrasegs = - (len - pkt_hdr->tailroom + pkt_hdr->buf_hdr.segsize - 1) / - pkt_hdr->buf_hdr.segsize; + int last = pkt_hdr->buf_hdr.segcount - 1;
- if (pkt_hdr->buf_hdr.segcount + extrasegs > - ODP_CONFIG_PACKET_MAX_SEGS || - seg_alloc_tail(&pkt_hdr->buf_hdr, extrasegs)) - return -1; - - pkt_hdr->tailroom += extrasegs * pkt_hdr->buf_hdr.segsize; - return 0; -} - -static inline void pull_tail_seg(odp_packet_hdr_t *pkt_hdr) -{ - uint32_t extrasegs = pkt_hdr->tailroom / pkt_hdr->buf_hdr.segsize; - - seg_free_tail(&pkt_hdr->buf_hdr, extrasegs); - pkt_hdr->tailroom -= extrasegs * pkt_hdr->buf_hdr.segsize; -} - -static inline void pull_tail(odp_packet_hdr_t *pkt_hdr, size_t len) -{ pkt_hdr->tailroom += len; pkt_hdr->frame_len -= len; + pkt_hdr->buf_hdr.seg[last].len -= len; }
static inline uint32_t packet_len(odp_packet_hdr_t *pkt_hdr) diff --git a/platform/linux-generic/include/odp_pool_internal.h b/platform/linux-generic/include/odp_pool_internal.h index f7e951a..5d7b817 100644 --- a/platform/linux-generic/include/odp_pool_internal.h +++ b/platform/linux-generic/include/odp_pool_internal.h @@ -113,9 +113,6 @@ int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], odp_buffer_hdr_t *buf_hdr[], int num); void buffer_free_multi(const odp_buffer_t buf[], int num_free);
-uint32_t pool_headroom(odp_pool_t pool); -uint32_t pool_tailroom(odp_pool_t pool); - #ifdef __cplusplus } #endif diff --git a/platform/linux-generic/odp_buffer.c b/platform/linux-generic/odp_buffer.c index eed15c0..b791039 100644 --- a/platform/linux-generic/odp_buffer.c +++ b/platform/linux-generic/odp_buffer.c @@ -28,7 +28,7 @@ void *odp_buffer_addr(odp_buffer_t buf) { odp_buffer_hdr_t *hdr = buf_hdl_to_hdr(buf);
- return hdr->addr[0]; + return hdr->seg[0].data; }
uint32_t odp_buffer_size(odp_buffer_t buf) @@ -56,11 +56,11 @@ int odp_buffer_snprint(char *str, uint32_t n, odp_buffer_t buf) " pool %" PRIu64 "\n", odp_pool_to_u64(hdr->pool_hdl)); len += snprintf(&str[len], n-len, - " addr %p\n", hdr->addr); + " addr %p\n", hdr->seg[0].data); len += snprintf(&str[len], n-len, - " size %" PRIu32 "\n", hdr->size); + " size %" PRIu32 "\n", hdr->size); len += snprintf(&str[len], n-len, - " type %i\n", hdr->type); + " type %i\n", hdr->type);
return len; } diff --git a/platform/linux-generic/odp_crypto.c b/platform/linux-generic/odp_crypto.c index 3ebabb7..7e686ff 100644 --- a/platform/linux-generic/odp_crypto.c +++ b/platform/linux-generic/odp_crypto.c @@ -754,9 +754,13 @@ odp_crypto_operation(odp_crypto_op_params_t *params, ODP_POOL_INVALID != session->output_pool) params->out_pkt = odp_packet_alloc(session->output_pool, odp_packet_len(params->pkt)); + + if (odp_unlikely(ODP_PACKET_INVALID == params->out_pkt)) { + ODP_DBG("Alloc failed.\n"); + return -1; + } + if (params->pkt != params->out_pkt) { - if (odp_unlikely(ODP_PACKET_INVALID == params->out_pkt)) - ODP_ABORT(); (void)odp_packet_copy_from_pkt(params->out_pkt, 0, params->pkt, diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index 2eee775..a5c6ff4 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -20,12 +20,155 @@ #include <stdio.h> #include <inttypes.h>
-/* - * - * Alloc and free - * ******************************************************** - * - */ +static inline odp_packet_t packet_handle(odp_packet_hdr_t *pkt_hdr) +{ + return (odp_packet_t)pkt_hdr->buf_hdr.handle.handle; +} + +static inline odp_buffer_t buffer_handle(odp_packet_hdr_t *pkt_hdr) +{ + return pkt_hdr->buf_hdr.handle.handle; +} + +static inline uint32_t packet_seg_len(odp_packet_hdr_t *pkt_hdr, + uint32_t seg_idx) +{ + return pkt_hdr->buf_hdr.seg[seg_idx].len; +} + +static inline void *packet_seg_data(odp_packet_hdr_t *pkt_hdr, uint32_t seg_idx) +{ + return pkt_hdr->buf_hdr.seg[seg_idx].data; +} + +static inline int packet_last_seg(odp_packet_hdr_t *pkt_hdr) +{ + if (CONFIG_PACKET_MAX_SEGS == 1) + return 0; + else + return pkt_hdr->buf_hdr.segcount - 1; +} + +static inline uint32_t packet_first_seg_len(odp_packet_hdr_t *pkt_hdr) +{ + return packet_seg_len(pkt_hdr, 0); +} + +static inline uint32_t packet_last_seg_len(odp_packet_hdr_t *pkt_hdr) +{ + int last = packet_last_seg(pkt_hdr); + + return packet_seg_len(pkt_hdr, last); +} + +static inline void *packet_data(odp_packet_hdr_t *pkt_hdr) +{ + return pkt_hdr->buf_hdr.seg[0].data; +} + +static inline void *packet_tail(odp_packet_hdr_t *pkt_hdr) +{ + int last = packet_last_seg(pkt_hdr); + uint32_t seg_len = pkt_hdr->buf_hdr.seg[last].len; + + return pkt_hdr->buf_hdr.seg[last].data + seg_len; +} + +static inline void push_head(odp_packet_hdr_t *pkt_hdr, uint32_t len) +{ + pkt_hdr->headroom -= len; + pkt_hdr->frame_len += len; + pkt_hdr->buf_hdr.seg[0].data -= len; + pkt_hdr->buf_hdr.seg[0].len += len; +} + +static inline void pull_head(odp_packet_hdr_t *pkt_hdr, uint32_t len) +{ + pkt_hdr->headroom += len; + pkt_hdr->frame_len -= len; + pkt_hdr->buf_hdr.seg[0].data += len; + pkt_hdr->buf_hdr.seg[0].len -= len; +} + +static inline void push_tail(odp_packet_hdr_t *pkt_hdr, uint32_t len) +{ + int last = packet_last_seg(pkt_hdr); + + pkt_hdr->tailroom -= len; + pkt_hdr->frame_len += len; + pkt_hdr->buf_hdr.seg[last].len += len; +} + +/* Copy all metadata for segmentation modification. Segment data and lengths + * are not copied. 
*/ +static inline void packet_seg_copy_md(odp_packet_hdr_t *dst, + odp_packet_hdr_t *src) +{ + dst->p = src->p; + + /* lengths are not copied: + * .frame_len + * .headroom + * .tailroom + */ + + dst->input = src->input; + dst->dst_queue = src->dst_queue; + dst->flow_hash = src->flow_hash; + dst->timestamp = src->timestamp; + dst->op_result = src->op_result; + + /* buffer header side packet metadata */ + dst->buf_hdr.buf_u64 = src->buf_hdr.buf_u64; + dst->buf_hdr.uarea_addr = src->buf_hdr.uarea_addr; + dst->buf_hdr.uarea_size = src->buf_hdr.uarea_size; + + /* segmentation data is not copied: + * buf_hdr.seg[] + * buf_hdr.segcount + */ +} + +static inline void *packet_map(odp_packet_hdr_t *pkt_hdr, + uint32_t offset, uint32_t *seg_len, int *seg_idx) +{ + void *addr; + uint32_t len; + int seg = 0; + int seg_count = pkt_hdr->buf_hdr.segcount; + + if (odp_unlikely(offset >= pkt_hdr->frame_len)) + return NULL; + + if (odp_likely(CONFIG_PACKET_MAX_SEGS == 1 || seg_count == 1)) { + addr = pkt_hdr->buf_hdr.seg[0].data + offset; + len = pkt_hdr->buf_hdr.seg[0].len - offset; + } else { + int i; + uint32_t seg_start = 0, seg_end = 0; + + for (i = 0; i < seg_count; i++) { + seg_end += pkt_hdr->buf_hdr.seg[i].len; + + if (odp_likely(offset < seg_end)) + break; + + seg_start = seg_end; + } + + addr = pkt_hdr->buf_hdr.seg[i].data + (offset - seg_start); + len = pkt_hdr->buf_hdr.seg[i].len - (offset - seg_start); + seg = i; + } + + if (seg_len) + *seg_len = len; + + if (seg_idx) + *seg_idx = seg; + + return addr; +}
static inline void packet_parse_disable(odp_packet_hdr_t *pkt_hdr) { @@ -48,11 +191,23 @@ void packet_parse_reset(odp_packet_hdr_t *pkt_hdr) /** * Initialize packet */ -static void packet_init(pool_t *pool, odp_packet_hdr_t *pkt_hdr, - size_t size, int parse) +static inline void packet_init(odp_packet_hdr_t *pkt_hdr, uint32_t len, + int parse) { - pkt_hdr->p.parsed_layers = LAYER_NONE; + uint32_t seg_len; + int num = pkt_hdr->buf_hdr.segcount; + + if (odp_likely(CONFIG_PACKET_MAX_SEGS == 1 || num == 1)) { + seg_len = len; + pkt_hdr->buf_hdr.seg[0].len = len; + } else { + seg_len = len - ((num - 1) * CONFIG_PACKET_MAX_SEG_LEN); + + /* Last segment data length */ + pkt_hdr->buf_hdr.seg[num - 1].len = seg_len; + }
+ pkt_hdr->p.parsed_layers = LAYER_NONE; pkt_hdr->p.input_flags.all = 0; pkt_hdr->p.output_flags.all = 0; pkt_hdr->p.error_flags.all = 0; @@ -70,42 +225,260 @@ static void packet_init(pool_t *pool, odp_packet_hdr_t *pkt_hdr, * Packet tailroom is rounded up to fill the last * segment occupied by the allocated length. */ - pkt_hdr->frame_len = size; - pkt_hdr->headroom = pool->headroom; - pkt_hdr->tailroom = pool->data_size - size + pool->tailroom; + pkt_hdr->frame_len = len; + pkt_hdr->headroom = CONFIG_PACKET_HEADROOM; + pkt_hdr->tailroom = CONFIG_PACKET_MAX_SEG_LEN - seg_len + + CONFIG_PACKET_TAILROOM;
pkt_hdr->input = ODP_PKTIO_INVALID; }
-int packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, - odp_packet_t pkt[], int max_num) +static inline odp_packet_hdr_t *init_segments(odp_buffer_t buf[], int num) { - pool_t *pool = pool_entry_from_hdl(pool_hdl); - int num, i; - odp_packet_hdr_t *pkt_hdrs[max_num]; + odp_packet_hdr_t *pkt_hdr; + int i; + + /* First buffer is the packet descriptor */ + pkt_hdr = odp_packet_hdr((odp_packet_t)buf[0]); + + pkt_hdr->buf_hdr.seg[0].data = pkt_hdr->buf_hdr.base_data; + pkt_hdr->buf_hdr.seg[0].len = pkt_hdr->buf_hdr.base_len; + + /* Link segments */ + if (odp_unlikely(CONFIG_PACKET_MAX_SEGS != 1)) { + pkt_hdr->buf_hdr.segcount = num; + + if (odp_unlikely(num > 1)) { + for (i = 1; i < num; i++) { + odp_packet_hdr_t *hdr; + odp_buffer_hdr_t *b_hdr; + + hdr = odp_packet_hdr((odp_packet_t)buf[i]); + b_hdr = &hdr->buf_hdr; + + pkt_hdr->buf_hdr.seg[i].hdr = hdr; + pkt_hdr->buf_hdr.seg[i].data = b_hdr->base_data; + pkt_hdr->buf_hdr.seg[i].len = b_hdr->base_len; + } + } + } + + return pkt_hdr; +} + +/* Calculate the number of segments */ +static inline int num_segments(uint32_t len) +{ + uint32_t max_seg_len; + int num;
- num = buffer_alloc_multi(pool, (odp_buffer_t *)pkt, - (odp_buffer_hdr_t **)pkt_hdrs, max_num); + if (CONFIG_PACKET_MAX_SEGS == 1) + return 1; + + num = 1; + max_seg_len = CONFIG_PACKET_MAX_SEG_LEN; + + if (odp_unlikely(len > max_seg_len)) { + num = len / max_seg_len; + + if (odp_likely((num * max_seg_len) != len)) + num += 1; + } + + return num; +} + +static inline void copy_all_segs(odp_packet_hdr_t *to, odp_packet_hdr_t *from) +{ + int i; + int n = to->buf_hdr.segcount; + int num = from->buf_hdr.segcount;
for (i = 0; i < num; i++) { - odp_packet_hdr_t *pkt_hdr = pkt_hdrs[i]; + to->buf_hdr.seg[n + i].hdr = from->buf_hdr.seg[i].hdr; + to->buf_hdr.seg[n + i].data = from->buf_hdr.seg[i].data; + to->buf_hdr.seg[n + i].len = from->buf_hdr.seg[i].len; + } + + to->buf_hdr.segcount = n + num; +}
- packet_init(pool, pkt_hdr, len, 1 /* do parse */); +static inline void copy_num_segs(odp_packet_hdr_t *to, odp_packet_hdr_t *from, + int num) +{ + int i;
- if (pkt_hdr->tailroom >= pkt_hdr->buf_hdr.segsize) - pull_tail_seg(pkt_hdr); + for (i = 0; i < num; i++) { + to->buf_hdr.seg[i].hdr = from->buf_hdr.seg[num + i].hdr; + to->buf_hdr.seg[i].data = from->buf_hdr.seg[num + i].data; + to->buf_hdr.seg[i].len = from->buf_hdr.seg[num + i].len; }
+ to->buf_hdr.segcount = num; +} + +static inline odp_packet_hdr_t *add_segments(odp_packet_hdr_t *pkt_hdr, + uint32_t len, int head) +{ + pool_t *pool = pool_entry_from_hdl(pkt_hdr->buf_hdr.pool_hdl); + odp_packet_hdr_t *new_hdr; + int num, ret; + uint32_t seg_len, offset; + + num = num_segments(len); + + if ((pkt_hdr->buf_hdr.segcount + num) > CONFIG_PACKET_MAX_SEGS) + return NULL; + + { + odp_buffer_t buf[num]; + + ret = buffer_alloc_multi(pool, buf, NULL, num); + if (odp_unlikely(ret != num)) { + if (ret > 0) + buffer_free_multi(buf, ret); + + return NULL; + } + + new_hdr = init_segments(buf, num); + } + + seg_len = len - ((num - 1) * pool->max_seg_len); + offset = pool->max_seg_len - seg_len; + + if (head) { + /* add into the head*/ + copy_all_segs(new_hdr, pkt_hdr); + + /* adjust first segment length */ + new_hdr->buf_hdr.seg[0].data += offset; + new_hdr->buf_hdr.seg[0].len = seg_len; + + packet_seg_copy_md(new_hdr, pkt_hdr); + new_hdr->frame_len = pkt_hdr->frame_len + len; + new_hdr->headroom = pool->headroom + offset; + new_hdr->tailroom = pkt_hdr->tailroom; + + pkt_hdr = new_hdr; + } else { + int last; + + /* add into the tail */ + copy_all_segs(pkt_hdr, new_hdr); + + /* adjust last segment length */ + last = packet_last_seg(pkt_hdr); + pkt_hdr->buf_hdr.seg[last].len = seg_len; + + pkt_hdr->frame_len += len; + pkt_hdr->tailroom = pool->tailroom + offset; + } + + return pkt_hdr; +} + +static inline odp_packet_hdr_t *free_segments(odp_packet_hdr_t *pkt_hdr, + int num, uint32_t free_len, + uint32_t pull_len, int head) +{ + int i; + odp_packet_hdr_t *new_hdr; + odp_buffer_t buf[num]; + int n = pkt_hdr->buf_hdr.segcount - num; + + if (head) { + for (i = 0; i < num; i++) + buf[i] = buffer_handle(pkt_hdr->buf_hdr.seg[i].hdr); + + /* First remaining segment is the new packet descriptor */ + new_hdr = pkt_hdr->buf_hdr.seg[num].hdr; + copy_num_segs(new_hdr, pkt_hdr, n); + packet_seg_copy_md(new_hdr, pkt_hdr); + + /* Tailroom not changed */ + new_hdr->tailroom = pkt_hdr->tailroom; + /* No headroom in non-first segments */ + new_hdr->headroom = 0; + new_hdr->frame_len = pkt_hdr->frame_len - free_len; + + pull_head(new_hdr, pull_len); + + pkt_hdr = new_hdr; + } else { + for (i = 0; i < num; i++) + buf[i] = buffer_handle(pkt_hdr->buf_hdr.seg[n + i].hdr); + + /* Head segment remains, no need to copy or update majority + * of the metadata. 
*/ + pkt_hdr->buf_hdr.segcount = n; + pkt_hdr->frame_len -= free_len; + pkt_hdr->tailroom = pkt_hdr->buf_hdr.buf_end - + (uint8_t *)packet_tail(pkt_hdr); + + pull_tail(pkt_hdr, pull_len); + } + + buffer_free_multi(buf, num); + + return pkt_hdr; +} + +static inline int packet_alloc(pool_t *pool, uint32_t len, int max_pkt, + int num_seg, odp_packet_t *pkt, int parse) +{ + int num_buf, i; + int num = max_pkt; + int max_buf = max_pkt * num_seg; + odp_buffer_t buf[max_buf]; + + num_buf = buffer_alloc_multi(pool, buf, NULL, max_buf); + + /* Failed to allocate all segments */ + if (odp_unlikely(num_buf != max_buf)) { + int num_free; + + num = num_buf / num_seg; + num_free = num_buf - (num * num_seg); + + if (num_free > 0) + buffer_free_multi(&buf[num_buf - num_free], num_free); + + if (num == 0) + return 0; + } + + for (i = 0; i < num; i++) { + odp_packet_hdr_t *pkt_hdr; + + /* First buffer is the packet descriptor */ + pkt[i] = (odp_packet_t)buf[i * num_seg]; + pkt_hdr = init_segments(&buf[i * num_seg], num_seg); + + packet_init(pkt_hdr, len, parse); + } + + return num; +} + +int packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, + odp_packet_t pkt[], int max_num) +{ + pool_t *pool = pool_entry_from_hdl(pool_hdl); + int num, num_seg; + + num_seg = num_segments(len); + num = packet_alloc(pool, len, max_num, num_seg, pkt, 1); + return num; }
odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t len) { pool_t *pool = pool_entry_from_hdl(pool_hdl); - size_t pkt_size = len ? len : pool->data_size; odp_packet_t pkt; - odp_packet_hdr_t *pkt_hdr; - int ret; + int num, num_seg; + int zero_len = 0;
if (odp_unlikely(pool->params.type != ODP_POOL_PACKET)) { __odp_errno = EINVAL; @@ -115,28 +488,32 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t len) if (odp_unlikely(len > pool->max_len)) return ODP_PACKET_INVALID;
- ret = buffer_alloc_multi(pool, (odp_buffer_t *)&pkt, NULL, 1); - if (ret != 1) + if (odp_unlikely(len == 0)) { + len = pool->data_size; + zero_len = 1; + } + + num_seg = num_segments(len); + num = packet_alloc(pool, len, 1, num_seg, &pkt, 0); + + if (odp_unlikely(num == 0)) return ODP_PACKET_INVALID;
- pkt_hdr = odp_packet_hdr(pkt); - packet_init(pool, pkt_hdr, pkt_size, 0 /* do not parse */); - if (len == 0) - pull_tail(pkt_hdr, pkt_size); + if (odp_unlikely(zero_len)) { + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
- if (pkt_hdr->tailroom >= pkt_hdr->buf_hdr.segsize) - pull_tail_seg(pkt_hdr); + pull_tail(pkt_hdr, len); + }
return pkt; }
int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, - odp_packet_t pkt[], int num) + odp_packet_t pkt[], int max_num) { pool_t *pool = pool_entry_from_hdl(pool_hdl); - size_t pkt_size = len ? len : pool->data_size; - int count, i; - odp_packet_hdr_t *pkt_hdrs[num]; + int num, num_seg; + int zero_len = 0;
if (odp_unlikely(pool->params.type != ODP_POOL_PACKET)) { __odp_errno = EINVAL; @@ -146,31 +523,75 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, if (odp_unlikely(len > pool->max_len)) return -1;
- count = buffer_alloc_multi(pool, (odp_buffer_t *)pkt, - (odp_buffer_hdr_t **)pkt_hdrs, num); + if (odp_unlikely(len == 0)) { + len = pool->data_size; + zero_len = 1; + } + + num_seg = num_segments(len); + num = packet_alloc(pool, len, max_num, num_seg, pkt, 0);
- for (i = 0; i < count; ++i) { - odp_packet_hdr_t *pkt_hdr = pkt_hdrs[i]; + if (odp_unlikely(zero_len)) { + int i;
- packet_init(pool, pkt_hdr, pkt_size, 0 /* do not parse */); - if (len == 0) - pull_tail(pkt_hdr, pkt_size); + for (i = 0; i < num; i++) { + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt[i]);
- if (pkt_hdr->tailroom >= pkt_hdr->buf_hdr.segsize) - pull_tail_seg(pkt_hdr); + pull_tail(pkt_hdr, len); + } }
- return count; + return num; }
void odp_packet_free(odp_packet_t pkt) { - buffer_free_multi((odp_buffer_t *)&pkt, 1); + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + int num_seg = pkt_hdr->buf_hdr.segcount; + + if (odp_likely(CONFIG_PACKET_MAX_SEGS == 1 || num_seg == 1)) { + buffer_free_multi((odp_buffer_t *)&pkt, 1); + } else { + odp_buffer_t buf[num_seg]; + int i; + + buf[0] = (odp_buffer_t)pkt; + + for (i = 1; i < num_seg; i++) + buf[i] = buffer_handle(pkt_hdr->buf_hdr.seg[i].hdr); + + buffer_free_multi(buf, num_seg); + } }
void odp_packet_free_multi(const odp_packet_t pkt[], int num) { - buffer_free_multi((const odp_buffer_t * const)pkt, num); + if (CONFIG_PACKET_MAX_SEGS == 1) { + buffer_free_multi((const odp_buffer_t * const)pkt, num); + } else { + odp_buffer_t buf[num * CONFIG_PACKET_MAX_SEGS]; + int i, j; + int bufs = 0; + + for (i = 0; i < num; i++) { + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt[i]); + int num_seg = pkt_hdr->buf_hdr.segcount; + odp_buffer_hdr_t *buf_hdr = &pkt_hdr->buf_hdr; + + buf[bufs] = (odp_buffer_t)pkt[i]; + bufs++; + + if (odp_likely(num_seg == 1)) + continue; + + for (j = 1; j < num_seg; j++) { + buf[bufs] = buffer_handle(buf_hdr->seg[j].hdr); + bufs++; + } + } + + buffer_free_multi(buf, bufs); + } }
int odp_packet_reset(odp_packet_t pkt, uint32_t len) @@ -181,7 +602,7 @@ int odp_packet_reset(odp_packet_t pkt, uint32_t len) if (len > pool->headroom + pool->data_size + pool->tailroom) return -1;
- packet_init(pool, pkt_hdr, len, 0); + packet_init(pkt_hdr, len, 0);
return 0; } @@ -217,7 +638,7 @@ void *odp_packet_head(odp_packet_t pkt) { odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
- return buffer_map(&pkt_hdr->buf_hdr, 0, NULL, 0); + return pkt_hdr->buf_hdr.seg[0].data - pkt_hdr->headroom; }
uint32_t odp_packet_buf_len(odp_packet_t pkt) @@ -231,17 +652,14 @@ void *odp_packet_data(odp_packet_t pkt) { odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
- return packet_map(pkt_hdr, 0, NULL); + return packet_data(pkt_hdr); }
uint32_t odp_packet_seg_len(odp_packet_t pkt) { odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); - uint32_t seglen;
- /* Call returns length of 1st data segment */ - packet_map(pkt_hdr, 0, &seglen); - return seglen; + return packet_first_seg_len(pkt_hdr); }
uint32_t odp_packet_len(odp_packet_t pkt) @@ -263,7 +681,7 @@ void *odp_packet_tail(odp_packet_t pkt) { odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
- return packet_map(pkt_hdr, pkt_hdr->frame_len, NULL); + return packet_tail(pkt_hdr); }
void *odp_packet_push_head(odp_packet_t pkt, uint32_t len) @@ -274,21 +692,38 @@ void *odp_packet_push_head(odp_packet_t pkt, uint32_t len) return NULL;
push_head(pkt_hdr, len); - return packet_map(pkt_hdr, 0, NULL); + return packet_data(pkt_hdr); }
int odp_packet_extend_head(odp_packet_t *pkt, uint32_t len, void **data_ptr, uint32_t *seg_len) { odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(*pkt); + odp_packet_hdr_t *new_hdr; + uint32_t headroom = pkt_hdr->headroom;
- if (len > pkt_hdr->headroom && push_head_seg(pkt_hdr, len)) - return -1; + if (len > headroom) { + push_head(pkt_hdr, headroom); + new_hdr = add_segments(pkt_hdr, len - headroom, 1);
- push_head(pkt_hdr, len); + if (new_hdr == NULL) { + /* segment alloc failed, rollback changes */ + pull_head(pkt_hdr, headroom); + return -1; + } + + *pkt = packet_handle(new_hdr); + pkt_hdr = new_hdr; + } else { + push_head(pkt_hdr, len); + }
if (data_ptr) - *data_ptr = packet_map(pkt_hdr, 0, seg_len); + *data_ptr = packet_data(pkt_hdr); + + if (seg_len) + *seg_len = packet_first_seg_len(pkt_hdr); + return 0; }
@@ -300,51 +735,82 @@ void *odp_packet_pull_head(odp_packet_t pkt, uint32_t len) return NULL;
pull_head(pkt_hdr, len); - return packet_map(pkt_hdr, 0, NULL); + return packet_data(pkt_hdr); }
int odp_packet_trunc_head(odp_packet_t *pkt, uint32_t len, - void **data_ptr, uint32_t *seg_len) + void **data_ptr, uint32_t *seg_len_out) { odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(*pkt); + uint32_t seg_len = packet_first_seg_len(pkt_hdr);
if (len > pkt_hdr->frame_len) return -1;
- pull_head(pkt_hdr, len); - if (pkt_hdr->headroom >= pkt_hdr->buf_hdr.segsize) - pull_head_seg(pkt_hdr); + if (len < seg_len) { + pull_head(pkt_hdr, len); + } else if (CONFIG_PACKET_MAX_SEGS != 1) { + int num = 0; + uint32_t pull_len = 0; + + while (seg_len <= len) { + pull_len = len - seg_len; + num++; + seg_len += packet_seg_len(pkt_hdr, num); + } + + pkt_hdr = free_segments(pkt_hdr, num, len - pull_len, + pull_len, 1); + *pkt = packet_handle(pkt_hdr); + }
if (data_ptr) - *data_ptr = packet_map(pkt_hdr, 0, seg_len); + *data_ptr = packet_data(pkt_hdr); + + if (seg_len_out) + *seg_len_out = packet_first_seg_len(pkt_hdr); + return 0; }
void *odp_packet_push_tail(odp_packet_t pkt, uint32_t len) { odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); - uint32_t origin = pkt_hdr->frame_len; + void *old_tail;
if (len > pkt_hdr->tailroom) return NULL;
+ old_tail = packet_tail(pkt_hdr); push_tail(pkt_hdr, len); - return packet_map(pkt_hdr, origin, NULL); + + return old_tail; }
int odp_packet_extend_tail(odp_packet_t *pkt, uint32_t len, void **data_ptr, uint32_t *seg_len) { odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(*pkt); - uint32_t origin = pkt_hdr->frame_len; + void *ret; + uint32_t tailroom = pkt_hdr->tailroom; + uint32_t tail_off = pkt_hdr->frame_len;
- if (len > pkt_hdr->tailroom && push_tail_seg(pkt_hdr, len)) - return -1; + if (len > tailroom) { + push_tail(pkt_hdr, tailroom); + ret = add_segments(pkt_hdr, len - tailroom, 0);
- push_tail(pkt_hdr, len); + if (ret == NULL) { + /* segment alloc failed, rollback changes */ + pull_tail(pkt_hdr, tailroom); + return -1; + } + } else { + push_tail(pkt_hdr, len); + }
if (data_ptr) - *data_ptr = packet_map(pkt_hdr, origin, seg_len); + *data_ptr = packet_map(pkt_hdr, tail_off, seg_len, NULL); + return 0; }
@@ -352,27 +818,45 @@ void *odp_packet_pull_tail(odp_packet_t pkt, uint32_t len) { odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
- if (len > pkt_hdr->frame_len) + if (len > packet_last_seg_len(pkt_hdr)) return NULL;
pull_tail(pkt_hdr, len); - return packet_map(pkt_hdr, pkt_hdr->frame_len, NULL); + + return packet_tail(pkt_hdr); }
int odp_packet_trunc_tail(odp_packet_t *pkt, uint32_t len, void **tail_ptr, uint32_t *tailroom) { + int last; + uint32_t seg_len; odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(*pkt);
if (len > pkt_hdr->frame_len) return -1;
- pull_tail(pkt_hdr, len); - if (pkt_hdr->tailroom >= pkt_hdr->buf_hdr.segsize) - pull_tail_seg(pkt_hdr); + last = packet_last_seg(pkt_hdr); + seg_len = packet_seg_len(pkt_hdr, last); + + if (len < seg_len) { + pull_tail(pkt_hdr, len); + } else if (CONFIG_PACKET_MAX_SEGS != 1) { + int num = 0; + uint32_t pull_len = 0; + + while (seg_len <= len) { + pull_len = len - seg_len; + num++; + seg_len += packet_seg_len(pkt_hdr, last - num); + } + + free_segments(pkt_hdr, num, len - pull_len, pull_len, 0); + }
if (tail_ptr) - *tail_ptr = packet_map(pkt_hdr, pkt_hdr->frame_len, NULL); + *tail_ptr = packet_tail(pkt_hdr); + if (tailroom) *tailroom = pkt_hdr->tailroom; return 0; @@ -381,11 +865,12 @@ int odp_packet_trunc_tail(odp_packet_t *pkt, uint32_t len, void *odp_packet_offset(odp_packet_t pkt, uint32_t offset, uint32_t *len, odp_packet_seg_t *seg) { + int seg_idx; odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); - void *addr = packet_map(pkt_hdr, offset, len); + void *addr = packet_map(pkt_hdr, offset, len, &seg_idx);
if (addr != NULL && seg != NULL) - *seg = (odp_packet_seg_t)pkt; + *seg = seg_idx;
return addr; } @@ -445,7 +930,7 @@ void *odp_packet_l2_ptr(odp_packet_t pkt, uint32_t *len)
if (!packet_hdr_has_l2(pkt_hdr)) return NULL; - return packet_map(pkt_hdr, pkt_hdr->p.l2_offset, len); + return packet_map(pkt_hdr, pkt_hdr->p.l2_offset, len, NULL); }
uint32_t odp_packet_l2_offset(odp_packet_t pkt) @@ -475,7 +960,7 @@ void *odp_packet_l3_ptr(odp_packet_t pkt, uint32_t *len)
if (pkt_hdr->p.parsed_layers < LAYER_L3) packet_parse_layer(pkt_hdr, LAYER_L3); - return packet_map(pkt_hdr, pkt_hdr->p.l3_offset, len); + return packet_map(pkt_hdr, pkt_hdr->p.l3_offset, len, NULL); }
uint32_t odp_packet_l3_offset(odp_packet_t pkt) @@ -506,7 +991,7 @@ void *odp_packet_l4_ptr(odp_packet_t pkt, uint32_t *len)
if (pkt_hdr->p.parsed_layers < LAYER_L4) packet_parse_layer(pkt_hdr, LAYER_L4); - return packet_map(pkt_hdr, pkt_hdr->p.l4_offset, len); + return packet_map(pkt_hdr, pkt_hdr->p.l4_offset, len, NULL); }
uint32_t odp_packet_l4_offset(odp_packet_t pkt) @@ -568,29 +1053,33 @@ int odp_packet_is_segmented(odp_packet_t pkt)
int odp_packet_num_segs(odp_packet_t pkt) { - return odp_packet_hdr(pkt)->buf_hdr.segcount; + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + + return pkt_hdr->buf_hdr.segcount; }
odp_packet_seg_t odp_packet_first_seg(odp_packet_t pkt) { - return (odp_packet_seg_t)pkt; + (void)pkt; + + return 0; }
odp_packet_seg_t odp_packet_last_seg(odp_packet_t pkt) { - (void)pkt; + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
- /* Only one segment */ - return (odp_packet_seg_t)pkt; + return packet_last_seg(pkt_hdr); }
odp_packet_seg_t odp_packet_next_seg(odp_packet_t pkt, odp_packet_seg_t seg) { - (void)pkt; - (void)seg; + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
- /* Only one segment */ - return ODP_PACKET_SEG_INVALID; + if (odp_unlikely(seg >= (odp_packet_seg_t)packet_last_seg(pkt_hdr))) + return ODP_PACKET_SEG_INVALID; + + return seg + 1; }
/* @@ -602,18 +1091,22 @@ odp_packet_seg_t odp_packet_next_seg(odp_packet_t pkt, odp_packet_seg_t seg)
void *odp_packet_seg_data(odp_packet_t pkt, odp_packet_seg_t seg) { - (void)seg; + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
- /* Only one segment */ - return odp_packet_data(pkt); + if (odp_unlikely(seg >= pkt_hdr->buf_hdr.segcount)) + return NULL; + + return packet_seg_data(pkt_hdr, seg); }
uint32_t odp_packet_seg_data_len(odp_packet_t pkt, odp_packet_seg_t seg) { - (void)seg; + odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt);
- /* Only one segment */ - return odp_packet_seg_len(pkt); + if (odp_unlikely(seg >= pkt_hdr->buf_hdr.segcount)) + return 0; + + return packet_seg_len(pkt_hdr, seg); }
/* @@ -688,7 +1181,7 @@ int odp_packet_align(odp_packet_t *pkt, uint32_t offset, uint32_t len, uint32_t shift; uint32_t seglen = 0; /* GCC */ odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(*pkt); - void *addr = packet_map(pkt_hdr, offset, &seglen); + void *addr = packet_map(pkt_hdr, offset, &seglen, NULL); uint64_t uaddr = (uint64_t)(uintptr_t)addr; uint64_t misalign;
@@ -733,6 +1226,7 @@ int odp_packet_concat(odp_packet_t *dst, odp_packet_t src) src, 0, src_len); if (src != *dst) odp_packet_free(src); + return 0; }
@@ -808,7 +1302,7 @@ int odp_packet_copy_to_mem(odp_packet_t pkt, uint32_t offset, return -1;
while (len > 0) { - mapaddr = packet_map(pkt_hdr, offset, &seglen); + mapaddr = packet_map(pkt_hdr, offset, &seglen, NULL); cpylen = len > seglen ? seglen : len; memcpy(dstaddr, mapaddr, cpylen); offset += cpylen; @@ -832,7 +1326,7 @@ int odp_packet_copy_from_mem(odp_packet_t pkt, uint32_t offset, return -1;
while (len > 0) { - mapaddr = packet_map(pkt_hdr, offset, &seglen); + mapaddr = packet_map(pkt_hdr, offset, &seglen, NULL); cpylen = len > seglen ? seglen : len; memcpy(mapaddr, srcaddr, cpylen); offset += cpylen; @@ -878,8 +1372,8 @@ int odp_packet_copy_from_pkt(odp_packet_t dst, uint32_t dst_offset, }
while (len > 0) { - dst_map = packet_map(dst_hdr, dst_offset, &dst_seglen); - src_map = packet_map(src_hdr, src_offset, &src_seglen); + dst_map = packet_map(dst_hdr, dst_offset, &dst_seglen, NULL); + src_map = packet_map(src_hdr, src_offset, &src_seglen, NULL);
minseg = dst_seglen > src_seglen ? src_seglen : dst_seglen; cpylen = len > minseg ? minseg : len; @@ -1364,8 +1858,8 @@ parse_exit: */ int packet_parse_layer(odp_packet_hdr_t *pkt_hdr, layer_t layer) { - uint32_t seg_len; - void *base = packet_map(pkt_hdr, 0, &seg_len); + uint32_t seg_len = packet_first_seg_len(pkt_hdr); + void *base = packet_data(pkt_hdr);
return packet_parse_common(&pkt_hdr->p, base, pkt_hdr->frame_len, seg_len, layer); diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index 364df97..7c462e5 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -32,6 +32,9 @@ ODP_STATIC_ASSERT(CONFIG_POOL_CACHE_SIZE > (2 * CACHE_BURST), "cache_burst_size_too_large_compared_to_cache_size");
+ODP_STATIC_ASSERT(CONFIG_PACKET_SEG_LEN_MIN >= 256, + "ODP Segment size must be a minimum of 256 bytes"); + /* Thread local variables */ typedef struct pool_local_t { pool_cache_t *cache[ODP_CONFIG_POOLS]; @@ -46,6 +49,14 @@ static inline odp_pool_t pool_index_to_handle(uint32_t pool_idx) return _odp_cast_scalar(odp_pool_t, pool_idx); }
+static inline uint32_t pool_id_from_buf(odp_buffer_t buf) +{ + odp_buffer_bits_t handle; + + handle.handle = buf; + return handle.pool_id; +} + int odp_pool_init_global(void) { uint32_t i; @@ -198,7 +209,7 @@ static void init_buffers(pool_t *pool) ring_t *ring; uint32_t mask; int type; - uint32_t size; + uint32_t seg_size;
ring = &pool->ring.hdr; mask = pool->ring_mask; @@ -223,12 +234,12 @@ static void init_buffers(pool_t *pool) while (((uintptr_t)&data[offset]) % pool->align != 0) offset++;
- memset(buf_hdr, 0, sizeof(odp_buffer_hdr_t)); + memset(buf_hdr, 0, (uintptr_t)data - (uintptr_t)buf_hdr);
- size = pool->headroom + pool->data_size + pool->tailroom; + seg_size = pool->headroom + pool->data_size + pool->tailroom;
/* Initialize buffer metadata */ - buf_hdr->size = size; + buf_hdr->size = seg_size; buf_hdr->type = type; buf_hdr->event_type = type; buf_hdr->pool_hdl = pool->pool_hdl; @@ -236,10 +247,18 @@ static void init_buffers(pool_t *pool) /* Show user requested size through API */ buf_hdr->uarea_size = pool->params.pkt.uarea_size; buf_hdr->segcount = 1; - buf_hdr->segsize = size; + buf_hdr->segsize = seg_size;
/* Pointer to data start (of the first segment) */ - buf_hdr->addr[0] = &data[offset]; + buf_hdr->seg[0].hdr = buf_hdr; + buf_hdr->seg[0].data = &data[offset]; + buf_hdr->seg[0].len = pool->data_size; + + /* Store base values for fast init */ + buf_hdr->base_data = buf_hdr->seg[0].data; + buf_hdr->base_len = buf_hdr->seg[0].len; + buf_hdr->buf_end = &data[offset + pool->data_size + + pool->tailroom];
buf_hdl = form_buffer_handle(pool->pool_idx, i); buf_hdr->handle.handle = buf_hdl; @@ -296,25 +315,13 @@ static odp_pool_t pool_create(const char *name, odp_pool_param_t *params, break;
case ODP_POOL_PACKET: - headroom = ODP_CONFIG_PACKET_HEADROOM; - tailroom = ODP_CONFIG_PACKET_TAILROOM; - num = params->pkt.num; - uarea_size = params->pkt.uarea_size; - - data_size = ODP_CONFIG_PACKET_SEG_LEN_MAX; - - if (data_size < ODP_CONFIG_PACKET_SEG_LEN_MIN) - data_size = ODP_CONFIG_PACKET_SEG_LEN_MIN; - - if (data_size > ODP_CONFIG_PACKET_SEG_LEN_MAX) { - ODP_ERR("Too large seg len requirement"); - return ODP_POOL_INVALID; - } - - max_seg_len = ODP_CONFIG_PACKET_SEG_LEN_MAX - - ODP_CONFIG_PACKET_HEADROOM - - ODP_CONFIG_PACKET_TAILROOM; - max_len = ODP_CONFIG_PACKET_MAX_SEGS * max_seg_len; + headroom = CONFIG_PACKET_HEADROOM; + tailroom = CONFIG_PACKET_TAILROOM; + num = params->pkt.num; + uarea_size = params->pkt.uarea_size; + data_size = CONFIG_PACKET_MAX_SEG_LEN; + max_seg_len = CONFIG_PACKET_MAX_SEG_LEN; + max_len = CONFIG_PACKET_MAX_SEGS * max_seg_len; break;
case ODP_POOL_TIMEOUT: @@ -470,31 +477,6 @@ void _odp_buffer_event_type_set(odp_buffer_t buf, int ev) buf_hdl_to_hdr(buf)->event_type = ev; }
-void *buffer_map(odp_buffer_hdr_t *buf, - uint32_t offset, - uint32_t *seglen, - uint32_t limit) -{ - int seg_index; - int seg_offset; - - if (odp_likely(offset < buf->segsize)) { - seg_index = 0; - seg_offset = offset; - } else { - ODP_ERR("\nSEGMENTS NOT SUPPORTED\n"); - return NULL; - } - - if (seglen != NULL) { - uint32_t buf_left = limit - offset; - *seglen = seg_offset + buf_left <= buf->segsize ? - buf_left : buf->segsize - seg_offset; - } - - return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]); -} - odp_pool_t odp_pool_lookup(const char *name) { uint32_t i; @@ -727,9 +709,7 @@ void odp_buffer_free_multi(const odp_buffer_t buf[], int num)
int odp_pool_capability(odp_pool_capability_t *capa) { - uint32_t max_len = ODP_CONFIG_PACKET_SEG_LEN_MAX - - ODP_CONFIG_PACKET_HEADROOM - - ODP_CONFIG_PACKET_TAILROOM; + uint32_t max_seg_len = CONFIG_PACKET_MAX_SEG_LEN;
memset(capa, 0, sizeof(odp_pool_capability_t));
@@ -743,13 +723,13 @@ int odp_pool_capability(odp_pool_capability_t *capa)
/* Packet pools */ capa->pkt.max_pools = ODP_CONFIG_POOLS; - capa->pkt.max_len = ODP_CONFIG_PACKET_MAX_SEGS * max_len; + capa->pkt.max_len = CONFIG_PACKET_MAX_SEGS * max_seg_len; capa->pkt.max_num = CONFIG_POOL_MAX_NUM; - capa->pkt.min_headroom = ODP_CONFIG_PACKET_HEADROOM; - capa->pkt.min_tailroom = ODP_CONFIG_PACKET_TAILROOM; - capa->pkt.max_segs_per_pkt = ODP_CONFIG_PACKET_MAX_SEGS; - capa->pkt.min_seg_len = max_len; - capa->pkt.max_seg_len = max_len; + capa->pkt.min_headroom = CONFIG_PACKET_HEADROOM; + capa->pkt.min_tailroom = CONFIG_PACKET_TAILROOM; + capa->pkt.max_segs_per_pkt = CONFIG_PACKET_MAX_SEGS; + capa->pkt.min_seg_len = max_seg_len; + capa->pkt.max_seg_len = max_seg_len; capa->pkt.max_uarea_size = 0;
/* Timeout pools */ @@ -765,7 +745,7 @@ void odp_pool_print(odp_pool_t pool_hdl)
pool = pool_entry_from_hdl(pool_hdl);
- printf("Pool info\n"); + printf("\nPool info\n"); printf("---------\n"); printf(" pool %" PRIu64 "\n", odp_pool_to_u64(pool->pool_hdl)); @@ -812,19 +792,6 @@ uint64_t odp_pool_to_u64(odp_pool_t hdl) return _odp_pri(hdl); }
-int seg_alloc_head(odp_buffer_hdr_t *buf_hdr, int segcount) -{ - (void)buf_hdr; - (void)segcount; - return 0; -} - -void seg_free_head(odp_buffer_hdr_t *buf_hdr, int segcount) -{ - (void)buf_hdr; - (void)segcount; -} - int seg_alloc_tail(odp_buffer_hdr_t *buf_hdr, int segcount) { (void)buf_hdr; @@ -855,13 +822,3 @@ int odp_buffer_is_valid(odp_buffer_t buf)
return 1; } - -uint32_t pool_headroom(odp_pool_t pool) -{ - return pool_entry_from_hdl(pool)->headroom; -} - -uint32_t pool_tailroom(odp_pool_t pool) -{ - return pool_entry_from_hdl(pool)->tailroom; -} diff --git a/platform/linux-generic/pktio/netmap.c b/platform/linux-generic/pktio/netmap.c index cf67741..8eb8145 100644 --- a/platform/linux-generic/pktio/netmap.c +++ b/platform/linux-generic/pktio/netmap.c @@ -345,9 +345,7 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED, pktio_entry_t *pktio_entry, pkt_nm->pool = pool;
/* max frame len taking into account the l2-offset */ - pkt_nm->max_frame_len = ODP_CONFIG_PACKET_BUF_LEN_MAX - - pool_headroom(pool) - - pool_tailroom(pool); + pkt_nm->max_frame_len = CONFIG_PACKET_MAX_SEG_LEN;
/* allow interface to be opened with or without the 'netmap:' prefix */ prefix = "netmap:"; diff --git a/platform/linux-generic/pktio/socket.c b/platform/linux-generic/pktio/socket.c index ab25aab..9fe4a7e 100644 --- a/platform/linux-generic/pktio/socket.c +++ b/platform/linux-generic/pktio/socket.c @@ -46,7 +46,8 @@ #include <protocols/eth.h> #include <protocols/ip.h>
-#define MAX_SEGS ODP_CONFIG_PACKET_MAX_SEGS +#define MAX_SEGS CONFIG_PACKET_MAX_SEGS +#define PACKET_JUMBO_LEN (9 * 1024)
static int disable_pktio; /** !0 this pktio disabled, 0 enabled */
commit 7bcd45812c020a9e67cf4d848e5bbef1384f58af Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:36 2016 +0200
test: validation: packet: fix bugs in tailroom and concat tests
Tailroom test did not call odp_packet_extend_tail() since it pushed too few bytes to the tail. Corrected the test to push 100 bytes more than the available tailroom, so that the extend path is actually exercised.
Concat test passed the same packet as both the src and dst packet. There is no valid use case for concatenating a packet to itself (it forms a loop). Corrected the test to concatenate two copies of the same packet.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
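A minimal sketch of the corrected concat pattern, kept separate from the test diff below. It assumes only the public packet API calls that appear in the patch (odp_packet_copy(), odp_packet_concat(), odp_packet_len()); the helper name and the error handling are illustrative, not part of the patch.

    #include <odp_api.h>

    /* Sketch only: concatenate two independent copies of a packet,
     * never a packet to itself. 'ref' is assumed to be a valid packet. */
    static int concat_copies_sketch(odp_packet_t ref)
    {
        odp_pool_t pool = odp_packet_pool(ref);
        uint32_t ref_len = odp_packet_len(ref);
        odp_packet_t dst = odp_packet_copy(ref, pool);
        odp_packet_t src = odp_packet_copy(ref, pool);

        if (dst == ODP_PACKET_INVALID || src == ODP_PACKET_INVALID) {
            if (dst != ODP_PACKET_INVALID)
                odp_packet_free(dst);
            if (src != ODP_PACKET_INVALID)
                odp_packet_free(src);
            return -1;
        }

        /* On success 'dst' owns the data of 'src' as well */
        if (odp_packet_concat(&dst, src) < 0) {
            odp_packet_free(dst);
            odp_packet_free(src);
            return -1;
        }

        if (odp_packet_len(dst) != 2 * ref_len) {
            odp_packet_free(dst);
            return -1;
        }

        odp_packet_free(dst);
        return 0;
    }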
diff --git a/test/common_plat/validation/api/packet/packet.c b/test/common_plat/validation/api/packet/packet.c index 87a0662..b082add 100644 --- a/test/common_plat/validation/api/packet/packet.c +++ b/test/common_plat/validation/api/packet/packet.c @@ -665,9 +665,10 @@ void packet_test_tailroom(void) _verify_tailroom_shift(&pkt, 0);
if (segmentation_supported) { - _verify_tailroom_shift(&pkt, pull_val); + push_val = room + 100; + _verify_tailroom_shift(&pkt, push_val); _verify_tailroom_shift(&pkt, 0); - _verify_tailroom_shift(&pkt, -pull_val); + _verify_tailroom_shift(&pkt, -push_val); }
odp_packet_free(pkt); @@ -1157,12 +1158,18 @@ void packet_test_concatsplit(void) odp_packet_t pkt, pkt2; uint32_t pkt_len; odp_packet_t splits[4]; + odp_pool_t pool;
- pkt = odp_packet_copy(test_packet, odp_packet_pool(test_packet)); + pool = odp_packet_pool(test_packet); + pkt = odp_packet_copy(test_packet, pool); + pkt2 = odp_packet_copy(test_packet, pool); pkt_len = odp_packet_len(test_packet); CU_ASSERT_FATAL(pkt != ODP_PACKET_INVALID); + CU_ASSERT_FATAL(pkt2 != ODP_PACKET_INVALID); + CU_ASSERT(pkt_len == odp_packet_len(pkt)); + CU_ASSERT(pkt_len == odp_packet_len(pkt2));
- CU_ASSERT(odp_packet_concat(&pkt, pkt) == 0); + CU_ASSERT(odp_packet_concat(&pkt, pkt2) == 0); CU_ASSERT(odp_packet_len(pkt) == pkt_len * 2); _packet_compare_offset(pkt, 0, pkt, pkt_len, pkt_len);
commit 9e95d6a1a9025bfafb846b4d805b1dc146657a10 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:35 2016 +0200
test: correctly initialize pool parameters
Use odp_pool_param_init() to initialize pool parameters. Also, the pktio test must use the pool capability to determine the maximum packet segment length.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
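A short sketch of the initialization pattern this commit enforces, using only the public pool API that appears in the hunks below; the pool name and the PKT_LEN/PKT_NUM values are illustrative.

    #include <odp_api.h>

    #define PKT_LEN 1500    /* illustrative */
    #define PKT_NUM 128     /* illustrative */

    /* Sketch only: init params with odp_pool_param_init() instead of
     * memset(), and clamp seg_len to the reported capability. */
    static odp_pool_t create_pkt_pool_sketch(void)
    {
        odp_pool_capability_t capa;
        odp_pool_param_t params;
        uint32_t seg_len = PKT_LEN;

        if (odp_pool_capability(&capa))
            return ODP_POOL_INVALID;

        if (seg_len > capa.pkt.max_seg_len)
            seg_len = capa.pkt.max_seg_len;

        odp_pool_param_init(&params);
        params.type        = ODP_POOL_PACKET;
        params.pkt.len     = PKT_LEN;
        params.pkt.seg_len = seg_len;
        params.pkt.num     = PKT_NUM;

        return odp_pool_create("pkt_pool_sketch", &params);
    }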
diff --git a/example/generator/odp_generator.c b/example/generator/odp_generator.c index 1c64765..ccd47f6 100644 --- a/example/generator/odp_generator.c +++ b/example/generator/odp_generator.c @@ -732,7 +732,7 @@ int main(int argc, char *argv[]) odp_timer_pool_start();
/* Create timeout pool */ - memset(¶ms, 0, sizeof(params)); + odp_pool_param_init(¶ms); params.tmo.num = tparams.num_timers; /* One timeout per timer */ params.type = ODP_POOL_TIMEOUT;
diff --git a/test/common_plat/validation/api/crypto/crypto.c b/test/common_plat/validation/api/crypto/crypto.c index 8946cde..9c9a00d 100644 --- a/test/common_plat/validation/api/crypto/crypto.c +++ b/test/common_plat/validation/api/crypto/crypto.c @@ -43,7 +43,7 @@ int crypto_init(odp_instance_t *inst) return -1; }
- memset(¶ms, 0, sizeof(params)); + odp_pool_param_init(¶ms); params.pkt.seg_len = SHM_PKT_POOL_BUF_SIZE; params.pkt.len = SHM_PKT_POOL_BUF_SIZE; params.pkt.num = SHM_PKT_POOL_SIZE / SHM_PKT_POOL_BUF_SIZE; diff --git a/test/common_plat/validation/api/pktio/pktio.c b/test/common_plat/validation/api/pktio/pktio.c index 23ecc4a..edabd01 100644 --- a/test/common_plat/validation/api/pktio/pktio.c +++ b/test/common_plat/validation/api/pktio/pktio.c @@ -317,7 +317,7 @@ static int default_pool_create(void) if (default_pkt_pool != ODP_POOL_INVALID) return -1;
- memset(¶ms, 0, sizeof(params)); + odp_pool_param_init(¶ms); set_pool_len(¶ms); params.pkt.num = PKT_BUF_NUM; params.type = ODP_POOL_PACKET; @@ -1676,10 +1676,11 @@ int pktio_check_send_failure(void)
odp_pktio_close(pktio_tx);
- if (mtu <= pool_capa.pkt.max_len - 32) - return ODP_TEST_ACTIVE; + /* Failure test supports only single segment */ + if (pool_capa.pkt.max_seg_len < mtu + 32) + return ODP_TEST_INACTIVE;
- return ODP_TEST_INACTIVE; + return ODP_TEST_ACTIVE; }
void pktio_test_send_failure(void) @@ -1694,6 +1695,7 @@ void pktio_test_send_failure(void) int long_pkt_idx = TX_BATCH_LEN / 2; pktio_info_t info_rx; odp_pktout_queue_t pktout; + odp_pool_capability_t pool_capa;
pktio_tx = create_pktio(0, ODP_PKTIN_MODE_DIRECT, ODP_PKTOUT_MODE_DIRECT); @@ -1712,9 +1714,16 @@ void pktio_test_send_failure(void)
_pktio_wait_linkup(pktio_tx);
+ CU_ASSERT_FATAL(odp_pool_capability(&pool_capa) == 0); + + if (pool_capa.pkt.max_seg_len < mtu + 32) { + CU_FAIL("Max packet seg length is too small."); + return; + } + /* configure the pool so that we can generate test packets larger * than the interface MTU */ - memset(&pool_params, 0, sizeof(pool_params)); + odp_pool_param_init(&pool_params); pool_params.pkt.len = mtu + 32; pool_params.pkt.seg_len = pool_params.pkt.len; pool_params.pkt.num = TX_BATCH_LEN + 1; @@ -2003,7 +2012,7 @@ static int create_pool(const char *iface, int num) char pool_name[ODP_POOL_NAME_LEN]; odp_pool_param_t params;
- memset(¶ms, 0, sizeof(params)); + odp_pool_param_init(¶ms); set_pool_len(¶ms); params.pkt.num = PKT_BUF_NUM; params.type = ODP_POOL_PACKET;
commit bd4a9492d862e0636fba99bd76aaa19952de2f44 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:34 2016 +0200
test: performance: crypto: use capability to select max packet
Applications must use the pool capability to check the maximum values of pool parameters. Used the maximum segment length, since the application supports only single segment packets.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
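A condensed sketch of the payload selection logic that the hunk below adds to odp_crypto.c. Only the capability check mirrors the patch; the payload values and names here are illustrative placeholders.

    #include <odp_api.h>

    /* Illustrative payload sizes in bytes, sorted in ascending order */
    static unsigned int payloads_sketch[] = { 16, 2048, 16384 };
    static unsigned int num_payloads_sketch;

    /* Sketch only: keep the payload sizes that fit a single segment,
     * as reported by the pool capability. */
    static int select_payloads_sketch(void)
    {
        odp_pool_capability_t capa;
        unsigned int i;

        if (odp_pool_capability(&capa))
            return -1;

        for (i = 0; i < sizeof(payloads_sketch) / sizeof(payloads_sketch[0]); i++)
            if (payloads_sketch[i] > capa.pkt.max_seg_len)
                break;

        num_payloads_sketch = i;
        return 0;
    }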
diff --git a/test/common_plat/performance/odp_crypto.c b/test/common_plat/performance/odp_crypto.c index 49a9f4b..39df78b 100644 --- a/test/common_plat/performance/odp_crypto.c +++ b/test/common_plat/performance/odp_crypto.c @@ -23,15 +23,10 @@ fprintf(stderr, "%s:%d:%s(): Error: " fmt, __FILE__, \ __LINE__, __func__, ##__VA_ARGS__)
-/** @def SHM_PKT_POOL_SIZE - * @brief Size of the shared memory block +/** @def POOL_NUM_PKT + * Number of packets in the pool */ -#define SHM_PKT_POOL_SIZE (512 * 2048 * 2) - -/** @def SHM_PKT_POOL_BUF_SIZE - * @brief Buffer size of the packet pool buffer - */ -#define SHM_PKT_POOL_BUF_SIZE (1024 * 32) +#define POOL_NUM_PKT 64
static uint8_t test_iv[8] = "01234567";
@@ -165,9 +160,7 @@ static void parse_args(int argc, char *argv[], crypto_args_t *cargs); static void usage(char *progname);
/** - * Set of predefined payloads. Make sure that maximum payload - * size is not bigger than SHM_PKT_POOL_BUF_SIZE. May relax when - * implementation start support segmented buffers/packets. + * Set of predefined payloads. */ static unsigned int payloads[] = { 16, @@ -178,6 +171,9 @@ static unsigned int payloads[] = { 16384 };
+/** Number of payloads used in the test */ +static unsigned num_payloads; + /** * Set of known algorithms to test */ @@ -680,12 +676,10 @@ run_measure_one_config(crypto_args_t *cargs, config, &result); } } else { - unsigned int i; + unsigned i;
print_result_header(); - for (i = 0; - i < (sizeof(payloads) / sizeof(unsigned int)); - i++) { + for (i = 0; i < num_payloads; i++) { rc = run_measure_one(cargs, config, &session, payloads[i], &result); if (rc) @@ -728,6 +722,9 @@ int main(int argc, char *argv[]) int num_workers = 1; odph_odpthread_t thr[num_workers]; odp_instance_t instance; + odp_pool_capability_t capa; + uint32_t max_seg_len; + unsigned i;
memset(&cargs, 0, sizeof(cargs));
@@ -743,11 +740,25 @@ int main(int argc, char *argv[]) /* Init this thread */ odp_init_local(instance, ODP_THREAD_WORKER);
+ if (odp_pool_capability(&capa)) { + app_err("Pool capability request failed.\n"); + exit(EXIT_FAILURE); + } + + max_seg_len = capa.pkt.max_seg_len; + + for (i = 0; i < sizeof(payloads) / sizeof(unsigned int); i++) { + if (payloads[i] > max_seg_len) + break; + } + + num_payloads = i; + /* Create packet pool */ odp_pool_param_init(¶ms); - params.pkt.seg_len = SHM_PKT_POOL_BUF_SIZE; - params.pkt.len = SHM_PKT_POOL_BUF_SIZE; - params.pkt.num = SHM_PKT_POOL_SIZE / SHM_PKT_POOL_BUF_SIZE; + params.pkt.seg_len = max_seg_len; + params.pkt.len = max_seg_len; + params.pkt.num = POOL_NUM_PKT; params.type = ODP_POOL_PACKET; pool = odp_pool_create("packet_pool", ¶ms);
commit d858987a1dfea90625389ab5a8e14379a23dbb52 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:33 2016 +0200
test: validation: buf: test alignment
Added checks for correct alignment. Also updated tests to call odp_pool_param_init() for parameter initialization.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
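A self-contained sketch of the alignment check added to the buffer tests below. BUF_ALIGN and BUF_SIZE match the patch; the pool name, function name and return codes are illustrative.

    #include <odp_api.h>
    #include <stdint.h>

    #define BUF_ALIGN ODP_CACHE_LINE_SIZE
    #define BUF_SIZE  1500

    /* Sketch only: allocate one buffer and verify that its start
     * address honors the requested alignment. */
    static int buffer_align_check_sketch(void)
    {
        odp_pool_param_t params;
        odp_pool_t pool;
        odp_buffer_t buf;
        uintptr_t addr;
        int ret = 0;

        odp_pool_param_init(&params);
        params.type      = ODP_POOL_BUFFER;
        params.buf.size  = BUF_SIZE;
        params.buf.align = BUF_ALIGN;
        params.buf.num   = 1;

        pool = odp_pool_create("align_check", &params);
        if (pool == ODP_POOL_INVALID)
            return -1;

        buf = odp_buffer_alloc(pool);
        if (buf == ODP_BUFFER_INVALID) {
            odp_pool_destroy(pool);
            return -1;
        }

        addr = (uintptr_t)odp_buffer_addr(buf);
        if (addr % BUF_ALIGN)
            ret = -1;   /* buffer start is misaligned */

        odp_buffer_free(buf);
        odp_pool_destroy(pool);
        return ret;
    }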
diff --git a/test/common_plat/validation/api/buffer/buffer.c b/test/common_plat/validation/api/buffer/buffer.c index d26d5e8..7c723d4 100644 --- a/test/common_plat/validation/api/buffer/buffer.c +++ b/test/common_plat/validation/api/buffer/buffer.c @@ -8,20 +8,21 @@ #include "odp_cunit_common.h" #include "buffer.h"
+#define BUF_ALIGN ODP_CACHE_LINE_SIZE +#define BUF_SIZE 1500 + static odp_pool_t raw_pool; static odp_buffer_t raw_buffer = ODP_BUFFER_INVALID; -static const size_t raw_buffer_size = 1500;
int buffer_suite_init(void) { - odp_pool_param_t params = { - .buf = { - .size = raw_buffer_size, - .align = ODP_CACHE_LINE_SIZE, - .num = 100, - }, - .type = ODP_POOL_BUFFER, - }; + odp_pool_param_t params; + + odp_pool_param_init(¶ms); + params.type = ODP_POOL_BUFFER; + params.buf.size = BUF_SIZE; + params.buf.align = BUF_ALIGN; + params.buf.num = 100;
raw_pool = odp_pool_create("raw_pool", ¶ms); if (raw_pool == ODP_POOL_INVALID) @@ -44,25 +45,25 @@ void buffer_test_pool_alloc(void) { odp_pool_t pool; const int num = 3; - const size_t size = 1500; odp_buffer_t buffer[num]; odp_event_t ev; int index; - char wrong_type = 0, wrong_size = 0; - odp_pool_param_t params = { - .buf = { - .size = size, - .align = ODP_CACHE_LINE_SIZE, - .num = num, - }, - .type = ODP_POOL_BUFFER, - }; + char wrong_type = 0, wrong_size = 0, wrong_align = 0; + odp_pool_param_t params; + + odp_pool_param_init(¶ms); + params.type = ODP_POOL_BUFFER; + params.buf.size = BUF_SIZE; + params.buf.align = BUF_ALIGN; + params.buf.num = num;
pool = odp_pool_create("buffer_pool_alloc", ¶ms); odp_pool_print(pool);
/* Try to allocate num items from the pool */ for (index = 0; index < num; index++) { + uintptr_t addr; + buffer[index] = odp_buffer_alloc(pool);
if (buffer[index] == ODP_BUFFER_INVALID) @@ -71,9 +72,15 @@ void buffer_test_pool_alloc(void) ev = odp_buffer_to_event(buffer[index]); if (odp_event_type(ev) != ODP_EVENT_BUFFER) wrong_type = 1; - if (odp_buffer_size(buffer[index]) < size) + if (odp_buffer_size(buffer[index]) < BUF_SIZE) wrong_size = 1; - if (wrong_type || wrong_size) + + addr = (uintptr_t)odp_buffer_addr(buffer[index]); + + if ((addr % BUF_ALIGN) != 0) + wrong_align = 1; + + if (wrong_type || wrong_size || wrong_align) odp_buffer_print(buffer[index]); }
@@ -85,6 +92,7 @@ void buffer_test_pool_alloc(void) /* Check that the pool had correct buffers */ CU_ASSERT(wrong_type == 0); CU_ASSERT(wrong_size == 0); + CU_ASSERT(wrong_align == 0);
for (; index >= 0; index--) odp_buffer_free(buffer[index]); @@ -112,19 +120,17 @@ void buffer_test_pool_alloc_multi(void) { odp_pool_t pool; const int num = 3; - const size_t size = 1500; odp_buffer_t buffer[num + 1]; odp_event_t ev; int index; - char wrong_type = 0, wrong_size = 0; - odp_pool_param_t params = { - .buf = { - .size = size, - .align = ODP_CACHE_LINE_SIZE, - .num = num, - }, - .type = ODP_POOL_BUFFER, - }; + char wrong_type = 0, wrong_size = 0, wrong_align = 0; + odp_pool_param_t params; + + odp_pool_param_init(¶ms); + params.type = ODP_POOL_BUFFER; + params.buf.size = BUF_SIZE; + params.buf.align = BUF_ALIGN; + params.buf.num = num;
pool = odp_pool_create("buffer_pool_alloc_multi", ¶ms); odp_pool_print(pool); @@ -133,15 +139,23 @@ void buffer_test_pool_alloc_multi(void) CU_ASSERT_FATAL(buffer_alloc_multi(pool, buffer, num + 1) == num);
for (index = 0; index < num; index++) { + uintptr_t addr; + if (buffer[index] == ODP_BUFFER_INVALID) break;
ev = odp_buffer_to_event(buffer[index]); if (odp_event_type(ev) != ODP_EVENT_BUFFER) wrong_type = 1; - if (odp_buffer_size(buffer[index]) < size) + if (odp_buffer_size(buffer[index]) < BUF_SIZE) wrong_size = 1; - if (wrong_type || wrong_size) + + addr = (uintptr_t)odp_buffer_addr(buffer[index]); + + if ((addr % BUF_ALIGN) != 0) + wrong_align = 1; + + if (wrong_type || wrong_size || wrong_align) odp_buffer_print(buffer[index]); }
@@ -151,6 +165,7 @@ void buffer_test_pool_alloc_multi(void) /* Check that the pool had correct buffers */ CU_ASSERT(wrong_type == 0); CU_ASSERT(wrong_size == 0); + CU_ASSERT(wrong_align == 0);
odp_buffer_free_multi(buffer, num);
@@ -161,14 +176,13 @@ void buffer_test_pool_free(void) { odp_pool_t pool; odp_buffer_t buffer; - odp_pool_param_t params = { - .buf = { - .size = 64, - .align = ODP_CACHE_LINE_SIZE, - .num = 1, - }, - .type = ODP_POOL_BUFFER, - }; + odp_pool_param_t params; + + odp_pool_param_init(¶ms); + params.type = ODP_POOL_BUFFER; + params.buf.size = 64; + params.buf.align = BUF_ALIGN; + params.buf.num = 1;
pool = odp_pool_create("buffer_pool_free", ¶ms);
@@ -194,14 +208,13 @@ void buffer_test_pool_free_multi(void) odp_pool_t pool[2]; odp_buffer_t buffer[4]; odp_buffer_t buf_inval[2]; - odp_pool_param_t params = { - .buf = { - .size = 64, - .align = ODP_CACHE_LINE_SIZE, - .num = 2, - }, - .type = ODP_POOL_BUFFER, - }; + odp_pool_param_t params; + + odp_pool_param_init(¶ms); + params.type = ODP_POOL_BUFFER; + params.buf.size = 64; + params.buf.align = BUF_ALIGN; + params.buf.num = 2;
pool[0] = odp_pool_create("buffer_pool_free_multi_0", ¶ms); pool[1] = odp_pool_create("buffer_pool_free_multi_1", ¶ms); @@ -235,7 +248,7 @@ void buffer_test_management_basic(void) CU_ASSERT(odp_buffer_is_valid(raw_buffer) == 1); CU_ASSERT(odp_buffer_pool(raw_buffer) != ODP_POOL_INVALID); CU_ASSERT(odp_event_type(ev) == ODP_EVENT_BUFFER); - CU_ASSERT(odp_buffer_size(raw_buffer) >= raw_buffer_size); + CU_ASSERT(odp_buffer_size(raw_buffer) >= BUF_SIZE); CU_ASSERT(odp_buffer_addr(raw_buffer) != NULL); odp_buffer_print(raw_buffer); CU_ASSERT(odp_buffer_to_u64(raw_buffer) !=
commit 01ea9db19e2eb2f978d4fd22b1e341a741bb1e9c Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:32 2016 +0200
linux-gen: pool: ptr instead of hdl in buffer_alloc_multi
Improve performance by changing the first parameter of buffer_alloc_multi() from a pool handle to a pool pointer, which avoids a double lookup of the pool pointer. The pointer is already available in the packet alloc calls.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
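A sketch of a caller after this change, using the internal types and helpers shown in the hunks below (pool_t, pool_entry_from_hdl(), buffer_alloc_multi()); it is not standalone code, and the wrapper name is illustrative.

    /* Sketch only: the handle is resolved to a pool_t pointer once and
     * the pointer is passed straight to buffer_alloc_multi(). */
    static int alloc_pkt_buffers_sketch(odp_pool_t pool_hdl, odp_packet_t pkt[],
                                        int num)
    {
        pool_t *pool = pool_entry_from_hdl(pool_hdl);   /* single lookup */
        odp_packet_hdr_t *pkt_hdrs[num];

        return buffer_alloc_multi(pool, (odp_buffer_t *)pkt,
                                  (odp_buffer_hdr_t **)pkt_hdrs, num);
    }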
diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h index 64ba221..0ca13f8 100644 --- a/platform/linux-generic/include/odp_buffer_internal.h +++ b/platform/linux-generic/include/odp_buffer_internal.h @@ -105,10 +105,6 @@ struct odp_buffer_hdr_t { };
/* Forward declarations */ -int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], - odp_buffer_hdr_t *buf_hdr[], int num); -void buffer_free_multi(const odp_buffer_t buf[], int num_free); - int seg_alloc_head(odp_buffer_hdr_t *buf_hdr, int segcount); void seg_free_head(odp_buffer_hdr_t *buf_hdr, int segcount); int seg_alloc_tail(odp_buffer_hdr_t *buf_hdr, int segcount); diff --git a/platform/linux-generic/include/odp_pool_internal.h b/platform/linux-generic/include/odp_pool_internal.h index f7c315c..f7e951a 100644 --- a/platform/linux-generic/include/odp_pool_internal.h +++ b/platform/linux-generic/include/odp_pool_internal.h @@ -109,6 +109,10 @@ static inline odp_buffer_hdr_t *buf_hdl_to_hdr(odp_buffer_t buf) return buf_hdr; }
+int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], + odp_buffer_hdr_t *buf_hdr[], int num); +void buffer_free_multi(const odp_buffer_t buf[], int num_free); + uint32_t pool_headroom(odp_pool_t pool); uint32_t pool_tailroom(odp_pool_t pool);
diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index c44f687..2eee775 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -84,7 +84,7 @@ int packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, int num, i; odp_packet_hdr_t *pkt_hdrs[max_num];
- num = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)pkt, + num = buffer_alloc_multi(pool, (odp_buffer_t *)pkt, (odp_buffer_hdr_t **)pkt_hdrs, max_num);
for (i = 0; i < num; i++) { @@ -115,7 +115,7 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t len) if (odp_unlikely(len > pool->max_len)) return ODP_PACKET_INVALID;
- ret = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)&pkt, NULL, 1); + ret = buffer_alloc_multi(pool, (odp_buffer_t *)&pkt, NULL, 1); if (ret != 1) return ODP_PACKET_INVALID;
@@ -146,7 +146,7 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, if (odp_unlikely(len > pool->max_len)) return -1;
- count = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)pkt, + count = buffer_alloc_multi(pool, (odp_buffer_t *)pkt, (odp_buffer_hdr_t **)pkt_hdrs, num);
for (i = 0; i < count; ++i) { diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index faea2fc..364df97 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -528,19 +528,17 @@ int odp_pool_info(odp_pool_t pool_hdl, odp_pool_info_t *info) return 0; }
-int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], +int buffer_alloc_multi(pool_t *pool, odp_buffer_t buf[], odp_buffer_hdr_t *buf_hdr[], int max_num) { - pool_t *pool; ring_t *ring; uint32_t mask, i; pool_cache_t *cache; uint32_t cache_num, num_ch, num_deq, burst;
- pool = pool_entry_from_hdl(pool_hdl); ring = &pool->ring.hdr; mask = pool->ring_mask; - cache = local.cache[_odp_typeval(pool_hdl)]; + cache = local.cache[pool->pool_idx];
cache_num = cache->num; num_ch = max_num; @@ -696,9 +694,11 @@ void buffer_free_multi(const odp_buffer_t buf[], int num_total) odp_buffer_t odp_buffer_alloc(odp_pool_t pool_hdl) { odp_buffer_t buf; + pool_t *pool; int ret;
- ret = buffer_alloc_multi(pool_hdl, &buf, NULL, 1); + pool = pool_entry_from_hdl(pool_hdl); + ret = buffer_alloc_multi(pool, &buf, NULL, 1);
if (odp_likely(ret == 1)) return buf; @@ -708,7 +708,11 @@ odp_buffer_t odp_buffer_alloc(odp_pool_t pool_hdl)
int odp_buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], int num) { - return buffer_alloc_multi(pool_hdl, buf, NULL, num); + pool_t *pool; + + pool = pool_entry_from_hdl(pool_hdl); + + return buffer_alloc_multi(pool, buf, NULL, num); }
void odp_buffer_free(odp_buffer_t buf)
commit 0a0e4a684f8e5420295eda3df4fece6361b4d797 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:31 2016 +0200
linux-gen: pool: clean up pool inlines functions
Removed odp_pool_to_entry(), which was a duplicate of pool_entry_from_hdl(). Renamed odp_buf_to_hdr() to buf_hdl_to_hdr(), which more accurately describes this internal function. Inlined pool_entry(), pool_entry_from_hdl() and buf_hdl_to_hdr(), which are used often, also outside of pool.c. Renamed odp_buffer_pool_headroom() and _tailroom() to simply pool_headroom() and _tailroom(), since these are internal functions (not API functions, as the previous names hinted). Also moved them into pool.c, since inlining is not needed for functions that are called only in the (netmap) init phase.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/include/odp_buffer_inlines.h b/platform/linux-generic/include/odp_buffer_inlines.h index 2f5eb88..f8688f6 100644 --- a/platform/linux-generic/include/odp_buffer_inlines.h +++ b/platform/linux-generic/include/odp_buffer_inlines.h @@ -31,8 +31,6 @@ static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr) return hdr->handle.handle; }
-odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf); - static inline uint32_t pool_id_from_buf(odp_buffer_t buf) { odp_buffer_bits_t handle; diff --git a/platform/linux-generic/include/odp_packet_internal.h b/platform/linux-generic/include/odp_packet_internal.h index 2cad71f..0cdd5ca 100644 --- a/platform/linux-generic/include/odp_packet_internal.h +++ b/platform/linux-generic/include/odp_packet_internal.h @@ -199,7 +199,7 @@ typedef struct { */ static inline odp_packet_hdr_t *odp_packet_hdr(odp_packet_t pkt) { - return (odp_packet_hdr_t *)odp_buf_to_hdr((odp_buffer_t)pkt); + return (odp_packet_hdr_t *)buf_hdl_to_hdr((odp_buffer_t)pkt); }
static inline void copy_packet_parser_metadata(odp_packet_hdr_t *src_hdr, diff --git a/platform/linux-generic/include/odp_pool_internal.h b/platform/linux-generic/include/odp_pool_internal.h index 278c553..f7c315c 100644 --- a/platform/linux-generic/include/odp_pool_internal.h +++ b/platform/linux-generic/include/odp_pool_internal.h @@ -73,23 +73,45 @@ typedef struct pool_t {
} pool_t;
-pool_t *pool_entry(uint32_t pool_idx); +typedef struct pool_table_t { + pool_t pool[ODP_CONFIG_POOLS]; + odp_shm_t shm; +} pool_table_t;
-static inline pool_t *odp_pool_to_entry(odp_pool_t pool_hdl) +extern pool_table_t *pool_tbl; + +static inline pool_t *pool_entry(uint32_t pool_idx) { - return pool_entry(_odp_typeval(pool_hdl)); + return &pool_tbl->pool[pool_idx]; }
-static inline uint32_t odp_buffer_pool_headroom(odp_pool_t pool) +static inline pool_t *pool_entry_from_hdl(odp_pool_t pool_hdl) { - return odp_pool_to_entry(pool)->headroom; + return &pool_tbl->pool[_odp_typeval(pool_hdl)]; }
-static inline uint32_t odp_buffer_pool_tailroom(odp_pool_t pool) +static inline odp_buffer_hdr_t *buf_hdl_to_hdr(odp_buffer_t buf) { - return odp_pool_to_entry(pool)->tailroom; + odp_buffer_bits_t handle; + uint32_t pool_id, index, block_offset; + pool_t *pool; + odp_buffer_hdr_t *buf_hdr; + + handle.handle = buf; + pool_id = handle.pool_id; + index = handle.index; + pool = pool_entry(pool_id); + block_offset = index * pool->block_size; + + /* clang requires cast to uintptr_t */ + buf_hdr = (odp_buffer_hdr_t *)(uintptr_t)&pool->base_addr[block_offset]; + + return buf_hdr; }
+uint32_t pool_headroom(odp_pool_t pool); +uint32_t pool_tailroom(odp_pool_t pool); + #ifdef __cplusplus } #endif diff --git a/platform/linux-generic/odp_buffer.c b/platform/linux-generic/odp_buffer.c index 0ddaf95..eed15c0 100644 --- a/platform/linux-generic/odp_buffer.c +++ b/platform/linux-generic/odp_buffer.c @@ -26,14 +26,14 @@ odp_event_t odp_buffer_to_event(odp_buffer_t buf)
void *odp_buffer_addr(odp_buffer_t buf) { - odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf); + odp_buffer_hdr_t *hdr = buf_hdl_to_hdr(buf);
return hdr->addr[0]; }
uint32_t odp_buffer_size(odp_buffer_t buf) { - odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf); + odp_buffer_hdr_t *hdr = buf_hdl_to_hdr(buf);
return hdr->size; } @@ -48,7 +48,7 @@ int odp_buffer_snprint(char *str, uint32_t n, odp_buffer_t buf) return len; }
- hdr = odp_buf_to_hdr(buf); + hdr = buf_hdl_to_hdr(buf);
len += snprintf(&str[len], n-len, "Buffer\n"); diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index 6565a5d..c44f687 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -80,7 +80,7 @@ static void packet_init(pool_t *pool, odp_packet_hdr_t *pkt_hdr, int packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, odp_packet_t pkt[], int max_num) { - pool_t *pool = odp_pool_to_entry(pool_hdl); + pool_t *pool = pool_entry_from_hdl(pool_hdl); int num, i; odp_packet_hdr_t *pkt_hdrs[max_num];
@@ -101,7 +101,7 @@ int packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len,
odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t len) { - pool_t *pool = odp_pool_to_entry(pool_hdl); + pool_t *pool = pool_entry_from_hdl(pool_hdl); size_t pkt_size = len ? len : pool->data_size; odp_packet_t pkt; odp_packet_hdr_t *pkt_hdr; @@ -133,7 +133,7 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t len) int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, odp_packet_t pkt[], int num) { - pool_t *pool = odp_pool_to_entry(pool_hdl); + pool_t *pool = pool_entry_from_hdl(pool_hdl); size_t pkt_size = len ? len : pool->data_size; int count, i; odp_packet_hdr_t *pkt_hdrs[num]; @@ -176,7 +176,7 @@ void odp_packet_free_multi(const odp_packet_t pkt[], int num) int odp_packet_reset(odp_packet_t pkt, uint32_t len) { odp_packet_hdr_t *const pkt_hdr = odp_packet_hdr(pkt); - pool_t *pool = odp_pool_to_entry(pkt_hdr->buf_hdr.pool_hdl); + pool_t *pool = pool_entry_from_hdl(pkt_hdr->buf_hdr.pool_hdl);
if (len > pool->headroom + pool->data_size + pool->tailroom) return -1; diff --git a/platform/linux-generic/odp_packet_io.c b/platform/linux-generic/odp_packet_io.c index 3524ff8..7566789 100644 --- a/platform/linux-generic/odp_packet_io.c +++ b/platform/linux-generic/odp_packet_io.c @@ -563,7 +563,7 @@ static inline int pktin_recv_buf(odp_pktin_queue_t queue, pkt = packets[i]; pkt_hdr = odp_packet_hdr(pkt); buf = _odp_packet_to_buffer(pkt); - buf_hdr = odp_buf_to_hdr(buf); + buf_hdr = buf_hdl_to_hdr(buf);
if (pkt_hdr->p.input_flags.dst_queue) { queue_entry_t *dst_queue; diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index 7dc0938..faea2fc 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -32,18 +32,13 @@ ODP_STATIC_ASSERT(CONFIG_POOL_CACHE_SIZE > (2 * CACHE_BURST), "cache_burst_size_too_large_compared_to_cache_size");
-typedef struct pool_table_t { - pool_t pool[ODP_CONFIG_POOLS]; - odp_shm_t shm; -} pool_table_t; - /* Thread local variables */ typedef struct pool_local_t { pool_cache_t *cache[ODP_CONFIG_POOLS]; int thr_id; } pool_local_t;
-static pool_table_t *pool_tbl; +pool_table_t *pool_tbl; static __thread pool_local_t local;
static inline odp_pool_t pool_index_to_handle(uint32_t pool_idx) @@ -51,16 +46,6 @@ static inline odp_pool_t pool_index_to_handle(uint32_t pool_idx) return _odp_cast_scalar(odp_pool_t, pool_idx); }
-pool_t *pool_entry(uint32_t pool_idx) -{ - return &pool_tbl->pool[pool_idx]; -} - -static inline pool_t *pool_entry_from_hdl(odp_pool_t pool_hdl) -{ - return &pool_tbl->pool[_odp_typeval(pool_hdl)]; -} - int odp_pool_init_global(void) { uint32_t i; @@ -475,33 +460,14 @@ int odp_pool_destroy(odp_pool_t pool_hdl) return 0; }
-odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) -{ - odp_buffer_bits_t handle; - uint32_t pool_id, index, block_offset; - pool_t *pool; - odp_buffer_hdr_t *buf_hdr; - - handle.handle = buf; - pool_id = handle.pool_id; - index = handle.index; - pool = pool_entry(pool_id); - block_offset = index * pool->block_size; - - /* clang requires cast to uintptr_t */ - buf_hdr = (odp_buffer_hdr_t *)(uintptr_t)&pool->base_addr[block_offset]; - - return buf_hdr; -} - odp_event_type_t _odp_buffer_event_type(odp_buffer_t buf) { - return odp_buf_to_hdr(buf)->event_type; + return buf_hdl_to_hdr(buf)->event_type; }
void _odp_buffer_event_type_set(odp_buffer_t buf, int ev) { - odp_buf_to_hdr(buf)->event_type = ev; + buf_hdl_to_hdr(buf)->event_type = ev; }
void *buffer_map(odp_buffer_hdr_t *buf, @@ -614,7 +580,7 @@ int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], buf[idx] = (odp_buffer_t)(uintptr_t)data[i];
if (buf_hdr) { - buf_hdr[idx] = odp_buf_to_hdr(buf[idx]); + buf_hdr[idx] = buf_hdl_to_hdr(buf[idx]); /* Prefetch newly allocated and soon to be used * buffer headers. */ odp_prefetch(buf_hdr[idx]); @@ -633,7 +599,7 @@ int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[],
if (buf_hdr) { for (i = 0; i < num_ch; i++) - buf_hdr[i] = odp_buf_to_hdr(buf[i]); + buf_hdr[i] = buf_hdl_to_hdr(buf[i]); }
return num_ch + num_deq; @@ -885,3 +851,13 @@ int odp_buffer_is_valid(odp_buffer_t buf)
return 1; } + +uint32_t pool_headroom(odp_pool_t pool) +{ + return pool_entry_from_hdl(pool)->headroom; +} + +uint32_t pool_tailroom(odp_pool_t pool) +{ + return pool_entry_from_hdl(pool)->tailroom; +} diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c index 6bf1629..43e212a 100644 --- a/platform/linux-generic/odp_queue.c +++ b/platform/linux-generic/odp_queue.c @@ -483,7 +483,7 @@ int odp_queue_enq_multi(odp_queue_t handle, const odp_event_t ev[], int num) queue = queue_to_qentry(handle);
for (i = 0; i < num; i++) - buf_hdr[i] = odp_buf_to_hdr(odp_buffer_from_event(ev[i])); + buf_hdr[i] = buf_hdl_to_hdr(odp_buffer_from_event(ev[i]));
return num == 0 ? 0 : queue->s.enqueue_multi(queue, buf_hdr, num, SUSTAIN_ORDER); @@ -495,7 +495,7 @@ int odp_queue_enq(odp_queue_t handle, odp_event_t ev) queue_entry_t *queue;
queue = queue_to_qentry(handle); - buf_hdr = odp_buf_to_hdr(odp_buffer_from_event(ev)); + buf_hdr = buf_hdl_to_hdr(odp_buffer_from_event(ev));
/* No chains via this entry */ buf_hdr->link = NULL; diff --git a/platform/linux-generic/odp_schedule_ordered.c b/platform/linux-generic/odp_schedule_ordered.c index 8412183..5574faf 100644 --- a/platform/linux-generic/odp_schedule_ordered.c +++ b/platform/linux-generic/odp_schedule_ordered.c @@ -749,7 +749,7 @@ int release_order(void *origin_qe_ptr, uint64_t order, return -1; }
- placeholder_buf_hdr = odp_buf_to_hdr(placeholder_buf); + placeholder_buf_hdr = buf_hdl_to_hdr(placeholder_buf);
/* Copy info to placeholder and add it to the reorder queue */ placeholder_buf_hdr->origin_qe = origin_qe; @@ -805,7 +805,7 @@ void cache_order_info(uint32_t queue_index) uint32_t i; queue_entry_t *qe = get_qentry(queue_index); odp_event_t ev = sched_local.ev_stash[0]; - odp_buffer_hdr_t *buf_hdr = odp_buf_to_hdr(odp_buffer_from_event(ev)); + odp_buffer_hdr_t *buf_hdr = buf_hdl_to_hdr(odp_buffer_from_event(ev));
sched_local.origin_qe = qe; sched_local.order = buf_hdr->order; diff --git a/platform/linux-generic/odp_timer.c b/platform/linux-generic/odp_timer.c index 90ff1fe..53fec08 100644 --- a/platform/linux-generic/odp_timer.c +++ b/platform/linux-generic/odp_timer.c @@ -76,7 +76,7 @@ static _odp_atomic_flag_t locks[NUM_LOCKS]; /* Multiple locks per cache line! */
static odp_timeout_hdr_t *timeout_hdr_from_buf(odp_buffer_t buf) { - return (odp_timeout_hdr_t *)(void *)odp_buf_to_hdr(buf); + return (odp_timeout_hdr_t *)(void *)buf_hdl_to_hdr(buf); }
static odp_timeout_hdr_t *timeout_hdr(odp_timeout_t tmo) diff --git a/platform/linux-generic/pktio/loop.c b/platform/linux-generic/pktio/loop.c index 21d7542..28dd404 100644 --- a/platform/linux-generic/pktio/loop.c +++ b/platform/linux-generic/pktio/loop.c @@ -162,7 +162,7 @@ static int loopback_send(pktio_entry_t *pktio_entry, int index ODP_UNUSED, len = QUEUE_MULTI_MAX;
for (i = 0; i < len; ++i) { - hdr_tbl[i] = odp_buf_to_hdr(_odp_packet_to_buffer(pkt_tbl[i])); + hdr_tbl[i] = buf_hdl_to_hdr(_odp_packet_to_buffer(pkt_tbl[i])); bytes += odp_packet_len(pkt_tbl[i]); }
diff --git a/platform/linux-generic/pktio/netmap.c b/platform/linux-generic/pktio/netmap.c index c1cdf72..cf67741 100644 --- a/platform/linux-generic/pktio/netmap.c +++ b/platform/linux-generic/pktio/netmap.c @@ -346,8 +346,8 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED, pktio_entry_t *pktio_entry,
/* max frame len taking into account the l2-offset */ pkt_nm->max_frame_len = ODP_CONFIG_PACKET_BUF_LEN_MAX - - odp_buffer_pool_headroom(pool) - - odp_buffer_pool_tailroom(pool); + pool_headroom(pool) - + pool_tailroom(pool);
/* allow interface to be opened with or without the 'netmap:' prefix */ prefix = "netmap:"; diff --git a/platform/linux-generic/pktio/socket_mmap.c b/platform/linux-generic/pktio/socket_mmap.c index bf4402a..666aae6 100644 --- a/platform/linux-generic/pktio/socket_mmap.c +++ b/platform/linux-generic/pktio/socket_mmap.c @@ -351,7 +351,7 @@ static void mmap_fill_ring(struct ring *ring, odp_pool_t pool_hdl, int fanout) if (pool_hdl == ODP_POOL_INVALID) ODP_ABORT("Invalid pool handle\n");
- pool = odp_pool_to_entry(pool_hdl); + pool = pool_entry_from_hdl(pool_hdl);
/* Frame has to capture full packet which can fit to the pool block.*/ ring->req.tp_frame_size = (pool->data_size +
commit c6dc829d0c6a54a08756e13e2f3388f0bda61245 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:30 2016 +0200
linux-gen: pool: optimize buffer alloc
Round up global pool allocations to a burst size. Cache any extra buffers for future use. Prefetch the buffer headers that were just allocated from the global pool and will be returned to the caller.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
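A simplified, self-contained sketch of the allocation strategy described above. The cache structure, the burst size and deq_burst() are stand-ins for the internal pool_cache_t, CACHE_BURST and ring_deq_multi() in the patch, and the header prefetch step is omitted.

    #include <stdint.h>

    #define BURST_SKETCH 32     /* illustrative burst size */

    typedef struct {
        uint32_t num;
        uint32_t buf[256];
    } cache_sketch_t;

    /* Sketch only: serve from the local cache; on a miss dequeue a whole
     * burst from the global pool, return what the caller asked for and
     * stash the extras in the (now empty) cache. deq_burst() may return
     * fewer entries than requested. */
    static uint32_t alloc_sketch(cache_sketch_t *cache, uint32_t out[],
                                 uint32_t max_num,
                                 uint32_t (*deq_burst)(uint32_t d[], uint32_t n))
    {
        uint32_t i, from_cache = max_num, from_ring = 0;
        uint32_t burst = BURST_SKETCH;

        if (cache->num < max_num) {
            from_cache = cache->num;        /* take everything cached */
            from_ring  = max_num - from_cache;
            if (from_ring > burst)
                burst = from_ring;          /* dequeue at least a burst */
        }

        for (i = 0; i < from_cache; i++)
            out[i] = cache->buf[cache->num - from_cache + i];
        cache->num -= from_cache;

        if (from_ring) {
            uint32_t data[burst];
            uint32_t got = deq_burst(data, burst);

            if (got < from_ring)
                from_ring = got;            /* global pool ran short */

            for (i = 0; i < from_ring; i++)
                out[from_cache + i] = data[i];

            /* Extra buffers refill the empty cache for the next call */
            for (i = from_ring; i < got; i++)
                cache->buf[cache->num++] = data[i];
        }

        return from_cache + from_ring;
    }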
diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h index abe8591..64ba221 100644 --- a/platform/linux-generic/include/odp_buffer_internal.h +++ b/platform/linux-generic/include/odp_buffer_internal.h @@ -105,7 +105,8 @@ struct odp_buffer_hdr_t { };
/* Forward declarations */ -int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], int num); +int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], + odp_buffer_hdr_t *buf_hdr[], int num); void buffer_free_multi(const odp_buffer_t buf[], int num_free);
int seg_alloc_head(odp_buffer_hdr_t *buf_hdr, int segcount); diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index 6df1c5b..6565a5d 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -80,14 +80,16 @@ static void packet_init(pool_t *pool, odp_packet_hdr_t *pkt_hdr, int packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, odp_packet_t pkt[], int max_num) { - odp_packet_hdr_t *pkt_hdr; pool_t *pool = odp_pool_to_entry(pool_hdl); int num, i; + odp_packet_hdr_t *pkt_hdrs[max_num];
- num = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)pkt, max_num); + num = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)pkt, + (odp_buffer_hdr_t **)pkt_hdrs, max_num);
for (i = 0; i < num; i++) { - pkt_hdr = odp_packet_hdr(pkt[i]); + odp_packet_hdr_t *pkt_hdr = pkt_hdrs[i]; + packet_init(pool, pkt_hdr, len, 1 /* do parse */);
if (pkt_hdr->tailroom >= pkt_hdr->buf_hdr.segsize) @@ -113,7 +115,7 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t len) if (odp_unlikely(len > pool->max_len)) return ODP_PACKET_INVALID;
- ret = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)&pkt, 1); + ret = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)&pkt, NULL, 1); if (ret != 1) return ODP_PACKET_INVALID;
@@ -134,6 +136,7 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, pool_t *pool = odp_pool_to_entry(pool_hdl); size_t pkt_size = len ? len : pool->data_size; int count, i; + odp_packet_hdr_t *pkt_hdrs[num];
if (odp_unlikely(pool->params.type != ODP_POOL_PACKET)) { __odp_errno = EINVAL; @@ -143,10 +146,11 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, if (odp_unlikely(len > pool->max_len)) return -1;
- count = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)pkt, num); + count = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)pkt, + (odp_buffer_hdr_t **)pkt_hdrs, num);
for (i = 0; i < count; ++i) { - odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt[i]); + odp_packet_hdr_t *pkt_hdr = pkt_hdrs[i];
packet_init(pool, pkt_hdr, pkt_size, 0 /* do not parse */); if (len == 0) diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index a2e5d54..7dc0938 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -562,14 +562,14 @@ int odp_pool_info(odp_pool_t pool_hdl, odp_pool_info_t *info) return 0; }
-int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], int max_num) +int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], + odp_buffer_hdr_t *buf_hdr[], int max_num) { pool_t *pool; ring_t *ring; - uint32_t mask; - int i; + uint32_t mask, i; pool_cache_t *cache; - uint32_t cache_num; + uint32_t cache_num, num_ch, num_deq, burst;
pool = pool_entry_from_hdl(pool_hdl); ring = &pool->ring.hdr; @@ -577,28 +577,66 @@ int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], int max_num) cache = local.cache[_odp_typeval(pool_hdl)];
cache_num = cache->num; + num_ch = max_num; + num_deq = 0; + burst = CACHE_BURST;
- if (odp_likely((int)cache_num >= max_num)) { - for (i = 0; i < max_num; i++) - buf[i] = cache->buf[cache_num - max_num + i]; + if (odp_unlikely(cache_num < (uint32_t)max_num)) { + /* Cache does not have enough buffers */ + num_ch = cache_num; + num_deq = max_num - cache_num;
- cache->num = cache_num - max_num; - return max_num; + if (odp_unlikely(num_deq > CACHE_BURST)) + burst = num_deq; }
- { + /* Get buffers from the cache */ + for (i = 0; i < num_ch; i++) + buf[i] = cache->buf[cache_num - num_ch + i]; + + /* If needed, get more from the global pool */ + if (odp_unlikely(num_deq)) { /* Temporary copy needed since odp_buffer_t is uintptr_t * and not uint32_t. */ - int num; - uint32_t data[max_num]; + uint32_t data[burst];
- num = ring_deq_multi(ring, mask, data, max_num); + burst = ring_deq_multi(ring, mask, data, burst); + cache_num = burst - num_deq;
- for (i = 0; i < num; i++) - buf[i] = (odp_buffer_t)(uintptr_t)data[i]; + if (odp_unlikely(burst < num_deq)) { + num_deq = burst; + cache_num = 0; + } + + for (i = 0; i < num_deq; i++) { + uint32_t idx = num_ch + i; + + buf[idx] = (odp_buffer_t)(uintptr_t)data[i]; + + if (buf_hdr) { + buf_hdr[idx] = odp_buf_to_hdr(buf[idx]); + /* Prefetch newly allocated and soon to be used + * buffer headers. */ + odp_prefetch(buf_hdr[idx]); + } + } + + /* Cache extra buffers. Cache is currently empty. */ + for (i = 0; i < cache_num; i++) + cache->buf[i] = (odp_buffer_t) + (uintptr_t)data[num_deq + i]; + + cache->num = cache_num; + } else { + cache->num = cache_num - num_ch; + } + + if (buf_hdr) { + for (i = 0; i < num_ch; i++) + buf_hdr[i] = odp_buf_to_hdr(buf[i]); }
- return i; + return num_ch + num_deq; }
static inline void buffer_free_to_pool(uint32_t pool_id, @@ -694,7 +732,7 @@ odp_buffer_t odp_buffer_alloc(odp_pool_t pool_hdl) odp_buffer_t buf; int ret;
- ret = buffer_alloc_multi(pool_hdl, &buf, 1); + ret = buffer_alloc_multi(pool_hdl, &buf, NULL, 1);
if (odp_likely(ret == 1)) return buf; @@ -704,7 +742,7 @@ odp_buffer_t odp_buffer_alloc(odp_pool_t pool_hdl)
int odp_buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], int num) { - return buffer_alloc_multi(pool_hdl, buf, num); + return buffer_alloc_multi(pool_hdl, buf, NULL, num); }
void odp_buffer_free(odp_buffer_t buf)
commit a296693d3dbbe98a5616406d6535dee85cbd31ba Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:29 2016 +0200
linux-gen: pool: use ring multi enq and deq operations
Use multi enq and deq operations to optimize global pool access performance. Temporary uint32_t arrays are needed since handles are pointer sized variables.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
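A sketch of the temporary copy this commit refers to. ring_t, ring_enq_multi() and the ring/mask values are the internals added in the ring commit further down, so the snippet is not standalone, and the function name is illustrative.

    /* Sketch only: buffer handles are pointer sized but the ring stores
     * uint32_t values, so a burst of handles is narrowed into a stack
     * array and enqueued with one multi operation. */
    static void free_burst_sketch(ring_t *ring, uint32_t mask,
                                  const odp_buffer_t buf[], uint32_t num)
    {
        uint32_t i;
        uint32_t data[num];     /* temporary narrow copy */

        for (i = 0; i < num; i++)
            data[i] = (uint32_t)(uintptr_t)buf[i];

        ring_enq_multi(ring, mask, data, num);
    }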
diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index 1286753..a2e5d54 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -586,15 +586,16 @@ int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], int max_num) return max_num; }
- for (i = 0; i < max_num; i++) { - uint32_t data; + { + /* Temporary copy needed since odp_buffer_t is uintptr_t + * and not uint32_t. */ + int num; + uint32_t data[max_num];
- data = ring_deq(ring, mask); + num = ring_deq_multi(ring, mask, data, max_num);
- if (data == RING_EMPTY) - break; - - buf[i] = (odp_buffer_t)(uintptr_t)data; + for (i = 0; i < num; i++) + buf[i] = (odp_buffer_t)(uintptr_t)data[i]; }
return i; @@ -629,17 +630,24 @@ static inline void buffer_free_to_pool(uint32_t pool_id, cache_num = cache->num;
if (odp_unlikely((int)(CONFIG_POOL_CACHE_SIZE - cache_num) < num)) { + uint32_t index; int burst = CACHE_BURST;
if (odp_unlikely(num > CACHE_BURST)) burst = num;
- for (i = 0; i < burst; i++) { - uint32_t data, index; + { + /* Temporary copy needed since odp_buffer_t is + * uintptr_t and not uint32_t. */ + uint32_t data[burst]; + + index = cache_num - burst; + + for (i = 0; i < burst; i++) + data[i] = (uint32_t) + (uintptr_t)cache->buf[index + i];
- index = cache_num - burst + i; - data = (uint32_t)(uintptr_t)cache->buf[index]; - ring_enq(ring, mask, data); + ring_enq_multi(ring, mask, data, burst); }
cache_num -= burst;
commit ec7a8e0fabe2269cf824ab809ad52a8763739be1 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:28 2016 +0200
linux-gen: ring: added multi enq and deq
Added multi-data versions of ring enqueue and dequeue operations.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
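A minimal usage sketch for the two operations added in the diff below; move_burst and the burst size are illustrative, not part of the commit. It assumes, as the implementation requires, that the ring size is a power of two, the mask is size minus one, and the burst is smaller than the ring size:

    #include <odp_ring_internal.h>

    #define MY_BURST 32 /* illustrative burst size */

    /* Move up to MY_BURST entries from one ring to another with two calls
     * instead of per-entry ring_deq()/ring_enq() loops. */
    static void move_burst(ring_t *src, ring_t *dst, uint32_t mask)
    {
            uint32_t data[MY_BURST];
            uint32_t num;

            /* Returns the number of entries actually dequeued (0 if empty) */
            num = ring_deq_multi(src, mask, data, MY_BURST);

            /* Enqueue cannot fail here only if the caller guarantees room in
             * the destination (pool rings are sized to hold every buffer). */
            if (num)
                    ring_enq_multi(dst, mask, data, num);
    }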
diff --git a/platform/linux-generic/include/odp_ring_internal.h b/platform/linux-generic/include/odp_ring_internal.h index 6a6291a..55fedeb 100644 --- a/platform/linux-generic/include/odp_ring_internal.h +++ b/platform/linux-generic/include/odp_ring_internal.h @@ -80,6 +80,45 @@ static inline uint32_t ring_deq(ring_t *ring, uint32_t mask) return data; }
+/* Dequeue multiple data from the ring head. Num is smaller than ring size. */ +static inline uint32_t ring_deq_multi(ring_t *ring, uint32_t mask, + uint32_t data[], uint32_t num) +{ + uint32_t head, tail, new_head, i; + + head = odp_atomic_load_u32(&ring->r_head); + + /* Move reader head. This thread owns data at the new head. */ + do { + tail = odp_atomic_load_u32(&ring->w_tail); + + /* Ring is empty */ + if (head == tail) + return 0; + + /* Try to take all available */ + if ((tail - head) < num) + num = tail - head; + + new_head = head + num; + + } while (odp_unlikely(odp_atomic_cas_acq_u32(&ring->r_head, &head, + new_head) == 0)); + + /* Read queue index */ + for (i = 0; i < num; i++) + data[i] = ring->data[(head + 1 + i) & mask]; + + /* Wait until other readers have updated the tail */ + while (odp_unlikely(odp_atomic_load_acq_u32(&ring->r_tail) != head)) + odp_cpu_pause(); + + /* Now update the reader tail */ + odp_atomic_store_rel_u32(&ring->r_tail, new_head); + + return num; +} + /* Enqueue data into the ring tail */ static inline void ring_enq(ring_t *ring, uint32_t mask, uint32_t data) { @@ -104,6 +143,32 @@ static inline void ring_enq(ring_t *ring, uint32_t mask, uint32_t data) odp_atomic_store_rel_u32(&ring->w_tail, new_head); }
+/* Enqueue multiple data into the ring tail. Num is smaller than ring size. */ +static inline void ring_enq_multi(ring_t *ring, uint32_t mask, uint32_t data[], + uint32_t num) +{ + uint32_t old_head, new_head, i; + + /* Reserve a slot in the ring for writing */ + old_head = odp_atomic_fetch_add_u32(&ring->w_head, num); + new_head = old_head + 1; + + /* Ring is full. Wait for the last reader to finish. */ + while (odp_unlikely(odp_atomic_load_acq_u32(&ring->r_tail) == new_head)) + odp_cpu_pause(); + + /* Write data */ + for (i = 0; i < num; i++) + ring->data[(new_head + i) & mask] = data[i]; + + /* Wait until other writers have updated the tail */ + while (odp_unlikely(odp_atomic_load_acq_u32(&ring->w_tail) != old_head)) + odp_cpu_pause(); + + /* Now update the writer tail */ + odp_atomic_store_rel_u32(&ring->w_tail, old_head + num); +} + #ifdef __cplusplus } #endif
commit 02c46a3a671bca6de5159a59be45663bca516753 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:27 2016 +0200
linux-gen: pool: reimplement pool with ring
Used the ring data structure to implement the pool. The buffer structure was also simplified to enable a future driver interface. Every buffer includes a packet header, so each buffer can be used as a packet head or segment. Segmentation was disabled and the segment size was fixed to a large value (64 kB) to limit the number of modifications in this commit.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
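A minimal sketch of the simplified handle layout introduced below, assuming the 8-bit pool id / 24-bit index split shown in odp_buffer_bits_t; the union copy and main() are illustrative only (bitfield ordering is implementation defined). In the new pool_t, buffer i lives at base_addr + i * block_size and gets handle index i (see init_buffers in the diff):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative copy of the simplified handle bits: an 8-bit pool id and
     * a 24-bit buffer index packed into one 32-bit word. */
    typedef union {
            uint32_t u32;
            struct {
                    uint32_t pool_id:8;
                    uint32_t index:24;
            };
    } handle_bits_t;

    int main(void)
    {
            handle_bits_t bits;

            bits.u32 = 0;
            bits.pool_id = 3;   /* pool table slot */
            bits.index = 42;    /* buffer slot within the pool */

            printf("pool %u, buffer %u, raw 0x%08x\n",
                   (unsigned)bits.pool_id, (unsigned)bits.index,
                   (unsigned)bits.u32);

            return 0;
    }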
diff --git a/platform/linux-generic/include/odp/api/plat/pool_types.h b/platform/linux-generic/include/odp/api/plat/pool_types.h index 1ca8f02..4e39de5 100644 --- a/platform/linux-generic/include/odp/api/plat/pool_types.h +++ b/platform/linux-generic/include/odp/api/plat/pool_types.h @@ -39,12 +39,6 @@ typedef enum odp_pool_type_t { ODP_POOL_TIMEOUT = ODP_EVENT_TIMEOUT, } odp_pool_type_t;
-/** Get printable format of odp_pool_t */ -static inline uint64_t odp_pool_to_u64(odp_pool_t hdl) -{ - return _odp_pri(hdl); -} - /** * @} */ diff --git a/platform/linux-generic/include/odp_buffer_inlines.h b/platform/linux-generic/include/odp_buffer_inlines.h index 2b1ab42..2f5eb88 100644 --- a/platform/linux-generic/include/odp_buffer_inlines.h +++ b/platform/linux-generic/include/odp_buffer_inlines.h @@ -18,43 +18,20 @@ extern "C" { #endif
#include <odp_buffer_internal.h> -#include <odp_pool_internal.h>
-static inline odp_buffer_t odp_buffer_encode_handle(odp_buffer_hdr_t *hdr) -{ - odp_buffer_bits_t handle; - uint32_t pool_id = pool_handle_to_index(hdr->pool_hdl); - struct pool_entry_s *pool = get_pool_entry(pool_id); +odp_event_type_t _odp_buffer_event_type(odp_buffer_t buf); +void _odp_buffer_event_type_set(odp_buffer_t buf, int ev); +int odp_buffer_snprint(char *str, uint32_t n, odp_buffer_t buf);
- handle.handle = 0; - handle.pool_id = pool_id; - handle.index = ((uint8_t *)hdr - pool->pool_mdata_addr) / - ODP_CACHE_LINE_SIZE; - handle.seg = 0; - - return handle.handle; -} +void *buffer_map(odp_buffer_hdr_t *buf, uint32_t offset, uint32_t *seglen, + uint32_t limit);
static inline odp_buffer_t odp_hdr_to_buf(odp_buffer_hdr_t *hdr) { return hdr->handle.handle; }
-static inline odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) -{ - odp_buffer_bits_t handle; - uint32_t pool_id; - uint32_t index; - struct pool_entry_s *pool; - - handle.handle = buf; - pool_id = handle.pool_id; - index = handle.index; - pool = get_pool_entry(pool_id); - - return (odp_buffer_hdr_t *)(void *) - (pool->pool_mdata_addr + (index * ODP_CACHE_LINE_SIZE)); -} +odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf);
static inline uint32_t pool_id_from_buf(odp_buffer_t buf) { @@ -64,131 +41,6 @@ static inline uint32_t pool_id_from_buf(odp_buffer_t buf) return handle.pool_id; }
-static inline odp_buffer_hdr_t *validate_buf(odp_buffer_t buf) -{ - odp_buffer_bits_t handle; - odp_buffer_hdr_t *buf_hdr; - handle.handle = buf; - - /* For buffer handles, segment index must be 0 and pool id in range */ - if (handle.seg != 0 || handle.pool_id >= ODP_CONFIG_POOLS) - return NULL; - - pool_entry_t *pool = - odp_pool_to_entry(_odp_cast_scalar(odp_pool_t, - handle.pool_id)); - - /* If pool not created, handle is invalid */ - if (pool->s.pool_shm == ODP_SHM_INVALID) - return NULL; - - uint32_t buf_stride = pool->s.buf_stride / ODP_CACHE_LINE_SIZE; - - /* A valid buffer index must be on stride, and must be in range */ - if ((handle.index % buf_stride != 0) || - ((uint32_t)(handle.index / buf_stride) >= pool->s.params.buf.num)) - return NULL; - - buf_hdr = (odp_buffer_hdr_t *)(void *) - (pool->s.pool_mdata_addr + - (handle.index * ODP_CACHE_LINE_SIZE)); - - /* Handle is valid, so buffer is valid if it is allocated */ - return buf_hdr->allocator == ODP_FREEBUF ? NULL : buf_hdr; -} - -int odp_buffer_snprint(char *str, uint32_t n, odp_buffer_t buf); - -static inline void *buffer_map(odp_buffer_hdr_t *buf, - uint32_t offset, - uint32_t *seglen, - uint32_t limit) -{ - int seg_index; - int seg_offset; - - if (odp_likely(offset < buf->segsize)) { - seg_index = 0; - seg_offset = offset; - } else { - seg_index = offset / buf->segsize; - seg_offset = offset % buf->segsize; - } - if (seglen != NULL) { - uint32_t buf_left = limit - offset; - *seglen = seg_offset + buf_left <= buf->segsize ? - buf_left : buf->segsize - seg_offset; - } - - return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]); -} - -static inline odp_buffer_seg_t segment_next(odp_buffer_hdr_t *buf, - odp_buffer_seg_t seg) -{ - odp_buffer_bits_t seghandle; - seghandle.handle = (odp_buffer_t)seg; - - if (seg == ODP_SEGMENT_INVALID || - seghandle.prefix != buf->handle.prefix || - seghandle.seg >= buf->segcount - 1) - return ODP_SEGMENT_INVALID; - else { - seghandle.seg++; - return (odp_buffer_seg_t)seghandle.handle; - } -} - -static inline void *segment_map(odp_buffer_hdr_t *buf, - odp_buffer_seg_t seg, - uint32_t *seglen, - uint32_t limit, - uint32_t hr) -{ - uint32_t seg_offset, buf_left; - odp_buffer_bits_t seghandle; - uint8_t *seg_addr; - seghandle.handle = (odp_buffer_t)seg; - - if (seghandle.prefix != buf->handle.prefix || - seghandle.seg >= buf->segcount) - return NULL; - - seg_addr = (uint8_t *)buf->addr[seghandle.seg]; - seg_offset = seghandle.seg * buf->segsize; - limit += hr; - - /* Can't map this segment if it's nothing but headroom or tailroom */ - if (hr >= seg_offset + buf->segsize || seg_offset > limit) - return NULL; - - /* Adjust address & offset if this segment contains any headroom */ - if (hr > seg_offset) { - seg_addr += hr % buf->segsize; - seg_offset += hr % buf->segsize; - } - - /* Set seglen if caller is asking for it */ - if (seglen != NULL) { - buf_left = limit - seg_offset; - *seglen = buf_left < buf->segsize ? buf_left : - (seg_offset >= buf->segsize ? 
buf->segsize : - buf->segsize - seg_offset); - } - - return (void *)seg_addr; -} - -static inline odp_event_type_t _odp_buffer_event_type(odp_buffer_t buf) -{ - return odp_buf_to_hdr(buf)->event_type; -} - -static inline void _odp_buffer_event_type_set(odp_buffer_t buf, int ev) -{ - odp_buf_to_hdr(buf)->event_type = ev; -} - #ifdef __cplusplus } #endif diff --git a/platform/linux-generic/include/odp_buffer_internal.h b/platform/linux-generic/include/odp_buffer_internal.h index 1c09cd3..abe8591 100644 --- a/platform/linux-generic/include/odp_buffer_internal.h +++ b/platform/linux-generic/include/odp_buffer_internal.h @@ -33,72 +33,19 @@ extern "C" { #include <odp_schedule_if.h> #include <stddef.h>
-#define ODP_BITSIZE(x) \ - ((x) <= 2 ? 1 : \ - ((x) <= 4 ? 2 : \ - ((x) <= 8 ? 3 : \ - ((x) <= 16 ? 4 : \ - ((x) <= 32 ? 5 : \ - ((x) <= 64 ? 6 : \ - ((x) <= 128 ? 7 : \ - ((x) <= 256 ? 8 : \ - ((x) <= 512 ? 9 : \ - ((x) <= 1024 ? 10 : \ - ((x) <= 2048 ? 11 : \ - ((x) <= 4096 ? 12 : \ - ((x) <= 8196 ? 13 : \ - ((x) <= 16384 ? 14 : \ - ((x) <= 32768 ? 15 : \ - ((x) <= 65536 ? 16 : \ - (0/0))))))))))))))))) - ODP_STATIC_ASSERT(ODP_CONFIG_PACKET_SEG_LEN_MIN >= 256, "ODP Segment size must be a minimum of 256 bytes");
-ODP_STATIC_ASSERT((ODP_CONFIG_PACKET_BUF_LEN_MAX % - ODP_CONFIG_PACKET_SEG_LEN_MIN) == 0, - "Packet max size must be a multiple of segment size"); - -#define ODP_BUFFER_MAX_SEG \ - (ODP_CONFIG_PACKET_BUF_LEN_MAX / ODP_CONFIG_PACKET_SEG_LEN_MIN) - -/* We can optimize storage of small raw buffers within metadata area */ -#define ODP_MAX_INLINE_BUF ((sizeof(void *)) * (ODP_BUFFER_MAX_SEG - 1)) - -#define ODP_BUFFER_POOL_BITS ODP_BITSIZE(ODP_CONFIG_POOLS) -#define ODP_BUFFER_SEG_BITS ODP_BITSIZE(ODP_BUFFER_MAX_SEG) -#define ODP_BUFFER_INDEX_BITS (32 - ODP_BUFFER_POOL_BITS - ODP_BUFFER_SEG_BITS) -#define ODP_BUFFER_PREFIX_BITS (ODP_BUFFER_POOL_BITS + ODP_BUFFER_INDEX_BITS) -#define ODP_BUFFER_MAX_POOLS (1 << ODP_BUFFER_POOL_BITS) -#define ODP_BUFFER_MAX_BUFFERS (1 << ODP_BUFFER_INDEX_BITS) - -#define ODP_BUFFER_MAX_INDEX (ODP_BUFFER_MAX_BUFFERS - 2) -#define ODP_BUFFER_INVALID_INDEX (ODP_BUFFER_MAX_BUFFERS - 1)
typedef union odp_buffer_bits_t { - odp_buffer_t handle; + odp_buffer_t handle; + union { - uint32_t u32; - struct { -#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN - uint32_t pool_id:ODP_BUFFER_POOL_BITS; - uint32_t index:ODP_BUFFER_INDEX_BITS; - uint32_t seg:ODP_BUFFER_SEG_BITS; -#else - uint32_t seg:ODP_BUFFER_SEG_BITS; - uint32_t index:ODP_BUFFER_INDEX_BITS; - uint32_t pool_id:ODP_BUFFER_POOL_BITS; -#endif - }; + uint32_t u32;
struct { -#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN - uint32_t prefix:ODP_BUFFER_PREFIX_BITS; - uint32_t pfxseg:ODP_BUFFER_SEG_BITS; -#else - uint32_t pfxseg:ODP_BUFFER_SEG_BITS; - uint32_t prefix:ODP_BUFFER_PREFIX_BITS; -#endif + uint32_t pool_id: 8; + uint32_t index: 24; }; }; } odp_buffer_bits_t; @@ -125,7 +72,7 @@ struct odp_buffer_hdr_t { uint32_t sustain:1; /* Sustain order */ }; } flags; - int16_t allocator; /* allocating thread id */ + int8_t type; /* buffer type */ odp_event_type_t event_type; /* for reuse as event */ uint32_t size; /* max data size */ @@ -139,7 +86,8 @@ struct odp_buffer_hdr_t { uint32_t uarea_size; /* size of user area */ uint32_t segcount; /* segment count */ uint32_t segsize; /* segment size */ - void *addr[ODP_BUFFER_MAX_SEG]; /* block addrs */ + /* block addrs */ + void *addr[ODP_CONFIG_PACKET_MAX_SEGS]; uint64_t order; /* sequence for ordered queues */ queue_entry_t *origin_qe; /* ordered queue origin */ union { @@ -149,39 +97,17 @@ struct odp_buffer_hdr_t { #ifdef _ODP_PKTIO_IPC /* ipc mapped process can not walk over pointers, * offset has to be used */ - uint64_t ipc_addr_offset[ODP_BUFFER_MAX_SEG]; + uint64_t ipc_addr_offset[ODP_CONFIG_PACKET_MAX_SEGS]; #endif -}; - -/** @internal Compile time assert that the - * allocator field can handle any allocator id*/ -ODP_STATIC_ASSERT(INT16_MAX >= ODP_THREAD_COUNT_MAX, - "ODP_BUFFER_HDR_T__ALLOCATOR__SIZE_ERROR"); - -typedef struct odp_buffer_hdr_stride { - uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_buffer_hdr_t))]; -} odp_buffer_hdr_stride;
-typedef struct odp_buf_blk_t { - struct odp_buf_blk_t *next; - struct odp_buf_blk_t *prev; -} odp_buf_blk_t; - -/* Raw buffer header */ -typedef struct { - odp_buffer_hdr_t buf_hdr; /* common buffer header */ -} odp_raw_buffer_hdr_t; - -/* Free buffer marker */ -#define ODP_FREEBUF -1 + /* Data or next header */ + uint8_t data[0]; +};
/* Forward declarations */ -odp_buffer_t buffer_alloc(odp_pool_t pool, size_t size); -int buffer_alloc_multi(odp_pool_t pool_hdl, size_t size, - odp_buffer_t buf[], int num); -void buffer_free(uint32_t pool_id, const odp_buffer_t buf); -void buffer_free_multi(uint32_t pool_id, - const odp_buffer_t buf[], int num_free); +int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], int num); +void buffer_free_multi(const odp_buffer_t buf[], int num_free); + int seg_alloc_head(odp_buffer_hdr_t *buf_hdr, int segcount); void seg_free_head(odp_buffer_hdr_t *buf_hdr, int segcount); int seg_alloc_tail(odp_buffer_hdr_t *buf_hdr, int segcount); diff --git a/platform/linux-generic/include/odp_classification_datamodel.h b/platform/linux-generic/include/odp_classification_datamodel.h index dc2190d..8505c67 100644 --- a/platform/linux-generic/include/odp_classification_datamodel.h +++ b/platform/linux-generic/include/odp_classification_datamodel.h @@ -77,7 +77,7 @@ Class Of Service */ struct cos_s { queue_entry_t *queue; /* Associated Queue */ - pool_entry_t *pool; /* Associated Buffer pool */ + odp_pool_t pool; /* Associated Buffer pool */ union pmr_u *pmr[ODP_PMR_PER_COS_MAX]; /* Chained PMR */ union cos_u *linked_cos[ODP_PMR_PER_COS_MAX]; /* Chained CoS with PMR*/ uint32_t valid; /* validity Flag */ diff --git a/platform/linux-generic/include/odp_config_internal.h b/platform/linux-generic/include/odp_config_internal.h index b7ff610..3fd1c93 100644 --- a/platform/linux-generic/include/odp_config_internal.h +++ b/platform/linux-generic/include/odp_config_internal.h @@ -32,7 +32,7 @@ extern "C" { * This defines the minimum supported buffer alignment. Requests for values * below this will be rounded up to this value. */ -#define ODP_CONFIG_BUFFER_ALIGN_MIN 16 +#define ODP_CONFIG_BUFFER_ALIGN_MIN 64
/* * Maximum buffer alignment @@ -70,16 +70,7 @@ extern "C" { /* * Maximum number of segments per packet */ -#define ODP_CONFIG_PACKET_MAX_SEGS 6 - -/* - * Minimum packet segment length - * - * This defines the minimum packet segment buffer length in bytes. The user - * defined segment length (seg_len in odp_pool_param_t) will be rounded up into - * this value. - */ -#define ODP_CONFIG_PACKET_SEG_LEN_MIN 1598 +#define ODP_CONFIG_PACKET_MAX_SEGS 1
/* * Maximum packet segment length @@ -91,6 +82,15 @@ extern "C" { #define ODP_CONFIG_PACKET_SEG_LEN_MAX (64 * 1024)
/* + * Minimum packet segment length + * + * This defines the minimum packet segment buffer length in bytes. The user + * defined segment length (seg_len in odp_pool_param_t) will be rounded up into + * this value. + */ +#define ODP_CONFIG_PACKET_SEG_LEN_MIN ODP_CONFIG_PACKET_SEG_LEN_MAX + +/* * Maximum packet buffer length * * This defines the maximum number of bytes that can be stored into a packet @@ -102,7 +102,7 @@ extern "C" { * - The value MUST be an integral number of segments * - The value SHOULD be large enough to accommodate jumbo packets (9K) */ -#define ODP_CONFIG_PACKET_BUF_LEN_MAX (ODP_CONFIG_PACKET_SEG_LEN_MIN * 6) +#define ODP_CONFIG_PACKET_BUF_LEN_MAX ODP_CONFIG_PACKET_SEG_LEN_MAX
/* Maximum number of shared memory blocks. * @@ -118,6 +118,16 @@ extern "C" { */ #define CONFIG_BURST_SIZE 16
+/* + * Maximum number of events in a pool + */ +#define CONFIG_POOL_MAX_NUM (1 * 1024 * 1024) + +/* + * Maximum number of events in a thread local pool cache + */ +#define CONFIG_POOL_CACHE_SIZE 256 + #ifdef __cplusplus } #endif diff --git a/platform/linux-generic/include/odp_packet_internal.h b/platform/linux-generic/include/odp_packet_internal.h index b23ad9c..2cad71f 100644 --- a/platform/linux-generic/include/odp_packet_internal.h +++ b/platform/linux-generic/include/odp_packet_internal.h @@ -189,11 +189,10 @@ typedef struct { odp_time_t timestamp; /**< Timestamp value */
odp_crypto_generic_op_result_t op_result; /**< Result for crypto */ -} odp_packet_hdr_t;
-typedef struct odp_packet_hdr_stride { - uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_packet_hdr_t))]; -} odp_packet_hdr_stride; + /* Packet data storage */ + uint8_t data[0]; +} odp_packet_hdr_t;
/** * Return the packet header @@ -248,7 +247,8 @@ static inline int push_head_seg(odp_packet_hdr_t *pkt_hdr, size_t len) (len - pkt_hdr->headroom + pkt_hdr->buf_hdr.segsize - 1) / pkt_hdr->buf_hdr.segsize;
- if (pkt_hdr->buf_hdr.segcount + extrasegs > ODP_BUFFER_MAX_SEG || + if (pkt_hdr->buf_hdr.segcount + extrasegs > + ODP_CONFIG_PACKET_MAX_SEGS || seg_alloc_head(&pkt_hdr->buf_hdr, extrasegs)) return -1;
@@ -276,7 +276,8 @@ static inline int push_tail_seg(odp_packet_hdr_t *pkt_hdr, size_t len) (len - pkt_hdr->tailroom + pkt_hdr->buf_hdr.segsize - 1) / pkt_hdr->buf_hdr.segsize;
- if (pkt_hdr->buf_hdr.segcount + extrasegs > ODP_BUFFER_MAX_SEG || + if (pkt_hdr->buf_hdr.segcount + extrasegs > + ODP_CONFIG_PACKET_MAX_SEGS || seg_alloc_tail(&pkt_hdr->buf_hdr, extrasegs)) return -1;
diff --git a/platform/linux-generic/include/odp_pool_internal.h b/platform/linux-generic/include/odp_pool_internal.h index ca59ade..278c553 100644 --- a/platform/linux-generic/include/odp_pool_internal.h +++ b/platform/linux-generic/include/odp_pool_internal.h @@ -18,240 +18,78 @@ extern "C" { #endif
-#include <odp/api/std_types.h> -#include <odp/api/align.h> -#include <odp_align_internal.h> -#include <odp/api/pool.h> -#include <odp_buffer_internal.h> -#include <odp/api/hints.h> -#include <odp_config_internal.h> -#include <odp/api/debug.h> #include <odp/api/shared_memory.h> -#include <odp/api/atomic.h> -#include <odp/api/thread.h> -#include <string.h> - -/** - * Buffer initialization routine prototype - * - * @note Routines of this type MAY be passed as part of the - * _odp_buffer_pool_init_t structure to be called whenever a - * buffer is allocated to initialize the user metadata - * associated with that buffer. - */ -typedef void (_odp_buf_init_t)(odp_buffer_t buf, void *buf_init_arg); +#include <odp/api/ticketlock.h>
-/** - * Buffer pool initialization parameters - * Used to communicate buffer pool initialization options. Internal for now. - */ -typedef struct _odp_buffer_pool_init_t { - size_t udata_size; /**< Size of user metadata for each buffer */ - _odp_buf_init_t *buf_init; /**< Buffer initialization routine to use */ - void *buf_init_arg; /**< Argument to be passed to buf_init() */ -} _odp_buffer_pool_init_t; /**< Type of buffer initialization struct */ - -#define POOL_MAX_LOCAL_CHUNKS 4 -#define POOL_CHUNK_SIZE (4 * CONFIG_BURST_SIZE) -#define POOL_MAX_LOCAL_BUFS (POOL_MAX_LOCAL_CHUNKS * POOL_CHUNK_SIZE) - -struct local_cache_s { - uint64_t bufallocs; /* Local buffer alloc count */ - uint64_t buffrees; /* Local buffer free count */ - - uint32_t num_buf; - odp_buffer_hdr_t *buf[POOL_MAX_LOCAL_BUFS]; -}; +#include <odp_buffer_internal.h> +#include <odp_config_internal.h> +#include <odp_ring_internal.h>
-/* Local cache for buffer alloc/free acceleration */ -typedef struct local_cache_t { - union { - struct local_cache_s s; +typedef struct pool_cache_t { + uint32_t num;
- uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP( - sizeof(struct local_cache_s))]; - }; -} local_cache_t; + odp_buffer_t buf[CONFIG_POOL_CACHE_SIZE];
-#include <odp/api/plat/ticketlock_inlines.h> -#define POOL_LOCK(a) _odp_ticketlock_lock(a) -#define POOL_UNLOCK(a) _odp_ticketlock_unlock(a) -#define POOL_LOCK_INIT(a) odp_ticketlock_init(a) +} pool_cache_t ODP_ALIGNED_CACHE;
-/** - * ODP Pool stats - Maintain some useful stats regarding pool utilization - */ +/* Buffer header ring */ typedef struct { - odp_atomic_u64_t bufallocs; /**< Count of successful buf allocs */ - odp_atomic_u64_t buffrees; /**< Count of successful buf frees */ - odp_atomic_u64_t blkallocs; /**< Count of successful blk allocs */ - odp_atomic_u64_t blkfrees; /**< Count of successful blk frees */ - odp_atomic_u64_t bufempty; /**< Count of unsuccessful buf allocs */ - odp_atomic_u64_t blkempty; /**< Count of unsuccessful blk allocs */ - odp_atomic_u64_t buf_high_wm_count; /**< Count of high buf wm conditions */ - odp_atomic_u64_t buf_low_wm_count; /**< Count of low buf wm conditions */ - odp_atomic_u64_t blk_high_wm_count; /**< Count of high blk wm conditions */ - odp_atomic_u64_t blk_low_wm_count; /**< Count of low blk wm conditions */ -} _odp_pool_stats_t; - -struct pool_entry_s { - odp_ticketlock_t lock ODP_ALIGNED_CACHE; - odp_ticketlock_t buf_lock; - odp_ticketlock_t blk_lock; - - char name[ODP_POOL_NAME_LEN]; - odp_pool_param_t params; - uint32_t udata_size; - odp_pool_t pool_hdl; - uint32_t pool_id; - odp_shm_t pool_shm; - union { - uint32_t all; - struct { - uint32_t has_name:1; - uint32_t user_supplied_shm:1; - uint32_t unsegmented:1; - uint32_t zeroized:1; - uint32_t predefined:1; - }; - } flags; - uint32_t quiesced; - uint32_t buf_low_wm_assert; - uint32_t blk_low_wm_assert; - uint8_t *pool_base_addr; - uint8_t *pool_mdata_addr; - size_t pool_size; - uint32_t buf_align; - uint32_t buf_stride; - odp_buffer_hdr_t *buf_freelist; - void *blk_freelist; - odp_atomic_u32_t bufcount; - odp_atomic_u32_t blkcount; - _odp_pool_stats_t poolstats; - uint32_t buf_num; - uint32_t seg_size; - uint32_t blk_size; - uint32_t buf_high_wm; - uint32_t buf_low_wm; - uint32_t blk_high_wm; - uint32_t blk_low_wm; - uint32_t headroom; - uint32_t tailroom; - - local_cache_t local_cache[ODP_THREAD_COUNT_MAX] ODP_ALIGNED_CACHE; -}; - -typedef union pool_entry_u { - struct pool_entry_s s; - - uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(struct pool_entry_s))]; -} pool_entry_t; - -extern void *pool_entry_ptr[]; - -#if defined(ODP_CONFIG_SECURE_POOLS) && (ODP_CONFIG_SECURE_POOLS == 1) -#define buffer_is_secure(buf) (buf->flags.zeroized) -#define pool_is_secure(pool) (pool->flags.zeroized) -#else -#define buffer_is_secure(buf) 0 -#define pool_is_secure(pool) 0 -#endif - -static inline void *get_blk(struct pool_entry_s *pool) -{ - void *myhead; - uint64_t blkcount; - - POOL_LOCK(&pool->blk_lock); - - myhead = pool->blk_freelist; - - if (odp_unlikely(myhead == NULL)) { - POOL_UNLOCK(&pool->blk_lock); - odp_atomic_inc_u64(&pool->poolstats.blkempty); - } else { - pool->blk_freelist = ((odp_buf_blk_t *)myhead)->next; - POOL_UNLOCK(&pool->blk_lock); - blkcount = odp_atomic_fetch_sub_u32(&pool->blkcount, 1) - 1; - - /* Check for low watermark condition */ - if (blkcount == pool->blk_low_wm && !pool->blk_low_wm_assert) { - pool->blk_low_wm_assert = 1; - odp_atomic_inc_u64(&pool->poolstats.blk_low_wm_count); - } - - odp_atomic_inc_u64(&pool->poolstats.blkallocs); - } - - return myhead; -} - -static inline void ret_blk(struct pool_entry_s *pool, void *block) + /* Ring header */ + ring_t hdr; + + /* Ring data: buffer handles */ + uint32_t buf[CONFIG_POOL_MAX_NUM]; + +} pool_ring_t ODP_ALIGNED_CACHE; + +typedef struct pool_t { + odp_ticketlock_t lock ODP_ALIGNED_CACHE; + + char name[ODP_POOL_NAME_LEN]; + odp_pool_param_t params; + odp_pool_t pool_hdl; + uint32_t pool_idx; + uint32_t ring_mask; + odp_shm_t shm; + odp_shm_t 
uarea_shm; + int reserved; + uint32_t num; + uint32_t align; + uint32_t headroom; + uint32_t tailroom; + uint32_t data_size; + uint32_t max_len; + uint32_t max_seg_len; + uint32_t uarea_size; + uint32_t block_size; + uint32_t shm_size; + uint32_t uarea_shm_size; + uint8_t *base_addr; + uint8_t *uarea_base_addr; + + pool_cache_t local_cache[ODP_THREAD_COUNT_MAX]; + + pool_ring_t ring; + +} pool_t; + +pool_t *pool_entry(uint32_t pool_idx); + +static inline pool_t *odp_pool_to_entry(odp_pool_t pool_hdl) { - uint64_t blkcount; - - POOL_LOCK(&pool->blk_lock); - - ((odp_buf_blk_t *)block)->next = pool->blk_freelist; - pool->blk_freelist = block; - - POOL_UNLOCK(&pool->blk_lock); - - blkcount = odp_atomic_fetch_add_u32(&pool->blkcount, 1); - - /* Check if low watermark condition should be deasserted */ - if (blkcount == pool->blk_high_wm && pool->blk_low_wm_assert) { - pool->blk_low_wm_assert = 0; - odp_atomic_inc_u64(&pool->poolstats.blk_high_wm_count); - } - - odp_atomic_inc_u64(&pool->poolstats.blkfrees); -} - -static inline odp_pool_t pool_index_to_handle(uint32_t pool_id) -{ - return _odp_cast_scalar(odp_pool_t, pool_id); -} - -static inline uint32_t pool_handle_to_index(odp_pool_t pool_hdl) -{ - return _odp_typeval(pool_hdl); -} - -static inline void *get_pool_entry(uint32_t pool_id) -{ - return pool_entry_ptr[pool_id]; -} - -static inline pool_entry_t *odp_pool_to_entry(odp_pool_t pool) -{ - return (pool_entry_t *)get_pool_entry(pool_handle_to_index(pool)); -} - -static inline pool_entry_t *odp_buf_to_pool(odp_buffer_hdr_t *buf) -{ - return odp_pool_to_entry(buf->pool_hdl); -} - -static inline uint32_t odp_buffer_pool_segment_size(odp_pool_t pool) -{ - return odp_pool_to_entry(pool)->s.seg_size; + return pool_entry(_odp_typeval(pool_hdl)); }
static inline uint32_t odp_buffer_pool_headroom(odp_pool_t pool) { - return odp_pool_to_entry(pool)->s.headroom; + return odp_pool_to_entry(pool)->headroom; }
static inline uint32_t odp_buffer_pool_tailroom(odp_pool_t pool) { - return odp_pool_to_entry(pool)->s.tailroom; + return odp_pool_to_entry(pool)->tailroom; }
-odp_pool_t _pool_create(const char *name, - odp_pool_param_t *params, - uint32_t shmflags); - #ifdef __cplusplus } #endif diff --git a/platform/linux-generic/include/odp_timer_internal.h b/platform/linux-generic/include/odp_timer_internal.h index b1cd73f..91b12c5 100644 --- a/platform/linux-generic/include/odp_timer_internal.h +++ b/platform/linux-generic/include/odp_timer_internal.h @@ -35,8 +35,4 @@ typedef struct { odp_timer_t timer; } odp_timeout_hdr_t;
-typedef struct odp_timeout_hdr_stride { - uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_timeout_hdr_t))]; -} odp_timeout_hdr_stride; - #endif diff --git a/platform/linux-generic/odp_buffer.c b/platform/linux-generic/odp_buffer.c index ce2fdba..0ddaf95 100644 --- a/platform/linux-generic/odp_buffer.c +++ b/platform/linux-generic/odp_buffer.c @@ -31,7 +31,6 @@ void *odp_buffer_addr(odp_buffer_t buf) return hdr->addr[0]; }
- uint32_t odp_buffer_size(odp_buffer_t buf) { odp_buffer_hdr_t *hdr = odp_buf_to_hdr(buf); @@ -39,12 +38,6 @@ uint32_t odp_buffer_size(odp_buffer_t buf) return hdr->size; }
-int odp_buffer_is_valid(odp_buffer_t buf) -{ - return validate_buf(buf) != NULL; -} - - int odp_buffer_snprint(char *str, uint32_t n, odp_buffer_t buf) { odp_buffer_hdr_t *hdr; @@ -72,7 +65,6 @@ int odp_buffer_snprint(char *str, uint32_t n, odp_buffer_t buf) return len; }
- void odp_buffer_print(odp_buffer_t buf) { int max_len = 512; diff --git a/platform/linux-generic/odp_classification.c b/platform/linux-generic/odp_classification.c index de72cfb..50a7e54 100644 --- a/platform/linux-generic/odp_classification.c +++ b/platform/linux-generic/odp_classification.c @@ -16,7 +16,6 @@ #include <odp_classification_datamodel.h> #include <odp_classification_inlines.h> #include <odp_classification_internal.h> -#include <odp_pool_internal.h> #include <odp/api/shared_memory.h> #include <protocols/eth.h> #include <protocols/ip.h> @@ -159,7 +158,6 @@ odp_cos_t odp_cls_cos_create(const char *name, odp_cls_cos_param_t *param) { int i, j; queue_entry_t *queue; - pool_entry_t *pool; odp_cls_drop_t drop_policy;
/* Packets are dropped if Queue or Pool is invalid*/ @@ -168,11 +166,6 @@ odp_cos_t odp_cls_cos_create(const char *name, odp_cls_cos_param_t *param) else queue = queue_to_qentry(param->queue);
- if (param->pool == ODP_POOL_INVALID) - pool = NULL; - else - pool = odp_pool_to_entry(param->pool); - drop_policy = param->drop_policy;
for (i = 0; i < ODP_COS_MAX_ENTRY; i++) { @@ -191,7 +184,7 @@ odp_cos_t odp_cls_cos_create(const char *name, odp_cls_cos_param_t *param) cos_tbl->cos_entry[i].s.linked_cos[j] = NULL; } cos_tbl->cos_entry[i].s.queue = queue; - cos_tbl->cos_entry[i].s.pool = pool; + cos_tbl->cos_entry[i].s.pool = param->pool; cos_tbl->cos_entry[i].s.flow_set = 0; cos_tbl->cos_entry[i].s.headroom = 0; cos_tbl->cos_entry[i].s.valid = 1; @@ -555,7 +548,7 @@ odp_pmr_t odp_cls_pmr_create(const odp_pmr_param_t *terms, int num_terms, return id; }
-int odp_cls_cos_pool_set(odp_cos_t cos_id, odp_pool_t pool_id) +int odp_cls_cos_pool_set(odp_cos_t cos_id, odp_pool_t pool) { cos_t *cos;
@@ -565,10 +558,7 @@ int odp_cls_cos_pool_set(odp_cos_t cos_id, odp_pool_t pool_id) return -1; }
- if (pool_id == ODP_POOL_INVALID) - cos->s.pool = NULL; - else - cos->s.pool = odp_pool_to_entry(pool_id); + cos->s.pool = pool;
return 0; } @@ -583,10 +573,7 @@ odp_pool_t odp_cls_cos_pool(odp_cos_t cos_id) return ODP_POOL_INVALID; }
- if (!cos->s.pool) - return ODP_POOL_INVALID; - - return cos->s.pool->s.pool_hdl; + return cos->s.pool; }
int verify_pmr(pmr_t *pmr, const uint8_t *pkt_addr, odp_packet_hdr_t *pkt_hdr) @@ -832,10 +819,10 @@ int cls_classify_packet(pktio_entry_t *entry, const uint8_t *base, if (cos == NULL) return -EINVAL;
- if (cos->s.queue == NULL || cos->s.pool == NULL) + if (cos->s.queue == NULL || cos->s.pool == ODP_POOL_INVALID) return -EFAULT;
- *pool = cos->s.pool->s.pool_hdl; + *pool = cos->s.pool; pkt_hdr->p.input_flags.dst_queue = 1; pkt_hdr->dst_queue = cos->s.queue->s.handle;
diff --git a/platform/linux-generic/odp_crypto.c b/platform/linux-generic/odp_crypto.c index 9e09d42..3ebabb7 100644 --- a/platform/linux-generic/odp_crypto.c +++ b/platform/linux-generic/odp_crypto.c @@ -40,7 +40,9 @@ static odp_crypto_global_t *global; static odp_crypto_generic_op_result_t *get_op_result_from_event(odp_event_t ev) { - return &(odp_packet_hdr(odp_packet_from_event(ev))->op_result); + odp_packet_hdr_t *hdr = odp_packet_hdr(odp_packet_from_event(ev)); + + return &hdr->op_result; }
static diff --git a/platform/linux-generic/odp_packet.c b/platform/linux-generic/odp_packet.c index c2b26fd..6df1c5b 100644 --- a/platform/linux-generic/odp_packet.c +++ b/platform/linux-generic/odp_packet.c @@ -48,7 +48,7 @@ void packet_parse_reset(odp_packet_hdr_t *pkt_hdr) /** * Initialize packet */ -static void packet_init(pool_entry_t *pool, odp_packet_hdr_t *pkt_hdr, +static void packet_init(pool_t *pool, odp_packet_hdr_t *pkt_hdr, size_t size, int parse) { pkt_hdr->p.parsed_layers = LAYER_NONE; @@ -71,10 +71,8 @@ static void packet_init(pool_entry_t *pool, odp_packet_hdr_t *pkt_hdr, * segment occupied by the allocated length. */ pkt_hdr->frame_len = size; - pkt_hdr->headroom = pool->s.headroom; - pkt_hdr->tailroom = - (pool->s.seg_size * pkt_hdr->buf_hdr.segcount) - - (pool->s.headroom + size); + pkt_hdr->headroom = pool->headroom; + pkt_hdr->tailroom = pool->data_size - size + pool->tailroom;
pkt_hdr->input = ODP_PKTIO_INVALID; } @@ -83,10 +81,10 @@ int packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, odp_packet_t pkt[], int max_num) { odp_packet_hdr_t *pkt_hdr; - pool_entry_t *pool = odp_pool_to_entry(pool_hdl); + pool_t *pool = odp_pool_to_entry(pool_hdl); int num, i;
- num = buffer_alloc_multi(pool_hdl, len, (odp_buffer_t *)pkt, max_num); + num = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)pkt, max_num);
for (i = 0; i < num; i++) { pkt_hdr = odp_packet_hdr(pkt[i]); @@ -101,18 +99,22 @@ int packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len,
odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t len) { - pool_entry_t *pool = odp_pool_to_entry(pool_hdl); - size_t pkt_size = len ? len : pool->s.params.buf.size; + pool_t *pool = odp_pool_to_entry(pool_hdl); + size_t pkt_size = len ? len : pool->data_size; odp_packet_t pkt; odp_packet_hdr_t *pkt_hdr; + int ret;
- if (pool->s.params.type != ODP_POOL_PACKET) { + if (odp_unlikely(pool->params.type != ODP_POOL_PACKET)) { __odp_errno = EINVAL; return ODP_PACKET_INVALID; }
- pkt = (odp_packet_t)buffer_alloc(pool_hdl, pkt_size); - if (pkt == ODP_PACKET_INVALID) + if (odp_unlikely(len > pool->max_len)) + return ODP_PACKET_INVALID; + + ret = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)&pkt, 1); + if (ret != 1) return ODP_PACKET_INVALID;
pkt_hdr = odp_packet_hdr(pkt); @@ -129,17 +131,19 @@ odp_packet_t odp_packet_alloc(odp_pool_t pool_hdl, uint32_t len) int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len, odp_packet_t pkt[], int num) { - pool_entry_t *pool = odp_pool_to_entry(pool_hdl); - size_t pkt_size = len ? len : pool->s.params.buf.size; + pool_t *pool = odp_pool_to_entry(pool_hdl); + size_t pkt_size = len ? len : pool->data_size; int count, i;
- if (pool->s.params.type != ODP_POOL_PACKET) { + if (odp_unlikely(pool->params.type != ODP_POOL_PACKET)) { __odp_errno = EINVAL; return -1; }
- count = buffer_alloc_multi(pool_hdl, pkt_size, - (odp_buffer_t *)pkt, num); + if (odp_unlikely(len > pool->max_len)) + return -1; + + count = buffer_alloc_multi(pool_hdl, (odp_buffer_t *)pkt, num);
for (i = 0; i < count; ++i) { odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt[i]); @@ -157,25 +161,20 @@ int odp_packet_alloc_multi(odp_pool_t pool_hdl, uint32_t len,
void odp_packet_free(odp_packet_t pkt) { - uint32_t pool_id = pool_id_from_buf((odp_buffer_t)pkt); - - buffer_free(pool_id, (odp_buffer_t)pkt); + buffer_free_multi((odp_buffer_t *)&pkt, 1); }
void odp_packet_free_multi(const odp_packet_t pkt[], int num) { - uint32_t pool_id = pool_id_from_buf((odp_buffer_t)pkt[0]); - - buffer_free_multi(pool_id, (const odp_buffer_t * const)pkt, num); + buffer_free_multi((const odp_buffer_t * const)pkt, num); }
int odp_packet_reset(odp_packet_t pkt, uint32_t len) { odp_packet_hdr_t *const pkt_hdr = odp_packet_hdr(pkt); - pool_entry_t *pool = odp_buf_to_pool(&pkt_hdr->buf_hdr); - uint32_t totsize = pool->s.headroom + len + pool->s.tailroom; + pool_t *pool = odp_pool_to_entry(pkt_hdr->buf_hdr.pool_hdl);
- if (totsize > pkt_hdr->buf_hdr.size) + if (len > pool->headroom + pool->data_size + pool->tailroom) return -1;
packet_init(pool, pkt_hdr, len, 0); @@ -381,14 +380,8 @@ void *odp_packet_offset(odp_packet_t pkt, uint32_t offset, uint32_t *len, odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); void *addr = packet_map(pkt_hdr, offset, len);
- if (addr != NULL && seg != NULL) { - odp_buffer_bits_t seghandle; - - seghandle.handle = (odp_buffer_t)pkt; - seghandle.seg = (pkt_hdr->headroom + offset) / - pkt_hdr->buf_hdr.segsize; - *seg = (odp_packet_seg_t)seghandle.handle; - } + if (addr != NULL && seg != NULL) + *seg = (odp_packet_seg_t)pkt;
return addr; } @@ -581,20 +574,19 @@ odp_packet_seg_t odp_packet_first_seg(odp_packet_t pkt)
odp_packet_seg_t odp_packet_last_seg(odp_packet_t pkt) { - odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); - odp_buffer_bits_t seghandle; + (void)pkt;
- seghandle.handle = (odp_buffer_t)pkt; - seghandle.seg = pkt_hdr->buf_hdr.segcount - 1; - return (odp_packet_seg_t)seghandle.handle; + /* Only one segment */ + return (odp_packet_seg_t)pkt; }
odp_packet_seg_t odp_packet_next_seg(odp_packet_t pkt, odp_packet_seg_t seg) { - odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + (void)pkt; + (void)seg;
- return (odp_packet_seg_t)segment_next(&pkt_hdr->buf_hdr, - (odp_buffer_seg_t)seg); + /* Only one segment */ + return ODP_PACKET_SEG_INVALID; }
/* @@ -606,21 +598,18 @@ odp_packet_seg_t odp_packet_next_seg(odp_packet_t pkt, odp_packet_seg_t seg)
void *odp_packet_seg_data(odp_packet_t pkt, odp_packet_seg_t seg) { - odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); + (void)seg;
- return segment_map(&pkt_hdr->buf_hdr, (odp_buffer_seg_t)seg, NULL, - pkt_hdr->frame_len, pkt_hdr->headroom); + /* Only one segment */ + return odp_packet_data(pkt); }
uint32_t odp_packet_seg_data_len(odp_packet_t pkt, odp_packet_seg_t seg) { - odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); - uint32_t seglen = 0; + (void)seg;
- segment_map(&pkt_hdr->buf_hdr, (odp_buffer_seg_t)seg, &seglen, - pkt_hdr->frame_len, pkt_hdr->headroom); - - return seglen; + /* Only one segment */ + return odp_packet_seg_len(pkt); }
/* @@ -960,9 +949,13 @@ void odp_packet_print(odp_packet_t pkt)
int odp_packet_is_valid(odp_packet_t pkt) { - odp_buffer_hdr_t *buf = validate_buf((odp_buffer_t)pkt); + if (odp_buffer_is_valid((odp_buffer_t)pkt) == 0) + return 0; + + if (odp_event_type(odp_packet_to_event(pkt)) != ODP_EVENT_PACKET) + return 0;
- return (buf != NULL && buf->type == ODP_EVENT_PACKET); + return 1; }
/* diff --git a/platform/linux-generic/odp_pool.c b/platform/linux-generic/odp_pool.c index 415c9fa..1286753 100644 --- a/platform/linux-generic/odp_pool.c +++ b/platform/linux-generic/odp_pool.c @@ -4,77 +4,71 @@ * SPDX-License-Identifier: BSD-3-Clause */
-#include <odp/api/std_types.h> #include <odp/api/pool.h> -#include <odp_buffer_internal.h> -#include <odp_pool_internal.h> -#include <odp_buffer_inlines.h> -#include <odp_packet_internal.h> -#include <odp_timer_internal.h> -#include <odp_align_internal.h> #include <odp/api/shared_memory.h> #include <odp/api/align.h> +#include <odp/api/ticketlock.h> + +#include <odp_pool_internal.h> #include <odp_internal.h> +#include <odp_buffer_inlines.h> +#include <odp_packet_internal.h> #include <odp_config_internal.h> -#include <odp/api/hints.h> -#include <odp/api/thread.h> #include <odp_debug_internal.h> +#include <odp_ring_internal.h>
#include <string.h> -#include <stdlib.h> +#include <stdio.h> #include <inttypes.h>
-#if ODP_CONFIG_POOLS > ODP_BUFFER_MAX_POOLS -#error ODP_CONFIG_POOLS > ODP_BUFFER_MAX_POOLS -#endif - - -typedef union buffer_type_any_u { - odp_buffer_hdr_t buf; - odp_packet_hdr_t pkt; - odp_timeout_hdr_t tmo; -} odp_anybuf_t; +#include <odp/api/plat/ticketlock_inlines.h> +#define LOCK(a) _odp_ticketlock_lock(a) +#define UNLOCK(a) _odp_ticketlock_unlock(a) +#define LOCK_INIT(a) odp_ticketlock_init(a)
-/* Any buffer type header */ -typedef struct { - union buffer_type_any_u any_hdr; /* any buffer type */ -} odp_any_buffer_hdr_t; - -typedef struct odp_any_hdr_stride { - uint8_t pad[ODP_CACHE_LINE_SIZE_ROUNDUP(sizeof(odp_any_buffer_hdr_t))]; -} odp_any_hdr_stride; +#define CACHE_BURST 32 +#define RING_SIZE_MIN (2 * CACHE_BURST)
+ODP_STATIC_ASSERT(CONFIG_POOL_CACHE_SIZE > (2 * CACHE_BURST), + "cache_burst_size_too_large_compared_to_cache_size");
typedef struct pool_table_t { - pool_entry_t pool[ODP_CONFIG_POOLS]; + pool_t pool[ODP_CONFIG_POOLS]; + odp_shm_t shm; } pool_table_t;
- -/* The pool table */ -static pool_table_t *pool_tbl; -static const char SHM_DEFAULT_NAME[] = "odp_buffer_pools"; - -/* Pool entry pointers (for inlining) */ -void *pool_entry_ptr[ODP_CONFIG_POOLS]; - /* Thread local variables */ typedef struct pool_local_t { - local_cache_t *cache[ODP_CONFIG_POOLS]; + pool_cache_t *cache[ODP_CONFIG_POOLS]; int thr_id; } pool_local_t;
+static pool_table_t *pool_tbl; static __thread pool_local_t local;
-static void flush_cache(local_cache_t *buf_cache, struct pool_entry_s *pool); +static inline odp_pool_t pool_index_to_handle(uint32_t pool_idx) +{ + return _odp_cast_scalar(odp_pool_t, pool_idx); +} + +pool_t *pool_entry(uint32_t pool_idx) +{ + return &pool_tbl->pool[pool_idx]; +} + +static inline pool_t *pool_entry_from_hdl(odp_pool_t pool_hdl) +{ + return &pool_tbl->pool[_odp_typeval(pool_hdl)]; +}
int odp_pool_init_global(void) { uint32_t i; odp_shm_t shm;
- shm = odp_shm_reserve(SHM_DEFAULT_NAME, + shm = odp_shm_reserve("_odp_pool_table", sizeof(pool_table_t), - sizeof(pool_entry_t), 0); + ODP_CACHE_LINE_SIZE, 0);
pool_tbl = odp_shm_addr(shm);
@@ -82,1079 +76,766 @@ int odp_pool_init_global(void) return -1;
memset(pool_tbl, 0, sizeof(pool_table_t)); + pool_tbl->shm = shm;
for (i = 0; i < ODP_CONFIG_POOLS; i++) { - /* init locks */ - pool_entry_t *pool = &pool_tbl->pool[i]; - POOL_LOCK_INIT(&pool->s.lock); - POOL_LOCK_INIT(&pool->s.buf_lock); - POOL_LOCK_INIT(&pool->s.blk_lock); - pool->s.pool_hdl = pool_index_to_handle(i); - pool->s.pool_id = i; - pool_entry_ptr[i] = pool; - odp_atomic_init_u32(&pool->s.bufcount, 0); - odp_atomic_init_u32(&pool->s.blkcount, 0); - - /* Initialize pool statistics counters */ - odp_atomic_init_u64(&pool->s.poolstats.bufallocs, 0); - odp_atomic_init_u64(&pool->s.poolstats.buffrees, 0); - odp_atomic_init_u64(&pool->s.poolstats.blkallocs, 0); - odp_atomic_init_u64(&pool->s.poolstats.blkfrees, 0); - odp_atomic_init_u64(&pool->s.poolstats.bufempty, 0); - odp_atomic_init_u64(&pool->s.poolstats.blkempty, 0); - odp_atomic_init_u64(&pool->s.poolstats.buf_high_wm_count, 0); - odp_atomic_init_u64(&pool->s.poolstats.buf_low_wm_count, 0); - odp_atomic_init_u64(&pool->s.poolstats.blk_high_wm_count, 0); - odp_atomic_init_u64(&pool->s.poolstats.blk_low_wm_count, 0); + pool_t *pool = pool_entry(i); + + LOCK_INIT(&pool->lock); + pool->pool_hdl = pool_index_to_handle(i); + pool->pool_idx = i; }
ODP_DBG("\nPool init global\n"); - ODP_DBG(" pool_entry_s size %zu\n", sizeof(struct pool_entry_s)); - ODP_DBG(" pool_entry_t size %zu\n", sizeof(pool_entry_t)); ODP_DBG(" odp_buffer_hdr_t size %zu\n", sizeof(odp_buffer_hdr_t)); + ODP_DBG(" odp_packet_hdr_t size %zu\n", sizeof(odp_packet_hdr_t)); ODP_DBG("\n"); return 0; }
-int odp_pool_init_local(void) -{ - pool_entry_t *pool; - int i; - int thr_id = odp_thread_id(); - - memset(&local, 0, sizeof(pool_local_t)); - - for (i = 0; i < ODP_CONFIG_POOLS; i++) { - pool = get_pool_entry(i); - local.cache[i] = &pool->s.local_cache[thr_id]; - local.cache[i]->s.num_buf = 0; - } - - local.thr_id = thr_id; - return 0; -} - int odp_pool_term_global(void) { int i; - pool_entry_t *pool; + pool_t *pool; int ret = 0; int rc = 0;
for (i = 0; i < ODP_CONFIG_POOLS; i++) { - pool = get_pool_entry(i); + pool = pool_entry(i);
- POOL_LOCK(&pool->s.lock); - if (pool->s.pool_shm != ODP_SHM_INVALID) { - ODP_ERR("Not destroyed pool: %s\n", pool->s.name); + LOCK(&pool->lock); + if (pool->reserved) { + ODP_ERR("Not destroyed pool: %s\n", pool->name); rc = -1; } - POOL_UNLOCK(&pool->s.lock); + UNLOCK(&pool->lock); }
- ret = odp_shm_free(odp_shm_lookup(SHM_DEFAULT_NAME)); + ret = odp_shm_free(pool_tbl->shm); if (ret < 0) { - ODP_ERR("shm free failed for %s", SHM_DEFAULT_NAME); + ODP_ERR("shm free failed"); rc = -1; }
return rc; }
-int odp_pool_term_local(void) +int odp_pool_init_local(void) { + pool_t *pool; int i; + int thr_id = odp_thread_id();
- for (i = 0; i < ODP_CONFIG_POOLS; i++) { - pool_entry_t *pool = get_pool_entry(i); + memset(&local, 0, sizeof(pool_local_t));
- flush_cache(local.cache[i], &pool->s); + for (i = 0; i < ODP_CONFIG_POOLS; i++) { + pool = pool_entry(i); + local.cache[i] = &pool->local_cache[thr_id]; + local.cache[i]->num = 0; }
+ local.thr_id = thr_id; return 0; }
-int odp_pool_capability(odp_pool_capability_t *capa) +static void flush_cache(pool_cache_t *cache, pool_t *pool) { - memset(capa, 0, sizeof(odp_pool_capability_t)); + ring_t *ring; + uint32_t mask; + uint32_t cache_num, i, data;
- capa->max_pools = ODP_CONFIG_POOLS; + ring = &pool->ring.hdr; + mask = pool->ring_mask; + cache_num = cache->num;
- /* Buffer pools */ - capa->buf.max_pools = ODP_CONFIG_POOLS; - capa->buf.max_align = ODP_CONFIG_BUFFER_ALIGN_MAX; - capa->buf.max_size = 0; - capa->buf.max_num = 0; + for (i = 0; i < cache_num; i++) { + data = (uint32_t)(uintptr_t)cache->buf[i]; + ring_enq(ring, mask, data); + }
- /* Packet pools */ - capa->pkt.max_pools = ODP_CONFIG_POOLS; - capa->pkt.max_len = ODP_CONFIG_PACKET_MAX_SEGS * - ODP_CONFIG_PACKET_SEG_LEN_MIN; - capa->pkt.max_num = 0; - capa->pkt.min_headroom = ODP_CONFIG_PACKET_HEADROOM; - capa->pkt.min_tailroom = ODP_CONFIG_PACKET_TAILROOM; - capa->pkt.max_segs_per_pkt = ODP_CONFIG_PACKET_MAX_SEGS; - capa->pkt.min_seg_len = ODP_CONFIG_PACKET_SEG_LEN_MIN; - capa->pkt.max_seg_len = ODP_CONFIG_PACKET_SEG_LEN_MAX; - capa->pkt.max_uarea_size = 0; + cache->num = 0; +}
- /* Timeout pools */ - capa->tmo.max_pools = ODP_CONFIG_POOLS; - capa->tmo.max_num = 0; +int odp_pool_term_local(void) +{ + int i; + + for (i = 0; i < ODP_CONFIG_POOLS; i++) { + pool_t *pool = pool_entry(i); + + flush_cache(local.cache[i], pool); + }
return 0; }
-static inline odp_buffer_hdr_t *get_buf(struct pool_entry_s *pool) +static pool_t *reserve_pool(void) { - odp_buffer_hdr_t *myhead; - - POOL_LOCK(&pool->buf_lock); - - myhead = pool->buf_freelist; + int i; + pool_t *pool;
- if (odp_unlikely(myhead == NULL)) { - POOL_UNLOCK(&pool->buf_lock); - odp_atomic_inc_u64(&pool->poolstats.bufempty); - } else { - pool->buf_freelist = myhead->next; - POOL_UNLOCK(&pool->buf_lock); + for (i = 0; i < ODP_CONFIG_POOLS; i++) { + pool = pool_entry(i);
- odp_atomic_fetch_sub_u32(&pool->bufcount, 1); - odp_atomic_inc_u64(&pool->poolstats.bufallocs); + LOCK(&pool->lock); + if (pool->reserved == 0) { + pool->reserved = 1; + UNLOCK(&pool->lock); + return pool; + } + UNLOCK(&pool->lock); }
- return (void *)myhead; + return NULL; }
-static inline void ret_buf(struct pool_entry_s *pool, odp_buffer_hdr_t *buf) +static odp_buffer_t form_buffer_handle(uint32_t pool_idx, uint32_t buffer_idx) { - if (!buf->flags.hdrdata && buf->type != ODP_EVENT_BUFFER) { - while (buf->segcount > 0) { - if (buffer_is_secure(buf) || pool_is_secure(pool)) - memset(buf->addr[buf->segcount - 1], - 0, buf->segsize); - ret_blk(pool, buf->addr[--buf->segcount]); - } - buf->size = 0; - } + odp_buffer_bits_t bits;
- buf->allocator = ODP_FREEBUF; /* Mark buffer free */ - POOL_LOCK(&pool->buf_lock); - buf->next = pool->buf_freelist; - pool->buf_freelist = buf; - POOL_UNLOCK(&pool->buf_lock); + bits.handle = 0; + bits.pool_id = pool_idx; + bits.index = buffer_idx;
- odp_atomic_fetch_add_u32(&pool->bufcount, 1); - odp_atomic_inc_u64(&pool->poolstats.buffrees); + return bits.handle; }
-/* - * Pool creation - */ -odp_pool_t _pool_create(const char *name, - odp_pool_param_t *params, - uint32_t shmflags) +static void init_buffers(pool_t *pool) { - odp_pool_t pool_hdl = ODP_POOL_INVALID; - pool_entry_t *pool; - uint32_t i, headroom = 0, tailroom = 0; - odp_shm_t shm; + uint32_t i; + odp_buffer_hdr_t *buf_hdr; + odp_packet_hdr_t *pkt_hdr; + odp_buffer_t buf_hdl; + void *addr; + void *uarea = NULL; + uint8_t *data; + uint32_t offset; + ring_t *ring; + uint32_t mask; + int type; + uint32_t size; + + ring = &pool->ring.hdr; + mask = pool->ring_mask; + type = pool->params.type; + + for (i = 0; i < pool->num; i++) { + addr = &pool->base_addr[i * pool->block_size]; + buf_hdr = addr; + pkt_hdr = addr; + + if (pool->uarea_size) + uarea = &pool->uarea_base_addr[i * pool->uarea_size]; + + data = buf_hdr->data; + + if (type == ODP_POOL_PACKET) + data = pkt_hdr->data; + + offset = pool->headroom; + + /* move to correct align */ + while (((uintptr_t)&data[offset]) % pool->align != 0) + offset++; + + memset(buf_hdr, 0, sizeof(odp_buffer_hdr_t)); + + size = pool->headroom + pool->data_size + pool->tailroom; + + /* Initialize buffer metadata */ + buf_hdr->size = size; + buf_hdr->type = type; + buf_hdr->event_type = type; + buf_hdr->pool_hdl = pool->pool_hdl; + buf_hdr->uarea_addr = uarea; + /* Show user requested size through API */ + buf_hdr->uarea_size = pool->params.pkt.uarea_size; + buf_hdr->segcount = 1; + buf_hdr->segsize = size; + + /* Pointer to data start (of the first segment) */ + buf_hdr->addr[0] = &data[offset]; + + buf_hdl = form_buffer_handle(pool->pool_idx, i); + buf_hdr->handle.handle = buf_hdl; + + /* Store buffer into the global pool */ + ring_enq(ring, mask, (uint32_t)(uintptr_t)buf_hdl); + } +}
- if (params == NULL) +static odp_pool_t pool_create(const char *name, odp_pool_param_t *params, + uint32_t shmflags) +{ + pool_t *pool; + uint32_t uarea_size, headroom, tailroom; + odp_shm_t shm; + uint32_t data_size, align, num, hdr_size, block_size; + uint32_t max_len, max_seg_len; + uint32_t ring_size; + int name_len; + const char *postfix = "_uarea"; + char uarea_name[ODP_POOL_NAME_LEN + sizeof(postfix)]; + + if (params == NULL) { + ODP_ERR("No params"); return ODP_POOL_INVALID; - - /* Default size and align for timeouts */ - if (params->type == ODP_POOL_TIMEOUT) { - params->buf.size = 0; /* tmo.__res1 */ - params->buf.align = 0; /* tmo.__res2 */ }
- /* Default initialization parameters */ - uint32_t p_udata_size = 0; - uint32_t udata_stride = 0; + align = 0;
- /* Restriction for v1.0: All non-packet buffers are unsegmented */ - int unseg = 1; + if (params->type == ODP_POOL_BUFFER) + align = params->buf.align;
- uint32_t blk_size, buf_stride, buf_num, blk_num, seg_len = 0; - uint32_t buf_align = - params->type == ODP_POOL_BUFFER ? params->buf.align : 0; + if (align < ODP_CONFIG_BUFFER_ALIGN_MIN) + align = ODP_CONFIG_BUFFER_ALIGN_MIN;
/* Validate requested buffer alignment */ - if (buf_align > ODP_CONFIG_BUFFER_ALIGN_MAX || - buf_align != ODP_ALIGN_ROUNDDOWN_POWER_2(buf_align, buf_align)) + if (align > ODP_CONFIG_BUFFER_ALIGN_MAX || + align != ODP_ALIGN_ROUNDDOWN_POWER_2(align, align)) { + ODP_ERR("Bad align requirement"); return ODP_POOL_INVALID; + }
- /* Set correct alignment based on input request */ - if (buf_align == 0) - buf_align = ODP_CACHE_LINE_SIZE; - else if (buf_align < ODP_CONFIG_BUFFER_ALIGN_MIN) - buf_align = ODP_CONFIG_BUFFER_ALIGN_MIN; + headroom = 0; + tailroom = 0; + data_size = 0; + max_len = 0; + max_seg_len = 0; + uarea_size = 0;
- /* Calculate space needed for buffer blocks and metadata */ switch (params->type) { case ODP_POOL_BUFFER: - buf_num = params->buf.num; - blk_size = params->buf.size; - - /* Optimize small raw buffers */ - if (blk_size > ODP_MAX_INLINE_BUF || params->buf.align != 0) - blk_size = ODP_ALIGN_ROUNDUP(blk_size, buf_align); - - buf_stride = sizeof(odp_buffer_hdr_stride); + num = params->buf.num; + data_size = params->buf.size; break;
case ODP_POOL_PACKET: - unseg = 0; /* Packets are always segmented */ - headroom = ODP_CONFIG_PACKET_HEADROOM; - tailroom = ODP_CONFIG_PACKET_TAILROOM; - buf_num = params->pkt.num; - - seg_len = params->pkt.seg_len <= ODP_CONFIG_PACKET_SEG_LEN_MIN ? - ODP_CONFIG_PACKET_SEG_LEN_MIN : - (params->pkt.seg_len <= ODP_CONFIG_PACKET_SEG_LEN_MAX ? - params->pkt.seg_len : ODP_CONFIG_PACKET_SEG_LEN_MAX); - - seg_len = ODP_ALIGN_ROUNDUP( - headroom + seg_len + tailroom, - ODP_CONFIG_BUFFER_ALIGN_MIN); - - blk_size = params->pkt.len <= seg_len ? seg_len : - ODP_ALIGN_ROUNDUP(params->pkt.len, seg_len); - - /* Reject create if pkt.len needs too many segments */ - if (blk_size / seg_len > ODP_BUFFER_MAX_SEG) { - ODP_ERR("ODP_BUFFER_MAX_SEG exceed %d(%d)\n", - blk_size / seg_len, ODP_BUFFER_MAX_SEG); + headroom = ODP_CONFIG_PACKET_HEADROOM; + tailroom = ODP_CONFIG_PACKET_TAILROOM; + num = params->pkt.num; + uarea_size = params->pkt.uarea_size; + + data_size = ODP_CONFIG_PACKET_SEG_LEN_MAX; + + if (data_size < ODP_CONFIG_PACKET_SEG_LEN_MIN) + data_size = ODP_CONFIG_PACKET_SEG_LEN_MIN; + + if (data_size > ODP_CONFIG_PACKET_SEG_LEN_MAX) { + ODP_ERR("Too large seg len requirement"); return ODP_POOL_INVALID; }
- p_udata_size = params->pkt.uarea_size; - udata_stride = ODP_ALIGN_ROUNDUP(p_udata_size, - sizeof(uint64_t)); - - buf_stride = sizeof(odp_packet_hdr_stride); + max_seg_len = ODP_CONFIG_PACKET_SEG_LEN_MAX - + ODP_CONFIG_PACKET_HEADROOM - + ODP_CONFIG_PACKET_TAILROOM; + max_len = ODP_CONFIG_PACKET_MAX_SEGS * max_seg_len; break;
case ODP_POOL_TIMEOUT: - blk_size = 0; - buf_num = params->tmo.num; - buf_stride = sizeof(odp_timeout_hdr_stride); + num = params->tmo.num; break;
default: + ODP_ERR("Bad pool type"); return ODP_POOL_INVALID; }
- /* Validate requested number of buffers against addressable limits */ - if (buf_num > - (ODP_BUFFER_MAX_BUFFERS / (buf_stride / ODP_CACHE_LINE_SIZE))) { - ODP_ERR("buf_num %d > then expected %d\n", - buf_num, ODP_BUFFER_MAX_BUFFERS / - (buf_stride / ODP_CACHE_LINE_SIZE)); + if (uarea_size) + uarea_size = ODP_CACHE_LINE_SIZE_ROUNDUP(uarea_size); + + pool = reserve_pool(); + + if (pool == NULL) { + ODP_ERR("No more free pools"); return ODP_POOL_INVALID; }
- /* Find an unused buffer pool slot and iniitalize it as requested */ - for (i = 0; i < ODP_CONFIG_POOLS; i++) { - pool = get_pool_entry(i); + if (name == NULL) { + pool->name[0] = 0; + } else { + strncpy(pool->name, name, + ODP_POOL_NAME_LEN - 1); + pool->name[ODP_POOL_NAME_LEN - 1] = 0; + }
- POOL_LOCK(&pool->s.lock); - if (pool->s.pool_shm != ODP_SHM_INVALID) { - POOL_UNLOCK(&pool->s.lock); - continue; - } + name_len = strlen(pool->name); + memcpy(uarea_name, pool->name, name_len); + strcpy(&uarea_name[name_len], postfix);
- /* found free pool */ - size_t block_size, pad_size, mdata_size, udata_size; + pool->params = *params;
- pool->s.flags.all = 0; + hdr_size = sizeof(odp_packet_hdr_t); + hdr_size = ODP_CACHE_LINE_SIZE_ROUNDUP(hdr_size);
- if (name == NULL) { - pool->s.name[0] = 0; - } else { - strncpy(pool->s.name, name, - ODP_POOL_NAME_LEN - 1); - pool->s.name[ODP_POOL_NAME_LEN - 1] = 0; - pool->s.flags.has_name = 1; - } + block_size = ODP_CACHE_LINE_SIZE_ROUNDUP(hdr_size + align + headroom + + data_size + tailroom);
- pool->s.params = *params; - pool->s.buf_align = buf_align; + if (num <= RING_SIZE_MIN) + ring_size = RING_SIZE_MIN; + else + ring_size = ODP_ROUNDUP_POWER_2(num);
- /* Optimize for short buffers: Data stored in buffer hdr */ - if (blk_size <= ODP_MAX_INLINE_BUF) { - block_size = 0; - pool->s.buf_align = blk_size == 0 ? 0 : sizeof(void *); - } else { - block_size = buf_num * blk_size; - pool->s.buf_align = buf_align; - } + pool->ring_mask = ring_size - 1; + pool->num = num; + pool->align = align; + pool->headroom = headroom; + pool->data_size = data_size; + pool->max_len = max_len; + pool->max_seg_len = max_seg_len; + pool->tailroom = tailroom; + pool->block_size = block_size; + pool->uarea_size = uarea_size; + pool->shm_size = num * block_size; + pool->uarea_shm_size = num * uarea_size;
- pad_size = ODP_CACHE_LINE_SIZE_ROUNDUP(block_size) - block_size; - mdata_size = buf_num * buf_stride; - udata_size = buf_num * udata_stride; + shm = odp_shm_reserve(pool->name, pool->shm_size, + ODP_PAGE_SIZE, shmflags);
- pool->s.buf_num = buf_num; - pool->s.pool_size = ODP_PAGE_SIZE_ROUNDUP(block_size + - pad_size + - mdata_size + - udata_size); + pool->shm = shm;
- shm = odp_shm_reserve(pool->s.name, - pool->s.pool_size, - ODP_PAGE_SIZE, shmflags); - if (shm == ODP_SHM_INVALID) { - POOL_UNLOCK(&pool->s.lock); - return ODP_POOL_INVALID; - } - pool->s.pool_base_addr = odp_shm_addr(shm); - pool->s.pool_shm = shm; - - /* Now safe to unlock since pool entry has been allocated */ - POOL_UNLOCK(&pool->s.lock); - - pool->s.flags.unsegmented = unseg; - pool->s.seg_size = unseg ? blk_size : seg_len; - pool->s.blk_size = blk_size; - - uint8_t *block_base_addr = pool->s.pool_base_addr; - uint8_t *mdata_base_addr = - block_base_addr + block_size + pad_size; - uint8_t *udata_base_addr = mdata_base_addr + mdata_size; - - /* Pool mdata addr is used for indexing buffer metadata */ - pool->s.pool_mdata_addr = mdata_base_addr; - pool->s.udata_size = p_udata_size; - - pool->s.buf_stride = buf_stride; - pool->s.buf_freelist = NULL; - pool->s.blk_freelist = NULL; - - /* Initialization will increment these to their target vals */ - odp_atomic_store_u32(&pool->s.bufcount, 0); - odp_atomic_store_u32(&pool->s.blkcount, 0); - - uint8_t *buf = udata_base_addr - buf_stride; - uint8_t *udat = udata_stride == 0 ? NULL : - udata_base_addr + udata_size - udata_stride; - - /* Init buffer common header and add to pool buffer freelist */ - do { - odp_buffer_hdr_t *tmp = - (odp_buffer_hdr_t *)(void *)buf; - - /* Iniitalize buffer metadata */ - tmp->allocator = ODP_FREEBUF; - tmp->flags.all = 0; - tmp->size = 0; - tmp->type = params->type; - tmp->event_type = params->type; - tmp->pool_hdl = pool->s.pool_hdl; - tmp->uarea_addr = (void *)udat; - tmp->uarea_size = p_udata_size; - tmp->segcount = 0; - tmp->segsize = pool->s.seg_size; - tmp->handle.handle = odp_buffer_encode_handle(tmp); - - /* Set 1st seg addr for zero-len buffers */ - tmp->addr[0] = NULL; - - /* Special case for short buffer data */ - if (blk_size <= ODP_MAX_INLINE_BUF) { - tmp->flags.hdrdata = 1; - if (blk_size > 0) { - tmp->segcount = 1; - tmp->addr[0] = &tmp->addr[1]; - tmp->size = blk_size; - } - } - - /* Push buffer onto pool's freelist */ - ret_buf(&pool->s, tmp); - buf -= buf_stride; - udat -= udata_stride; - } while (buf >= mdata_base_addr); - - /* Form block freelist for pool */ - uint8_t *blk = - block_base_addr + block_size - pool->s.seg_size; - - if (blk_size > ODP_MAX_INLINE_BUF) - do { - ret_blk(&pool->s, blk); - blk -= pool->s.seg_size; - } while (blk >= block_base_addr); - - blk_num = odp_atomic_load_u32(&pool->s.blkcount); - - /* Initialize pool statistics counters */ - odp_atomic_store_u64(&pool->s.poolstats.bufallocs, 0); - odp_atomic_store_u64(&pool->s.poolstats.buffrees, 0); - odp_atomic_store_u64(&pool->s.poolstats.blkallocs, 0); - odp_atomic_store_u64(&pool->s.poolstats.blkfrees, 0); - odp_atomic_store_u64(&pool->s.poolstats.bufempty, 0); - odp_atomic_store_u64(&pool->s.poolstats.blkempty, 0); - odp_atomic_store_u64(&pool->s.poolstats.buf_high_wm_count, 0); - odp_atomic_store_u64(&pool->s.poolstats.buf_low_wm_count, 0); - odp_atomic_store_u64(&pool->s.poolstats.blk_high_wm_count, 0); - odp_atomic_store_u64(&pool->s.poolstats.blk_low_wm_count, 0); - - /* Reset other pool globals to initial state */ - pool->s.buf_low_wm_assert = 0; - pool->s.blk_low_wm_assert = 0; - pool->s.quiesced = 0; - pool->s.headroom = headroom; - pool->s.tailroom = tailroom; - - /* Watermarks are hard-coded for now to control caching */ - pool->s.buf_high_wm = buf_num / 2; - pool->s.buf_low_wm = buf_num / 4; - pool->s.blk_high_wm = blk_num / 2; - pool->s.blk_low_wm = blk_num / 4; - - pool_hdl = pool->s.pool_hdl; - break; + if 
(shm == ODP_SHM_INVALID) { + ODP_ERR("Shm reserve failed"); + goto error; }
- return pool_hdl; -} + pool->base_addr = odp_shm_addr(pool->shm);
-odp_pool_t odp_pool_create(const char *name, - odp_pool_param_t *params) -{ -#ifdef _ODP_PKTIO_IPC - if (params && (params->type == ODP_POOL_PACKET)) - return _pool_create(name, params, ODP_SHM_PROC); -#endif - return _pool_create(name, params, 0); - -} - -odp_pool_t odp_pool_lookup(const char *name) -{ - uint32_t i; - pool_entry_t *pool; + pool->uarea_shm = ODP_SHM_INVALID; + if (uarea_size) { + shm = odp_shm_reserve(uarea_name, pool->uarea_shm_size, + ODP_PAGE_SIZE, shmflags);
- for (i = 0; i < ODP_CONFIG_POOLS; i++) { - pool = get_pool_entry(i); + pool->uarea_shm = shm;
- POOL_LOCK(&pool->s.lock); - if (strcmp(name, pool->s.name) == 0) { - /* found it */ - POOL_UNLOCK(&pool->s.lock); - return pool->s.pool_hdl; + if (shm == ODP_SHM_INVALID) { + ODP_ERR("Shm reserve failed (uarea)"); + goto error; } - POOL_UNLOCK(&pool->s.lock); + + pool->uarea_base_addr = odp_shm_addr(pool->uarea_shm); }
- return ODP_POOL_INVALID; -} + ring_init(&pool->ring.hdr); + init_buffers(pool);
-int odp_pool_info(odp_pool_t pool_hdl, odp_pool_info_t *info) -{ - uint32_t pool_id = pool_handle_to_index(pool_hdl); - pool_entry_t *pool = get_pool_entry(pool_id); + return pool->pool_hdl;
- if (pool == NULL || info == NULL) - return -1; +error: + if (pool->shm != ODP_SHM_INVALID) + odp_shm_free(pool->shm);
- info->name = pool->s.name; - info->params = pool->s.params; + if (pool->uarea_shm != ODP_SHM_INVALID) + odp_shm_free(pool->uarea_shm);
- return 0; + LOCK(&pool->lock); + pool->reserved = 0; + UNLOCK(&pool->lock); + return ODP_POOL_INVALID; }
-static inline void get_local_cache_bufs(local_cache_t *buf_cache, uint32_t idx, - odp_buffer_hdr_t *buf_hdr[], - uint32_t num) -{ - uint32_t i;
- for (i = 0; i < num; i++) { - buf_hdr[i] = buf_cache->s.buf[idx + i]; - odp_prefetch(buf_hdr[i]); - odp_prefetch_store(buf_hdr[i]); - } -} - -static void flush_cache(local_cache_t *buf_cache, struct pool_entry_s *pool) +odp_pool_t odp_pool_create(const char *name, odp_pool_param_t *params) { - uint32_t flush_count = 0; - uint32_t num; - - while ((num = buf_cache->s.num_buf)) { - odp_buffer_hdr_t *buf; - - buf = buf_cache->s.buf[num - 1]; - ret_buf(pool, buf); - flush_count++; - buf_cache->s.num_buf--; - } - - odp_atomic_add_u64(&pool->poolstats.bufallocs, buf_cache->s.bufallocs); - odp_atomic_add_u64(&pool->poolstats.buffrees, - buf_cache->s.buffrees - flush_count); - - buf_cache->s.bufallocs = 0; - buf_cache->s.buffrees = 0; +#ifdef _ODP_PKTIO_IPC + if (params && (params->type == ODP_POOL_PACKET)) + return pool_create(name, params, ODP_SHM_PROC); +#endif + return pool_create(name, params, 0); }
int odp_pool_destroy(odp_pool_t pool_hdl) { - uint32_t pool_id = pool_handle_to_index(pool_hdl); - pool_entry_t *pool = get_pool_entry(pool_id); + pool_t *pool = pool_entry_from_hdl(pool_hdl); int i;
if (pool == NULL) return -1;
- POOL_LOCK(&pool->s.lock); + LOCK(&pool->lock);
- /* Call fails if pool is not allocated or predefined*/ - if (pool->s.pool_shm == ODP_SHM_INVALID || - pool->s.flags.predefined) { - POOL_UNLOCK(&pool->s.lock); - ODP_ERR("invalid shm for pool %s\n", pool->s.name); + if (pool->reserved == 0) { + UNLOCK(&pool->lock); + ODP_ERR("Pool not created\n"); return -1; }
/* Make sure local caches are empty */ for (i = 0; i < ODP_THREAD_COUNT_MAX; i++) - flush_cache(&pool->s.local_cache[i], &pool->s); - - /* Call fails if pool has allocated buffers */ - if (odp_atomic_load_u32(&pool->s.bufcount) < pool->s.buf_num) { - POOL_UNLOCK(&pool->s.lock); - ODP_DBG("error: pool has allocated buffers %d/%d\n", - odp_atomic_load_u32(&pool->s.bufcount), - pool->s.buf_num); - return -1; - } + flush_cache(&pool->local_cache[i], pool);
- odp_shm_free(pool->s.pool_shm); - pool->s.pool_shm = ODP_SHM_INVALID; - POOL_UNLOCK(&pool->s.lock); + odp_shm_free(pool->shm); + + if (pool->uarea_shm != ODP_SHM_INVALID) + odp_shm_free(pool->uarea_shm); + + pool->reserved = 0; + UNLOCK(&pool->lock);
return 0; }
-int seg_alloc_head(odp_buffer_hdr_t *buf_hdr, int segcount) +odp_buffer_hdr_t *odp_buf_to_hdr(odp_buffer_t buf) { - uint32_t pool_id = pool_handle_to_index(buf_hdr->pool_hdl); - pool_entry_t *pool = get_pool_entry(pool_id); - void *newsegs[segcount]; - int i; + odp_buffer_bits_t handle; + uint32_t pool_id, index, block_offset; + pool_t *pool; + odp_buffer_hdr_t *buf_hdr;
- for (i = 0; i < segcount; i++) { - newsegs[i] = get_blk(&pool->s); - if (newsegs[i] == NULL) { - while (--i >= 0) - ret_blk(&pool->s, newsegs[i]); - return -1; - } - } + handle.handle = buf; + pool_id = handle.pool_id; + index = handle.index; + pool = pool_entry(pool_id); + block_offset = index * pool->block_size;
- for (i = buf_hdr->segcount - 1; i >= 0; i--) - buf_hdr->addr[i + segcount] = buf_hdr->addr[i]; + /* clang requires cast to uintptr_t */ + buf_hdr = (odp_buffer_hdr_t *)(uintptr_t)&pool->base_addr[block_offset];
- for (i = 0; i < segcount; i++) - buf_hdr->addr[i] = newsegs[i]; + return buf_hdr; +}
- buf_hdr->segcount += segcount; - buf_hdr->size = buf_hdr->segcount * pool->s.seg_size; - return 0; +odp_event_type_t _odp_buffer_event_type(odp_buffer_t buf) +{ + return odp_buf_to_hdr(buf)->event_type; }
-void seg_free_head(odp_buffer_hdr_t *buf_hdr, int segcount) +void _odp_buffer_event_type_set(odp_buffer_t buf, int ev) { - uint32_t pool_id = pool_handle_to_index(buf_hdr->pool_hdl); - pool_entry_t *pool = get_pool_entry(pool_id); - int s_cnt = buf_hdr->segcount; - int i; + odp_buf_to_hdr(buf)->event_type = ev; +}
- for (i = 0; i < segcount; i++) - ret_blk(&pool->s, buf_hdr->addr[i]); +void *buffer_map(odp_buffer_hdr_t *buf, + uint32_t offset, + uint32_t *seglen, + uint32_t limit) +{ + int seg_index; + int seg_offset;
- for (i = 0; i < s_cnt - segcount; i++) - buf_hdr->addr[i] = buf_hdr->addr[i + segcount]; + if (odp_likely(offset < buf->segsize)) { + seg_index = 0; + seg_offset = offset; + } else { + ODP_ERR("\nSEGMENTS NOT SUPPORTED\n"); + return NULL; + }
- buf_hdr->segcount -= segcount; - buf_hdr->size = buf_hdr->segcount * pool->s.seg_size; + if (seglen != NULL) { + uint32_t buf_left = limit - offset; + *seglen = seg_offset + buf_left <= buf->segsize ? + buf_left : buf->segsize - seg_offset; + } + + return (void *)(seg_offset + (uint8_t *)buf->addr[seg_index]); }
-int seg_alloc_tail(odp_buffer_hdr_t *buf_hdr, int segcount) +odp_pool_t odp_pool_lookup(const char *name) { - uint32_t pool_id = pool_handle_to_index(buf_hdr->pool_hdl); - pool_entry_t *pool = get_pool_entry(pool_id); - uint32_t s_cnt = buf_hdr->segcount; - int i; + uint32_t i; + pool_t *pool;
- for (i = 0; i < segcount; i++) { - buf_hdr->addr[s_cnt + i] = get_blk(&pool->s); - if (buf_hdr->addr[s_cnt + i] == NULL) { - while (--i >= 0) - ret_blk(&pool->s, buf_hdr->addr[s_cnt + i]); - return -1; + for (i = 0; i < ODP_CONFIG_POOLS; i++) { + pool = pool_entry(i); + + LOCK(&pool->lock); + if (strcmp(name, pool->name) == 0) { + /* found it */ + UNLOCK(&pool->lock); + return pool->pool_hdl; } + UNLOCK(&pool->lock); }
- buf_hdr->segcount += segcount; - buf_hdr->size = buf_hdr->segcount * pool->s.seg_size; - return 0; + return ODP_POOL_INVALID; }
-void seg_free_tail(odp_buffer_hdr_t *buf_hdr, int segcount) +int odp_pool_info(odp_pool_t pool_hdl, odp_pool_info_t *info) { - uint32_t pool_id = pool_handle_to_index(buf_hdr->pool_hdl); - pool_entry_t *pool = get_pool_entry(pool_id); - int s_cnt = buf_hdr->segcount; - int i; + pool_t *pool = pool_entry_from_hdl(pool_hdl);
- for (i = s_cnt - 1; i >= s_cnt - segcount; i--) - ret_blk(&pool->s, buf_hdr->addr[i]); + if (pool == NULL || info == NULL) + return -1;
- buf_hdr->segcount -= segcount; - buf_hdr->size = buf_hdr->segcount * pool->s.seg_size; + info->name = pool->name; + info->params = pool->params; + + return 0; }
-static inline int get_local_bufs(local_cache_t *buf_cache, - odp_buffer_hdr_t *buf_hdr[], uint32_t max_num) +int buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], int max_num) { - uint32_t num_buf = buf_cache->s.num_buf; - uint32_t num = num_buf; + pool_t *pool; + ring_t *ring; + uint32_t mask; + int i; + pool_cache_t *cache; + uint32_t cache_num;
- if (odp_unlikely(num_buf == 0)) - return 0; + pool = pool_entry_from_hdl(pool_hdl); + ring = &pool->ring.hdr; + mask = pool->ring_mask; + cache = local.cache[_odp_typeval(pool_hdl)];
- if (odp_likely(max_num < num)) - num = max_num; + cache_num = cache->num;
- get_local_cache_bufs(buf_cache, num_buf - num, buf_hdr, num); - buf_cache->s.num_buf -= num; - buf_cache->s.bufallocs += num; + if (odp_likely((int)cache_num >= max_num)) { + for (i = 0; i < max_num; i++) + buf[i] = cache->buf[cache_num - max_num + i];
- return num; -} + cache->num = cache_num - max_num; + return max_num; + }
-static inline void ret_local_buf(local_cache_t *buf_cache, uint32_t idx, - odp_buffer_hdr_t *buf) -{ - buf_cache->s.buf[idx] = buf; - buf_cache->s.num_buf++; - buf_cache->s.buffrees++; -} + for (i = 0; i < max_num; i++) { + uint32_t data;
-static inline void ret_local_bufs(local_cache_t *buf_cache, uint32_t idx, - odp_buffer_hdr_t *buf[], int num_buf) -{ - int i; + data = ring_deq(ring, mask); + + if (data == RING_EMPTY) + break;
- for (i = 0; i < num_buf; i++) - buf_cache->s.buf[idx + i] = buf[i]; + buf[i] = (odp_buffer_t)(uintptr_t)data; + }
- buf_cache->s.num_buf += num_buf; - buf_cache->s.buffrees += num_buf; + return i; }
-int buffer_alloc_multi(odp_pool_t pool_hdl, size_t size, - odp_buffer_t buf[], int max_num) +static inline void buffer_free_to_pool(uint32_t pool_id, + const odp_buffer_t buf[], int num) { - uint32_t pool_id = pool_handle_to_index(pool_hdl); - pool_entry_t *pool = get_pool_entry(pool_id); - uintmax_t totsize = pool->s.headroom + size + pool->s.tailroom; - odp_buffer_hdr_t *buf_tbl[max_num]; - odp_buffer_hdr_t *buf_hdr; - int num, i; - intmax_t needed; - void *blk; - - /* Reject oversized allocation requests */ - if ((pool->s.flags.unsegmented && totsize > pool->s.seg_size) || - (!pool->s.flags.unsegmented && - totsize > pool->s.seg_size * ODP_BUFFER_MAX_SEG)) - return 0; + pool_t *pool; + int i; + ring_t *ring; + uint32_t mask; + pool_cache_t *cache; + uint32_t cache_num; + + cache = local.cache[pool_id]; + pool = pool_entry(pool_id); + ring = &pool->ring.hdr; + mask = pool->ring_mask; + + /* Special case of a very large free. Move directly to + * the global pool. */ + if (odp_unlikely(num > CONFIG_POOL_CACHE_SIZE)) { + for (i = 0; i < num; i++) + ring_enq(ring, mask, (uint32_t)(uintptr_t)buf[i]);
- /* Try to satisfy request from the local cache */ - num = get_local_bufs(local.cache[pool_id], buf_tbl, max_num); - - /* If cache is empty, satisfy request from the pool */ - if (odp_unlikely(num < max_num)) { - for (; num < max_num; num++) { - buf_hdr = get_buf(&pool->s); - - if (odp_unlikely(buf_hdr == NULL)) - goto pool_empty; - - /* Get blocks for this buffer, if pool uses - * application data */ - if (buf_hdr->size < totsize) { - uint32_t segcount; - - needed = totsize - buf_hdr->size; - do { - blk = get_blk(&pool->s); - if (odp_unlikely(blk == NULL)) { - ret_buf(&pool->s, buf_hdr); - goto pool_empty; - } - - segcount = buf_hdr->segcount++; - buf_hdr->addr[segcount] = blk; - needed -= pool->s.seg_size; - } while (needed > 0); - buf_hdr->size = buf_hdr->segcount * - pool->s.seg_size; - } - - buf_tbl[num] = buf_hdr; - } + return; }
-pool_empty: - for (i = 0; i < num; i++) { - buf_hdr = buf_tbl[i]; - - /* Mark buffer as allocated */ - buf_hdr->allocator = local.thr_id; - - /* By default, buffers are not associated with - * an ordered queue */ - buf_hdr->origin_qe = NULL; + /* Make room into local cache if needed. Do at least burst size + * transfer. */ + cache_num = cache->num;
- buf[i] = odp_hdr_to_buf(buf_hdr); + if (odp_unlikely((int)(CONFIG_POOL_CACHE_SIZE - cache_num) < num)) { + int burst = CACHE_BURST;
- /* Add more segments if buffer from local cache is too small */ - if (odp_unlikely(buf_hdr->size < totsize)) { - needed = totsize - buf_hdr->size; - do { - blk = get_blk(&pool->s); - if (odp_unlikely(blk == NULL)) { - int j; + if (odp_unlikely(num > CACHE_BURST)) + burst = num;
- ret_buf(&pool->s, buf_hdr); - buf_hdr = NULL; - local.cache[pool_id]->s.buffrees--; + for (i = 0; i < burst; i++) { + uint32_t data, index;
- /* move remaining bufs up one step - * and update loop counters */ - num--; - for (j = i; j < num; j++) - buf_tbl[j] = buf_tbl[j + 1]; - - i--; - break; - } - needed -= pool->s.seg_size; - buf_hdr->addr[buf_hdr->segcount++] = blk; - buf_hdr->size = buf_hdr->segcount * - pool->s.seg_size; - } while (needed > 0); + index = cache_num - burst + i; + data = (uint32_t)(uintptr_t)cache->buf[index]; + ring_enq(ring, mask, data); } + + cache_num -= burst; }
- return num; + for (i = 0; i < num; i++) + cache->buf[cache_num + i] = buf[i]; + + cache->num = cache_num + num; }
-odp_buffer_t buffer_alloc(odp_pool_t pool_hdl, size_t size) +void buffer_free_multi(const odp_buffer_t buf[], int num_total) { - uint32_t pool_id = pool_handle_to_index(pool_hdl); - pool_entry_t *pool = get_pool_entry(pool_id); - uintmax_t totsize = pool->s.headroom + size + pool->s.tailroom; - odp_buffer_hdr_t *buf_hdr; - intmax_t needed; - void *blk; + uint32_t pool_id; + int num; + int i; + int first = 0;
- /* Reject oversized allocation requests */ - if ((pool->s.flags.unsegmented && totsize > pool->s.seg_size) || - (!pool->s.flags.unsegmented && - totsize > pool->s.seg_size * ODP_BUFFER_MAX_SEG)) - return 0; + while (1) { + num = 1; + i = 1; + pool_id = pool_id_from_buf(buf[first]);
- /* Try to satisfy request from the local cache. If cache is empty, - * satisfy request from the pool */ - if (odp_unlikely(!get_local_bufs(local.cache[pool_id], &buf_hdr, 1))) { - buf_hdr = get_buf(&pool->s); - - if (odp_unlikely(buf_hdr == NULL)) - return ODP_BUFFER_INVALID; - - /* Get blocks for this buffer, if pool uses application data */ - if (buf_hdr->size < totsize) { - needed = totsize - buf_hdr->size; - do { - blk = get_blk(&pool->s); - if (odp_unlikely(blk == NULL)) { - ret_buf(&pool->s, buf_hdr); - return ODP_BUFFER_INVALID; - } - buf_hdr->addr[buf_hdr->segcount++] = blk; - needed -= pool->s.seg_size; - } while (needed > 0); - buf_hdr->size = buf_hdr->segcount * pool->s.seg_size; + /* 'num' buffers are from the same pool */ + if (num_total > 1) { + for (i = first; i < num_total; i++) + if (pool_id != pool_id_from_buf(buf[i])) + break; + + num = i - first; } - } - /* Mark buffer as allocated */ - buf_hdr->allocator = local.thr_id; - - /* By default, buffers are not associated with - * an ordered queue */ - buf_hdr->origin_qe = NULL; - - /* Add more segments if buffer from local cache is too small */ - if (odp_unlikely(buf_hdr->size < totsize)) { - needed = totsize - buf_hdr->size; - do { - blk = get_blk(&pool->s); - if (odp_unlikely(blk == NULL)) { - ret_buf(&pool->s, buf_hdr); - buf_hdr = NULL; - local.cache[pool_id]->s.buffrees--; - return ODP_BUFFER_INVALID; - } - buf_hdr->addr[buf_hdr->segcount++] = blk; - needed -= pool->s.seg_size; - } while (needed > 0); - buf_hdr->size = buf_hdr->segcount * pool->s.seg_size; - }
- return odp_hdr_to_buf(buf_hdr); + buffer_free_to_pool(pool_id, &buf[first], num); + + if (i == num_total) + return; + + first = i; + } }
odp_buffer_t odp_buffer_alloc(odp_pool_t pool_hdl) { - return buffer_alloc(pool_hdl, - odp_pool_to_entry(pool_hdl)->s.params.buf.size); + odp_buffer_t buf; + int ret; + + ret = buffer_alloc_multi(pool_hdl, &buf, 1); + + if (odp_likely(ret == 1)) + return buf; + + return ODP_BUFFER_INVALID; }
int odp_buffer_alloc_multi(odp_pool_t pool_hdl, odp_buffer_t buf[], int num) { - size_t buf_size = odp_pool_to_entry(pool_hdl)->s.params.buf.size; - - return buffer_alloc_multi(pool_hdl, buf_size, buf, num); + return buffer_alloc_multi(pool_hdl, buf, num); }
-static void multi_pool_free(odp_buffer_hdr_t *buf_hdr[], int num_buf) +void odp_buffer_free(odp_buffer_t buf) { - uint32_t pool_id, num; - local_cache_t *buf_cache; - pool_entry_t *pool; - int i, j, idx; - - for (i = 0; i < num_buf; i++) { - pool_id = pool_handle_to_index(buf_hdr[i]->pool_hdl); - buf_cache = local.cache[pool_id]; - num = buf_cache->s.num_buf; - - if (num < POOL_MAX_LOCAL_BUFS) { - ret_local_buf(buf_cache, num, buf_hdr[i]); - continue; - } - - idx = POOL_MAX_LOCAL_BUFS - POOL_CHUNK_SIZE; - pool = get_pool_entry(pool_id); - - /* local cache full, return a chunk */ - for (j = 0; j < POOL_CHUNK_SIZE; j++) { - odp_buffer_hdr_t *tmp; - - tmp = buf_cache->s.buf[idx + i]; - ret_buf(&pool->s, tmp); - } - - num = POOL_MAX_LOCAL_BUFS - POOL_CHUNK_SIZE; - buf_cache->s.num_buf = num; - ret_local_buf(buf_cache, num, buf_hdr[i]); - } + buffer_free_multi(&buf, 1); }
-void buffer_free_multi(uint32_t pool_id, - const odp_buffer_t buf[], int num_free) +void odp_buffer_free_multi(const odp_buffer_t buf[], int num) { - local_cache_t *buf_cache = local.cache[pool_id]; - uint32_t num; - int i, idx; - pool_entry_t *pool; - odp_buffer_hdr_t *buf_hdr[num_free]; - int multi_pool = 0; - - for (i = 0; i < num_free; i++) { - uint32_t id; - - buf_hdr[i] = odp_buf_to_hdr(buf[i]); - ODP_ASSERT(buf_hdr[i]->allocator != ODP_FREEBUF); - buf_hdr[i]->allocator = ODP_FREEBUF; - id = pool_handle_to_index(buf_hdr[i]->pool_hdl); - multi_pool |= (pool_id != id); - } - - if (odp_unlikely(multi_pool)) { - multi_pool_free(buf_hdr, num_free); - return; - } + buffer_free_multi(buf, num); +}
- num = buf_cache->s.num_buf; +int odp_pool_capability(odp_pool_capability_t *capa) +{ + uint32_t max_len = ODP_CONFIG_PACKET_SEG_LEN_MAX - + ODP_CONFIG_PACKET_HEADROOM - + ODP_CONFIG_PACKET_TAILROOM;
- if (odp_likely((num + num_free) < POOL_MAX_LOCAL_BUFS)) { - ret_local_bufs(buf_cache, num, buf_hdr, num_free); - return; - } + memset(capa, 0, sizeof(odp_pool_capability_t));
- pool = get_pool_entry(pool_id); + capa->max_pools = ODP_CONFIG_POOLS;
- /* Return at least one chunk into the global pool */ - if (odp_unlikely(num_free > POOL_CHUNK_SIZE)) { - for (i = 0; i < num_free; i++) - ret_buf(&pool->s, buf_hdr[i]); + /* Buffer pools */ + capa->buf.max_pools = ODP_CONFIG_POOLS; + capa->buf.max_align = ODP_CONFIG_BUFFER_ALIGN_MAX; + capa->buf.max_size = 0; + capa->buf.max_num = CONFIG_POOL_MAX_NUM;
- return; - } + /* Packet pools */ + capa->pkt.max_pools = ODP_CONFIG_POOLS; + capa->pkt.max_len = ODP_CONFIG_PACKET_MAX_SEGS * max_len; + capa->pkt.max_num = CONFIG_POOL_MAX_NUM; + capa->pkt.min_headroom = ODP_CONFIG_PACKET_HEADROOM; + capa->pkt.min_tailroom = ODP_CONFIG_PACKET_TAILROOM; + capa->pkt.max_segs_per_pkt = ODP_CONFIG_PACKET_MAX_SEGS; + capa->pkt.min_seg_len = max_len; + capa->pkt.max_seg_len = max_len; + capa->pkt.max_uarea_size = 0;
- idx = num - POOL_CHUNK_SIZE; - for (i = 0; i < POOL_CHUNK_SIZE; i++) - ret_buf(&pool->s, buf_cache->s.buf[idx + i]); + /* Timeout pools */ + capa->tmo.max_pools = ODP_CONFIG_POOLS; + capa->tmo.max_num = CONFIG_POOL_MAX_NUM;
- num -= POOL_CHUNK_SIZE; - buf_cache->s.num_buf = num; - ret_local_bufs(buf_cache, num, buf_hdr, num_free); + return 0; }
-void buffer_free(uint32_t pool_id, const odp_buffer_t buf) +void odp_pool_print(odp_pool_t pool_hdl) { - local_cache_t *buf_cache = local.cache[pool_id]; - uint32_t num; - int i; - pool_entry_t *pool; - odp_buffer_hdr_t *buf_hdr; + pool_t *pool;
- buf_hdr = odp_buf_to_hdr(buf); - ODP_ASSERT(buf_hdr->allocator != ODP_FREEBUF); - buf_hdr->allocator = ODP_FREEBUF; - - num = buf_cache->s.num_buf; - - if (odp_likely((num + 1) < POOL_MAX_LOCAL_BUFS)) { - ret_local_bufs(buf_cache, num, &buf_hdr, 1); - return; - } + pool = pool_entry_from_hdl(pool_hdl);
- pool = get_pool_entry(pool_id); - - num -= POOL_CHUNK_SIZE; - for (i = 0; i < POOL_CHUNK_SIZE; i++) - ret_buf(&pool->s, buf_cache->s.buf[num + i]); - - buf_cache->s.num_buf = num; - ret_local_bufs(buf_cache, num, &buf_hdr, 1); + printf("Pool info\n"); + printf("---------\n"); + printf(" pool %" PRIu64 "\n", + odp_pool_to_u64(pool->pool_hdl)); + printf(" name %s\n", pool->name); + printf(" pool type %s\n", + pool->params.type == ODP_POOL_BUFFER ? "buffer" : + (pool->params.type == ODP_POOL_PACKET ? "packet" : + (pool->params.type == ODP_POOL_TIMEOUT ? "timeout" : + "unknown"))); + printf(" pool shm %" PRIu64 "\n", + odp_shm_to_u64(pool->shm)); + printf(" user area shm %" PRIu64 "\n", + odp_shm_to_u64(pool->uarea_shm)); + printf(" num %u\n", pool->num); + printf(" align %u\n", pool->align); + printf(" headroom %u\n", pool->headroom); + printf(" data size %u\n", pool->data_size); + printf(" max data len %u\n", pool->max_len); + printf(" max seg len %u\n", pool->max_seg_len); + printf(" tailroom %u\n", pool->tailroom); + printf(" block size %u\n", pool->block_size); + printf(" uarea size %u\n", pool->uarea_size); + printf(" shm size %u\n", pool->shm_size); + printf(" base addr %p\n", pool->base_addr); + printf(" uarea shm size %u\n", pool->uarea_shm_size); + printf(" uarea base addr %p\n", pool->uarea_base_addr); + printf("\n"); }
-void odp_buffer_free(odp_buffer_t buf) +odp_pool_t odp_buffer_pool(odp_buffer_t buf) { uint32_t pool_id = pool_id_from_buf(buf);
- buffer_free(pool_id, buf); + return pool_index_to_handle(pool_id); }
-void odp_buffer_free_multi(const odp_buffer_t buf[], int num) +void odp_pool_param_init(odp_pool_param_t *params) { - uint32_t pool_id = pool_id_from_buf(buf[0]); + memset(params, 0, sizeof(odp_pool_param_t)); +}
- buffer_free_multi(pool_id, buf, num); +uint64_t odp_pool_to_u64(odp_pool_t hdl) +{ + return _odp_pri(hdl); }
-void odp_pool_print(odp_pool_t pool_hdl) +int seg_alloc_head(odp_buffer_hdr_t *buf_hdr, int segcount) { - pool_entry_t *pool; - uint32_t pool_id; + (void)buf_hdr; + (void)segcount; + return 0; +}
- pool_id = pool_handle_to_index(pool_hdl); - pool = get_pool_entry(pool_id); - - uint32_t bufcount = odp_atomic_load_u32(&pool->s.bufcount); - uint32_t blkcount = odp_atomic_load_u32(&pool->s.blkcount); - uint64_t bufallocs = odp_atomic_load_u64(&pool->s.poolstats.bufallocs); - uint64_t buffrees = odp_atomic_load_u64(&pool->s.poolstats.buffrees); - uint64_t blkallocs = odp_atomic_load_u64(&pool->s.poolstats.blkallocs); - uint64_t blkfrees = odp_atomic_load_u64(&pool->s.poolstats.blkfrees); - uint64_t bufempty = odp_atomic_load_u64(&pool->s.poolstats.bufempty); - uint64_t blkempty = odp_atomic_load_u64(&pool->s.poolstats.blkempty); - uint64_t bufhiwmct = - odp_atomic_load_u64(&pool->s.poolstats.buf_high_wm_count); - uint64_t buflowmct = - odp_atomic_load_u64(&pool->s.poolstats.buf_low_wm_count); - uint64_t blkhiwmct = - odp_atomic_load_u64(&pool->s.poolstats.blk_high_wm_count); - uint64_t blklowmct = - odp_atomic_load_u64(&pool->s.poolstats.blk_low_wm_count); - - ODP_DBG("Pool info\n"); - ODP_DBG("---------\n"); - ODP_DBG(" pool %" PRIu64 "\n", - odp_pool_to_u64(pool->s.pool_hdl)); - ODP_DBG(" name %s\n", - pool->s.flags.has_name ? pool->s.name : "Unnamed Pool"); - ODP_DBG(" pool type %s\n", - pool->s.params.type == ODP_POOL_BUFFER ? "buffer" : - (pool->s.params.type == ODP_POOL_PACKET ? "packet" : - (pool->s.params.type == ODP_POOL_TIMEOUT ? "timeout" : - "unknown"))); - ODP_DBG(" pool storage ODP managed shm handle %" PRIu64 "\n", - odp_shm_to_u64(pool->s.pool_shm)); - ODP_DBG(" pool status %s\n", - pool->s.quiesced ? "quiesced" : "active"); - ODP_DBG(" pool opts %s, %s\n", - pool->s.flags.unsegmented ? "unsegmented" : "segmented", - pool->s.flags.predefined ? "predefined" : "created"); - ODP_DBG(" pool base %p\n", pool->s.pool_base_addr); - ODP_DBG(" pool size %zu (%zu pages)\n", - pool->s.pool_size, pool->s.pool_size / ODP_PAGE_SIZE); - ODP_DBG(" pool mdata base %p\n", pool->s.pool_mdata_addr); - ODP_DBG(" udata size %zu\n", pool->s.udata_size); - ODP_DBG(" headroom %u\n", pool->s.headroom); - ODP_DBG(" tailroom %u\n", pool->s.tailroom); - if (pool->s.params.type == ODP_POOL_BUFFER) { - ODP_DBG(" buf size %zu\n", pool->s.params.buf.size); - ODP_DBG(" buf align %u requested, %u used\n", - pool->s.params.buf.align, pool->s.buf_align); - } else if (pool->s.params.type == ODP_POOL_PACKET) { - ODP_DBG(" seg length %u requested, %u used\n", - pool->s.params.pkt.seg_len, pool->s.seg_size); - ODP_DBG(" pkt length %u requested, %u used\n", - pool->s.params.pkt.len, pool->s.blk_size); - } - ODP_DBG(" num bufs %u\n", pool->s.buf_num); - ODP_DBG(" bufs available %u %s\n", bufcount, - pool->s.buf_low_wm_assert ? " **buf low wm asserted**" : ""); - ODP_DBG(" bufs in use %u\n", pool->s.buf_num - bufcount); - ODP_DBG(" buf allocs %lu\n", bufallocs); - ODP_DBG(" buf frees %lu\n", buffrees); - ODP_DBG(" buf empty %lu\n", bufempty); - ODP_DBG(" blk size %zu\n", - pool->s.seg_size > ODP_MAX_INLINE_BUF ? pool->s.seg_size : 0); - ODP_DBG(" blks available %u %s\n", blkcount, - pool->s.blk_low_wm_assert ? 
" **blk low wm asserted**" : ""); - ODP_DBG(" blk allocs %lu\n", blkallocs); - ODP_DBG(" blk frees %lu\n", blkfrees); - ODP_DBG(" blk empty %lu\n", blkempty); - ODP_DBG(" buf high wm value %lu\n", pool->s.buf_high_wm); - ODP_DBG(" buf high wm count %lu\n", bufhiwmct); - ODP_DBG(" buf low wm value %lu\n", pool->s.buf_low_wm); - ODP_DBG(" buf low wm count %lu\n", buflowmct); - ODP_DBG(" blk high wm value %lu\n", pool->s.blk_high_wm); - ODP_DBG(" blk high wm count %lu\n", blkhiwmct); - ODP_DBG(" blk low wm value %lu\n", pool->s.blk_low_wm); - ODP_DBG(" blk low wm count %lu\n", blklowmct); +void seg_free_head(odp_buffer_hdr_t *buf_hdr, int segcount) +{ + (void)buf_hdr; + (void)segcount; }
-odp_pool_t odp_buffer_pool(odp_buffer_t buf) +int seg_alloc_tail(odp_buffer_hdr_t *buf_hdr, int segcount) { - uint32_t pool_id = pool_id_from_buf(buf); + (void)buf_hdr; + (void)segcount; + return 0; +}
- return pool_index_to_handle(pool_id); +void seg_free_tail(odp_buffer_hdr_t *buf_hdr, int segcount) +{ + (void)buf_hdr; + (void)segcount; }
-void odp_pool_param_init(odp_pool_param_t *params) +int odp_buffer_is_valid(odp_buffer_t buf) { - memset(params, 0, sizeof(odp_pool_param_t)); + odp_buffer_bits_t handle; + pool_t *pool; + + handle.handle = buf; + + if (handle.pool_id >= ODP_CONFIG_POOLS) + return 0; + + pool = pool_entry(handle.pool_id); + + if (pool->reserved == 0) + return 0; + + return 1; } diff --git a/platform/linux-generic/odp_timer.c b/platform/linux-generic/odp_timer.c index 86fb4c1..90ff1fe 100644 --- a/platform/linux-generic/odp_timer.c +++ b/platform/linux-generic/odp_timer.c @@ -29,6 +29,7 @@ #include <unistd.h> #include <sys/syscall.h> #include <inttypes.h> +#include <string.h>
#include <odp/api/align.h> #include <odp_align_internal.h> diff --git a/platform/linux-generic/pktio/socket.c b/platform/linux-generic/pktio/socket.c index e01b0a5..ab25aab 100644 --- a/platform/linux-generic/pktio/socket.c +++ b/platform/linux-generic/pktio/socket.c @@ -46,6 +46,8 @@ #include <protocols/eth.h> #include <protocols/ip.h>
+#define MAX_SEGS ODP_CONFIG_PACKET_MAX_SEGS + static int disable_pktio; /** !0 this pktio disabled, 0 enabled */
static int sock_stats_reset(pktio_entry_t *pktio_entry); @@ -583,20 +585,18 @@ static int sock_mmsg_open(odp_pktio_t id ODP_UNUSED, }
static uint32_t _rx_pkt_to_iovec(odp_packet_t pkt, - struct iovec iovecs[ODP_BUFFER_MAX_SEG]) + struct iovec iovecs[MAX_SEGS]) { odp_packet_seg_t seg = odp_packet_first_seg(pkt); uint32_t seg_count = odp_packet_num_segs(pkt); uint32_t seg_id = 0; uint32_t iov_count = 0; - odp_packet_hdr_t *pkt_hdr = odp_packet_hdr(pkt); uint8_t *ptr; uint32_t seglen;
for (seg_id = 0; seg_id < seg_count; ++seg_id) { - ptr = segment_map(&pkt_hdr->buf_hdr, (odp_buffer_seg_t)seg, - &seglen, pkt_hdr->frame_len, - pkt_hdr->headroom); + ptr = odp_packet_seg_data(pkt, seg); + seglen = odp_packet_seg_data_len(pkt, seg);
if (ptr) { iovecs[iov_count].iov_base = ptr; @@ -692,7 +692,7 @@ static int sock_mmsg_recv(pktio_entry_t *pktio_entry, int index ODP_UNUSED, } } else { struct iovec iovecs[ODP_PACKET_SOCKET_MAX_BURST_RX] - [ODP_BUFFER_MAX_SEG]; + [MAX_SEGS];
for (i = 0; i < (int)len; i++) { int num; @@ -754,7 +754,7 @@ static int sock_mmsg_recv(pktio_entry_t *pktio_entry, int index ODP_UNUSED, }
static uint32_t _tx_pkt_to_iovec(odp_packet_t pkt, - struct iovec iovecs[ODP_BUFFER_MAX_SEG]) + struct iovec iovecs[MAX_SEGS]) { uint32_t pkt_len = odp_packet_len(pkt); uint32_t offset = odp_packet_l2_offset(pkt); @@ -780,7 +780,7 @@ static int sock_mmsg_send(pktio_entry_t *pktio_entry, int index ODP_UNUSED, { pkt_sock_t *pkt_sock = &pktio_entry->s.pkt_sock; struct mmsghdr msgvec[ODP_PACKET_SOCKET_MAX_BURST_TX]; - struct iovec iovecs[ODP_PACKET_SOCKET_MAX_BURST_TX][ODP_BUFFER_MAX_SEG]; + struct iovec iovecs[ODP_PACKET_SOCKET_MAX_BURST_TX][MAX_SEGS]; int ret; int sockfd; int n, i; diff --git a/platform/linux-generic/pktio/socket_mmap.c b/platform/linux-generic/pktio/socket_mmap.c index 9655668..bf4402a 100644 --- a/platform/linux-generic/pktio/socket_mmap.c +++ b/platform/linux-generic/pktio/socket_mmap.c @@ -346,17 +346,15 @@ static inline unsigned pkt_mmap_v2_tx(int sock, struct ring *ring, static void mmap_fill_ring(struct ring *ring, odp_pool_t pool_hdl, int fanout) { int pz = getpagesize(); - uint32_t pool_id; - pool_entry_t *pool_entry; + pool_t *pool;
if (pool_hdl == ODP_POOL_INVALID) ODP_ABORT("Invalid pool handle\n");
- pool_id = pool_handle_to_index(pool_hdl); - pool_entry = get_pool_entry(pool_id); + pool = odp_pool_to_entry(pool_hdl);
/* Frame has to capture full packet which can fit to the pool block.*/ - ring->req.tp_frame_size = (pool_entry->s.blk_size + + ring->req.tp_frame_size = (pool->data_size + TPACKET_HDRLEN + TPACKET_ALIGNMENT + + (pz - 1)) & (-pz);
@@ -364,7 +362,7 @@ static void mmap_fill_ring(struct ring *ring, odp_pool_t pool_hdl, int fanout) * and align size to page boundary. */ ring->req.tp_block_size = (ring->req.tp_frame_size * - pool_entry->s.buf_num + (pz - 1)) & (-pz); + pool->num + (pz - 1)) & (-pz);
if (!fanout) { /* Single socket is in use. Use 1 block with buf_num frames. */ diff --git a/test/common_plat/performance/odp_pktio_perf.c b/test/common_plat/performance/odp_pktio_perf.c index 84ab779..92d979d 100644 --- a/test/common_plat/performance/odp_pktio_perf.c +++ b/test/common_plat/performance/odp_pktio_perf.c @@ -36,7 +36,7 @@
#define TEST_SKIP 77
-#define PKT_BUF_NUM 8192 +#define PKT_BUF_NUM (32 * 1024) #define MAX_NUM_IFACES 2 #define TEST_HDR_MAGIC 0x92749451 #define MAX_WORKERS 32 diff --git a/test/common_plat/performance/odp_scheduling.c b/test/common_plat/performance/odp_scheduling.c index 9407636..e2a49d3 100644 --- a/test/common_plat/performance/odp_scheduling.c +++ b/test/common_plat/performance/odp_scheduling.c @@ -28,7 +28,7 @@ /* GNU lib C */ #include <getopt.h>
-#define MSG_POOL_SIZE (4 * 1024 * 1024) /**< Message pool size */ +#define NUM_MSG (512 * 1024) /**< Number of msg in pool */ #define MAX_ALLOCS 32 /**< Alloc burst size */ #define QUEUES_PER_PRIO 64 /**< Queue per priority */ #define NUM_PRIOS 2 /**< Number of tested priorities */ @@ -868,7 +868,7 @@ int main(int argc, char *argv[]) odp_pool_param_init(&params); params.buf.size = sizeof(test_message_t); params.buf.align = 0; - params.buf.num = MSG_POOL_SIZE / sizeof(test_message_t); + params.buf.num = NUM_MSG; params.type = ODP_POOL_BUFFER;
pool = odp_pool_create("msg_pool", &params); @@ -880,8 +880,6 @@ int main(int argc, char *argv[])
globals->pool = pool;
- /* odp_pool_print(pool); */ - /* * Create a queue for plain queue test */ @@ -940,6 +938,8 @@ int main(int argc, char *argv[])
odp_shm_print_all();
+ odp_pool_print(pool); + /* Barrier to sync test case execution */ odp_barrier_init(&globals->barrier, num_workers);
diff --git a/test/common_plat/validation/api/packet/packet.c b/test/common_plat/validation/api/packet/packet.c index c75cde9..87a0662 100644 --- a/test/common_plat/validation/api/packet/packet.c +++ b/test/common_plat/validation/api/packet/packet.c @@ -66,7 +66,12 @@ int packet_suite_init(void) if (odp_pool_capability(&capa) < 0) return -1;
- packet_len = capa.pkt.min_seg_len - PACKET_TAILROOM_RESERVE; + /* Pick a typical packet size and decrement it to the single segment + * limit if needed (min_seg_len may be equal to max_len + * on some systems). */ + packet_len = 512; + while (packet_len > (capa.pkt.min_seg_len - PACKET_TAILROOM_RESERVE)) + packet_len--;
if (capa.pkt.max_len) { segmented_packet_len = capa.pkt.max_len; @@ -137,6 +142,7 @@ int packet_suite_init(void) udat_size = odp_packet_user_area_size(test_packet); if (!udat || udat_size != sizeof(struct udata_struct)) return -1; + odp_pool_print(packet_pool); memcpy(udat, &test_packet_udata, sizeof(struct udata_struct));
commit 123327606c2dd95a6a85c80e74ad172932195631 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:26 2016 +0200
linux-gen: align: added round up power of two
Added a macro to round up a value to the next power of two, if it's not already a power of two. Also removed duplicated code from the same file.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
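As a quick illustration of the new macro (not part of the patch below), assuming the internal header is on the include path:

#include <odp_align_internal.h>
#include <stdio.h>

int main(void)
{
	/* Rounds up to the next power of two; values that are already a
	 * power of two are returned unchanged. Zero is not a valid input. */
	printf("%d\n", ODP_ROUNDUP_POWER_2(5));    /* prints 8 */
	printf("%d\n", ODP_ROUNDUP_POWER_2(8));    /* prints 8 */
	printf("%d\n", ODP_ROUNDUP_POWER_2(1000)); /* prints 1024 */

	return 0;
}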
diff --git a/platform/linux-generic/include/odp_align_internal.h b/platform/linux-generic/include/odp_align_internal.h index 9ccde53..d9cd30b 100644 --- a/platform/linux-generic/include/odp_align_internal.h +++ b/platform/linux-generic/include/odp_align_internal.h @@ -29,24 +29,18 @@ extern "C" {
/** * @internal - * Round up pointer 'x' to alignment 'align' - */ -#define ODP_ALIGN_ROUNDUP_PTR(x, align)\ - ((void *)ODP_ALIGN_ROUNDUP((uintptr_t)(x), (uintptr_t)(align))) - -/** - * @internal - * Round up pointer 'x' to cache line size alignment + * Round up 'x' to alignment 'align' */ -#define ODP_CACHE_LINE_SIZE_ROUNDUP_PTR(x)\ - ((void *)ODP_CACHE_LINE_SIZE_ROUNDUP((uintptr_t)(x))) +#define ODP_ALIGN_ROUNDUP(x, align)\ + ((align) * (((x) + (align) - 1) / (align)))
/** * @internal - * Round up 'x' to alignment 'align' + * When 'x' is not already a power of two, round it up to the next + * power of two value. Zero is not supported as an input value. */ -#define ODP_ALIGN_ROUNDUP(x, align)\ - ((align) * (((x) + align - 1) / (align))) +#define ODP_ROUNDUP_POWER_2(x)\ + (1 << (((int)(8 * sizeof(x))) - __builtin_clz((x) - 1)))
/** * @internal @@ -82,20 +76,6 @@ extern "C" {
/** * @internal - * Round down pointer 'x' to 'align' alignment, which is a power of two - */ -#define ODP_ALIGN_ROUNDDOWN_PTR_POWER_2(x, align)\ -((void *)ODP_ALIGN_ROUNDDOWN_POWER_2((uintptr_t)(x), (uintptr_t)(align))) - -/** - * @internal - * Round down pointer 'x' to cache line size alignment - */ -#define ODP_CACHE_LINE_SIZE_ROUNDDOWN_PTR(x)\ - ((void *)ODP_CACHE_LINE_SIZE_ROUNDDOWN((uintptr_t)(x))) - -/** - * @internal * Round down 'x' to 'align' alignment, which is a power of two */ #define ODP_ALIGN_ROUNDDOWN_POWER_2(x, align)\
commit 936ce9f30a85285f70e26038eb5ea8637622fea2 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:25 2016 +0200
linux-gen: ring: created common ring implementation
Moved the scheduler ring code into a new header file so that it can also be used in other parts of the implementation.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
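A minimal usage sketch of the shared ring (not part of the patch below); the ring size, the stored value, and the wrapper struct are assumptions made for illustration, mirroring how the pool code places the ring header in front of its data array:

#include <stdint.h>
#include <odp/api/align.h>
#include <odp_ring_internal.h>

#define MY_RING_SIZE 256                 /* must be a power of two */
#define MY_RING_MASK (MY_RING_SIZE - 1)

/* Ring header followed by its data array */
static struct {
	ring_t   hdr;
	uint32_t data[MY_RING_SIZE];
} my_ring ODP_ALIGNED_CACHE;

static uint32_t ring_example(void)
{
	uint32_t val;

	ring_init(&my_ring.hdr);

	/* Producer side: enqueue any value except RING_EMPTY */
	ring_enq(&my_ring.hdr, MY_RING_MASK, 42);

	/* Consumer side: RING_EMPTY is returned when the ring is empty */
	val = ring_deq(&my_ring.hdr, MY_RING_MASK);

	return val; /* 42 */
}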
diff --git a/platform/linux-generic/Makefile.am b/platform/linux-generic/Makefile.am index 3e29f54..ed5088a 100644 --- a/platform/linux-generic/Makefile.am +++ b/platform/linux-generic/Makefile.am @@ -131,6 +131,7 @@ noinst_HEADERS = \ ${srcdir}/include/odp_pool_internal.h \ ${srcdir}/include/odp_posix_extensions.h \ ${srcdir}/include/odp_queue_internal.h \ + ${srcdir}/include/odp_ring_internal.h \ ${srcdir}/include/odp_schedule_if.h \ ${srcdir}/include/odp_schedule_internal.h \ ${srcdir}/include/odp_schedule_ordered_internal.h \ diff --git a/platform/linux-generic/include/odp_ring_internal.h b/platform/linux-generic/include/odp_ring_internal.h new file mode 100644 index 0000000..6a6291a --- /dev/null +++ b/platform/linux-generic/include/odp_ring_internal.h @@ -0,0 +1,111 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#ifndef ODP_RING_INTERNAL_H_ +#define ODP_RING_INTERNAL_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <odp/api/atomic.h> +#include <odp/api/hints.h> +#include <odp_align_internal.h> + +/* Ring empty, not a valid data value. */ +#define RING_EMPTY ((uint32_t)-1) + +/* Ring of uint32_t data + * + * Ring stores head and tail counters. Ring indexes are formed from these + * counters with a mask (mask = ring_size - 1), which requires that ring size + * must be a power of two. Also ring size must be larger than the maximum + * number of data items that will be stored on it (there's no check against + * overwriting). */ +typedef struct { + /* Writer head and tail */ + odp_atomic_u32_t w_head; + odp_atomic_u32_t w_tail; + uint8_t pad[ODP_CACHE_LINE_SIZE - (2 * sizeof(odp_atomic_u32_t))]; + + /* Reader head and tail */ + odp_atomic_u32_t r_head; + odp_atomic_u32_t r_tail; + + uint32_t data[0]; +} ring_t ODP_ALIGNED_CACHE; + +/* Initialize ring */ +static inline void ring_init(ring_t *ring) +{ + odp_atomic_init_u32(&ring->w_head, 0); + odp_atomic_init_u32(&ring->w_tail, 0); + odp_atomic_init_u32(&ring->r_head, 0); + odp_atomic_init_u32(&ring->r_tail, 0); +} + +/* Dequeue data from the ring head */ +static inline uint32_t ring_deq(ring_t *ring, uint32_t mask) +{ + uint32_t head, tail, new_head; + uint32_t data; + + head = odp_atomic_load_u32(&ring->r_head); + + /* Move reader head. This thread owns data at the new head. */ + do { + tail = odp_atomic_load_u32(&ring->w_tail); + + if (head == tail) + return RING_EMPTY; + + new_head = head + 1; + + } while (odp_unlikely(odp_atomic_cas_acq_u32(&ring->r_head, &head, + new_head) == 0)); + + /* Read queue index */ + data = ring->data[new_head & mask]; + + /* Wait until other readers have updated the tail */ + while (odp_unlikely(odp_atomic_load_acq_u32(&ring->r_tail) != head)) + odp_cpu_pause(); + + /* Now update the reader tail */ + odp_atomic_store_rel_u32(&ring->r_tail, new_head); + + return data; +} + +/* Enqueue data into the ring tail */ +static inline void ring_enq(ring_t *ring, uint32_t mask, uint32_t data) +{ + uint32_t old_head, new_head; + + /* Reserve a slot in the ring for writing */ + old_head = odp_atomic_fetch_inc_u32(&ring->w_head); + new_head = old_head + 1; + + /* Ring is full. Wait for the last reader to finish. 
*/ + while (odp_unlikely(odp_atomic_load_acq_u32(&ring->r_tail) == new_head)) + odp_cpu_pause(); + + /* Write data */ + ring->data[new_head & mask] = data; + + /* Wait until other writers have updated the tail */ + while (odp_unlikely(odp_atomic_load_acq_u32(&ring->w_tail) != old_head)) + odp_cpu_pause(); + + /* Now update the writer tail */ + odp_atomic_store_rel_u32(&ring->w_tail, new_head); +} + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c index 86b1cec..dfc9555 100644 --- a/platform/linux-generic/odp_schedule.c +++ b/platform/linux-generic/odp_schedule.c @@ -17,12 +17,12 @@ #include <odp/api/hints.h> #include <odp/api/cpu.h> #include <odp/api/thrmask.h> -#include <odp/api/atomic.h> #include <odp_config_internal.h> #include <odp_align_internal.h> #include <odp_schedule_internal.h> #include <odp_schedule_ordered_internal.h> #include <odp/api/sync.h> +#include <odp_ring_internal.h>
/* Number of priority levels */ #define NUM_PRIO 8 @@ -82,9 +82,6 @@ ODP_STATIC_ASSERT((ODP_SCHED_PRIO_NORMAL > 0) && /* Priority queue empty, not a valid queue index. */ #define PRIO_QUEUE_EMPTY ((uint32_t)-1)
-/* Ring empty, not a valid index. */ -#define RING_EMPTY ((uint32_t)-1) - /* For best performance, the number of queues should be a power of two. */ ODP_STATIC_ASSERT(ODP_VAL_IS_POWER_2(ODP_CONFIG_QUEUES), "Number_of_queues_is_not_power_of_two"); @@ -111,28 +108,10 @@ ODP_STATIC_ASSERT((8 * sizeof(pri_mask_t)) >= QUEUES_PER_PRIO, /* Start of named groups in group mask arrays */ #define SCHED_GROUP_NAMED (ODP_SCHED_GROUP_CONTROL + 1)
-/* Scheduler ring - * - * Ring stores head and tail counters. Ring indexes are formed from these - * counters with a mask (mask = ring_size - 1), which requires that ring size - * must be a power of two. */ -typedef struct { - /* Writer head and tail */ - odp_atomic_u32_t w_head; - odp_atomic_u32_t w_tail; - uint8_t pad[ODP_CACHE_LINE_SIZE - (2 * sizeof(odp_atomic_u32_t))]; - - /* Reader head and tail */ - odp_atomic_u32_t r_head; - odp_atomic_u32_t r_tail; - - uint32_t data[0]; -} sched_ring_t ODP_ALIGNED_CACHE; - /* Priority queue */ typedef struct { /* Ring header */ - sched_ring_t ring; + ring_t ring;
/* Ring data: queue indexes */ uint32_t queue_index[PRIO_QUEUE_RING_SIZE]; @@ -142,7 +121,7 @@ typedef struct { /* Packet IO queue */ typedef struct { /* Ring header */ - sched_ring_t ring; + ring_t ring;
/* Ring data: pktio poll command indexes */ uint32_t cmd_index[PKTIO_RING_SIZE]; @@ -205,71 +184,6 @@ __thread sched_local_t sched_local; /* Function prototypes */ static inline void schedule_release_context(void);
-static void ring_init(sched_ring_t *ring) -{ - odp_atomic_init_u32(&ring->w_head, 0); - odp_atomic_init_u32(&ring->w_tail, 0); - odp_atomic_init_u32(&ring->r_head, 0); - odp_atomic_init_u32(&ring->r_tail, 0); -} - -/* Dequeue data from the ring head */ -static inline uint32_t ring_deq(sched_ring_t *ring, uint32_t mask) -{ - uint32_t head, tail, new_head; - uint32_t data; - - head = odp_atomic_load_u32(&ring->r_head); - - /* Move reader head. This thread owns data at the new head. */ - do { - tail = odp_atomic_load_u32(&ring->w_tail); - - if (head == tail) - return RING_EMPTY; - - new_head = head + 1; - - } while (odp_unlikely(odp_atomic_cas_acq_u32(&ring->r_head, &head, - new_head) == 0)); - - /* Read queue index */ - data = ring->data[new_head & mask]; - - /* Wait until other readers have updated the tail */ - while (odp_unlikely(odp_atomic_load_acq_u32(&ring->r_tail) != head)) - odp_cpu_pause(); - - /* Now update the reader tail */ - odp_atomic_store_rel_u32(&ring->r_tail, new_head); - - return data; -} - -/* Enqueue data into the ring tail */ -static inline void ring_enq(sched_ring_t *ring, uint32_t mask, uint32_t data) -{ - uint32_t old_head, new_head; - - /* Reserve a slot in the ring for writing */ - old_head = odp_atomic_fetch_inc_u32(&ring->w_head); - new_head = old_head + 1; - - /* Ring is full. Wait for the last reader to finish. */ - while (odp_unlikely(odp_atomic_load_acq_u32(&ring->r_tail) == new_head)) - odp_cpu_pause(); - - /* Write data */ - ring->data[new_head & mask] = data; - - /* Wait until other writers have updated the tail */ - while (odp_unlikely(odp_atomic_load_acq_u32(&ring->w_tail) != old_head)) - odp_cpu_pause(); - - /* Now update the writer tail */ - odp_atomic_store_rel_u32(&ring->w_tail, new_head); -} - static void sched_local_init(void) { memset(&sched_local, 0, sizeof(sched_local_t)); @@ -347,7 +261,7 @@ static int schedule_term_global(void)
for (i = 0; i < NUM_PRIO; i++) { for (j = 0; j < QUEUES_PER_PRIO; j++) { - sched_ring_t *ring = &sched->prio_q[i][j].ring; + ring_t *ring = &sched->prio_q[i][j].ring; uint32_t qi;
while ((qi = ring_deq(ring, PRIO_QUEUE_MASK)) != @@ -541,7 +455,7 @@ static void schedule_release_atomic(void) if (qi != PRIO_QUEUE_EMPTY && sched_local.num == 0) { int prio = sched->queue[qi].prio; int queue_per_prio = sched->queue[qi].queue_per_prio; - sched_ring_t *ring = &sched->prio_q[prio][queue_per_prio].ring; + ring_t *ring = &sched->prio_q[prio][queue_per_prio].ring;
/* Release current atomic queue */ ring_enq(ring, PRIO_QUEUE_MASK, qi); @@ -636,7 +550,7 @@ static int do_schedule(odp_queue_t *out_queue, odp_event_t out_ev[], int grp; int ordered; odp_queue_t handle; - sched_ring_t *ring; + ring_t *ring;
if (id >= QUEUES_PER_PRIO) id = 0; @@ -747,7 +661,7 @@ static int do_schedule(odp_queue_t *out_queue, odp_event_t out_ev[],
for (i = 0; i < PKTIO_CMD_QUEUES; i++, id = ((id + 1) & PKTIO_CMD_QUEUE_MASK)) { - sched_ring_t *ring; + ring_t *ring; uint32_t cmd_index; pktio_cmd_t *cmd;
@@ -1051,7 +965,7 @@ static int schedule_sched_queue(uint32_t queue_index) { int prio = sched->queue[queue_index].prio; int queue_per_prio = sched->queue[queue_index].queue_per_prio; - sched_ring_t *ring = &sched->prio_q[prio][queue_per_prio].ring; + ring_t *ring = &sched->prio_q[prio][queue_per_prio].ring;
sched_local.ignore_ordered_context = 1;
commit ac2d233f2bcfb8d70fa005ae8fa19cce41b4a238 Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:24 2016 +0200
linux-gen: pktio: do not free zero packets
In some error cases, the netmap and dpdk pktios were calling odp_packet_free_multi() with zero packets. Moved the existing error check so that free is not called with zero packets.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/pktio/dpdk.c b/platform/linux-generic/pktio/dpdk.c index 11f3509..0eb025a 100644 --- a/platform/linux-generic/pktio/dpdk.c +++ b/platform/linux-generic/pktio/dpdk.c @@ -956,10 +956,12 @@ static int dpdk_send(pktio_entry_t *pktio_entry, int index, rte_pktmbuf_free(tx_mbufs[i]); }
- odp_packet_free_multi(pkt_table, tx_pkts); - - if (odp_unlikely(tx_pkts == 0 && __odp_errno != 0)) - return -1; + if (odp_unlikely(tx_pkts == 0)) { + if (__odp_errno != 0) + return -1; + } else { + odp_packet_free_multi(pkt_table, tx_pkts); + }
return tx_pkts; } diff --git a/platform/linux-generic/pktio/netmap.c b/platform/linux-generic/pktio/netmap.c index 412beec..c1cdf72 100644 --- a/platform/linux-generic/pktio/netmap.c +++ b/platform/linux-generic/pktio/netmap.c @@ -830,10 +830,12 @@ static int netmap_send(pktio_entry_t *pktio_entry, int index, if (!pkt_nm->lockless_tx) odp_ticketlock_unlock(&pkt_nm->tx_desc_ring[index].s.lock);
- odp_packet_free_multi(pkt_table, nb_tx); - - if (odp_unlikely(nb_tx == 0 && __odp_errno != 0)) - return -1; + if (odp_unlikely(nb_tx == 0)) { + if (__odp_errno != 0) + return -1; + } else { + odp_packet_free_multi(pkt_table, nb_tx); + }
return nb_tx; }
commit 97f2672e48ea08abd00ed7f3b1fd8420a217779c Author: Petri Savolainen petri.savolainen@nokia.com Date: Mon Nov 21 16:53:23 2016 +0200
linux-gen: ipc: disable build of ipc pktio
The IPC pktio implementation depends heavily on pool internals. Its build is disabled due to the pool re-implementation. IPC should be re-implemented with a cleaner internal interface towards pool and shm.
Signed-off-by: Petri Savolainen petri.savolainen@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/pktio/ipc.c b/platform/linux-generic/pktio/ipc.c index c1f28db..0e99c6e 100644 --- a/platform/linux-generic/pktio/ipc.c +++ b/platform/linux-generic/pktio/ipc.c @@ -3,7 +3,7 @@ * * SPDX-License-Identifier: BSD-3-Clause */ - +#ifdef _ODP_PKTIO_IPC #include <odp_packet_io_ipc_internal.h> #include <odp_debug_internal.h> #include <odp_packet_io_internal.h> @@ -795,3 +795,4 @@ const pktio_if_ops_t ipc_pktio_ops = { .pktin_ts_from_ns = NULL, .config = NULL }; +#endif
commit d9615289bf6f60bd3a04f8138d5b487efda96a49 Author: Matias Elo matias.elo@nokia.com Date: Fri Oct 14 11:49:12 2016 +0300
linux-gen: timer: fix creating timer pool with no name
Previously, trying to create a timer pool with no name (=NULL) caused a segfault.
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
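For illustration (not part of the patch below), the call pattern that previously crashed; the parameter values are example assumptions:

#include <odp_api.h>
#include <string.h>

static odp_timer_pool_t create_unnamed_timer_pool(void)
{
	odp_timer_pool_param_t param;

	memset(&param, 0, sizeof(param));
	param.res_ns     = 10 * ODP_TIME_MSEC_IN_NS;
	param.min_tmo    = 10 * ODP_TIME_MSEC_IN_NS;
	param.max_tmo    = ODP_TIME_SEC_IN_NS;
	param.num_timers = 100;
	param.clk_src    = ODP_CLOCK_CPU;

	/* A NULL name is now accepted and stored as an empty string */
	return odp_timer_pool_create(NULL, &param);
}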
diff --git a/platform/linux-generic/odp_timer.c b/platform/linux-generic/odp_timer.c index ee4c4c0..86fb4c1 100644 --- a/platform/linux-generic/odp_timer.c +++ b/platform/linux-generic/odp_timer.c @@ -222,7 +222,7 @@ static inline odp_timer_t tp_idx_to_handle(struct odp_timer_pool_s *tp, static void itimer_init(odp_timer_pool *tp); static void itimer_fini(odp_timer_pool *tp);
-static odp_timer_pool_t odp_timer_pool_new(const char *_name, +static odp_timer_pool_t odp_timer_pool_new(const char *name, const odp_timer_pool_param_t *param) { uint32_t tp_idx = odp_atomic_fetch_add_u32(&num_timer_pools, 1); @@ -238,14 +238,20 @@ static odp_timer_pool_t odp_timer_pool_new(const char *_name, ODP_CACHE_LINE_SIZE); size_t sz2 = ODP_ALIGN_ROUNDUP(sizeof(odp_timer) * param->num_timers, ODP_CACHE_LINE_SIZE); - odp_shm_t shm = odp_shm_reserve(_name, sz0 + sz1 + sz2, + odp_shm_t shm = odp_shm_reserve(name, sz0 + sz1 + sz2, ODP_CACHE_LINE_SIZE, ODP_SHM_SW_ONLY); if (odp_unlikely(shm == ODP_SHM_INVALID)) ODP_ABORT("%s: timer pool shm-alloc(%zuKB) failed\n", - _name, (sz0 + sz1 + sz2) / 1024); + name, (sz0 + sz1 + sz2) / 1024); odp_timer_pool *tp = (odp_timer_pool *)odp_shm_addr(shm); odp_atomic_init_u64(&tp->cur_tick, 0); - snprintf(tp->name, sizeof(tp->name), "%s", _name); + + if (name == NULL) { + tp->name[0] = 0; + } else { + strncpy(tp->name, name, ODP_TIMER_POOL_NAME_LEN - 1); + tp->name[ODP_TIMER_POOL_NAME_LEN - 1] = 0; + } tp->shm = shm; tp->param = *param; tp->min_rel_tck = odp_timer_ns_to_tick(tp, param->min_tmo);
commit b9877d54ec4c2259dec17751f6580f110fd447a5 Author: Matias Elo matias.elo@nokia.com Date: Fri Oct 14 11:49:11 2016 +0300
linux-gen: classification: fix creating cos with no name
Previously, trying to create a class-of-service with no name (=NULL) caused a segfault. Fix this and test it in the validation suite.
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_classification.c b/platform/linux-generic/odp_classification.c index 82760e8..de72cfb 100644 --- a/platform/linux-generic/odp_classification.c +++ b/platform/linux-generic/odp_classification.c @@ -178,9 +178,14 @@ odp_cos_t odp_cls_cos_create(const char *name, odp_cls_cos_param_t *param) for (i = 0; i < ODP_COS_MAX_ENTRY; i++) { LOCK(&cos_tbl->cos_entry[i].s.lock); if (0 == cos_tbl->cos_entry[i].s.valid) { - strncpy(cos_tbl->cos_entry[i].s.name, name, - ODP_COS_NAME_LEN - 1); - cos_tbl->cos_entry[i].s.name[ODP_COS_NAME_LEN - 1] = 0; + char *cos_name = cos_tbl->cos_entry[i].s.name; + + if (name == NULL) { + cos_name[0] = 0; + } else { + strncpy(cos_name, name, ODP_COS_NAME_LEN - 1); + cos_name[ODP_COS_NAME_LEN - 1] = 0; + } for (j = 0; j < ODP_PMR_PER_COS_MAX; j++) { cos_tbl->cos_entry[i].s.pmr[j] = NULL; cos_tbl->cos_entry[i].s.linked_cos[j] = NULL; diff --git a/test/common_plat/validation/api/classification/odp_classification_basic.c b/test/common_plat/validation/api/classification/odp_classification_basic.c index 372377d..9817287 100644 --- a/test/common_plat/validation/api/classification/odp_classification_basic.c +++ b/test/common_plat/validation/api/classification/odp_classification_basic.c @@ -16,7 +16,6 @@ void classification_test_create_cos(void) odp_cls_cos_param_t cls_param; odp_pool_t pool; odp_queue_t queue; - char cosname[ODP_COS_NAME_LEN];
pool = pool_create("cls_basic_pool"); CU_ASSERT_FATAL(pool != ODP_POOL_INVALID); @@ -24,13 +23,12 @@ void classification_test_create_cos(void) queue = queue_create("cls_basic_queue", true); CU_ASSERT_FATAL(queue != ODP_QUEUE_INVALID);
- sprintf(cosname, "ClassOfService"); odp_cls_cos_param_init(&cls_param); cls_param.pool = pool; cls_param.queue = queue; cls_param.drop_policy = ODP_COS_DROP_POOL;
- cos = odp_cls_cos_create(cosname, &cls_param); + cos = odp_cls_cos_create(NULL, &cls_param); CU_ASSERT(odp_cos_to_u64(cos) != odp_cos_to_u64(ODP_COS_INVALID)); odp_cos_destroy(cos); odp_pool_destroy(pool);
commit ac563d15b95d884764ffa2f48eedce6f5b408fca Author: Matias Elo matias.elo@nokia.com Date: Fri Oct 14 11:49:10 2016 +0300
linux-gen: queue: fix creating queue with no name
Previously, trying to create a queue with no name (=NULL) caused a segfault. Fix this and test it in the validation suite.
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c index 8667076..6bf1629 100644 --- a/platform/linux-generic/odp_queue.c +++ b/platform/linux-generic/odp_queue.c @@ -64,8 +64,12 @@ queue_entry_t *get_qentry(uint32_t queue_id) static int queue_init(queue_entry_t *queue, const char *name, const odp_queue_param_t *param) { - strncpy(queue->s.name, name, ODP_QUEUE_NAME_LEN - 1); - + if (name == NULL) { + queue->s.name[0] = 0; + } else { + strncpy(queue->s.name, name, ODP_QUEUE_NAME_LEN - 1); + queue->s.name[ODP_QUEUE_NAME_LEN - 1] = 0; + } memcpy(&queue->s.param, param, sizeof(odp_queue_param_t)); if (queue->s.param.sched.lock_count > SCHEDULE_ORDERED_LOCKS_PER_QUEUE) diff --git a/test/common_plat/validation/api/queue/queue.c b/test/common_plat/validation/api/queue/queue.c index dc3a977..1f7913a 100644 --- a/test/common_plat/validation/api/queue/queue.c +++ b/test/common_plat/validation/api/queue/queue.c @@ -137,7 +137,7 @@ void queue_test_mode(void)
void queue_test_param(void) { - odp_queue_t queue; + odp_queue_t queue, null_queue; odp_event_t enev[MAX_BUFFER_QUEUE]; odp_event_t deev[MAX_BUFFER_QUEUE]; odp_buffer_t buf; @@ -173,6 +173,11 @@ void queue_test_param(void) CU_ASSERT(&queue_context == odp_queue_context(queue)); CU_ASSERT(odp_queue_destroy(queue) == 0);
+ /* Create queue with no name */ + odp_queue_param_init(&qparams); + null_queue = odp_queue_create(NULL, &qparams); + CU_ASSERT(ODP_QUEUE_INVALID != null_queue); + /* Plain type queue */ odp_queue_param_init(&qparams); qparams.type = ODP_QUEUE_TYPE_PLAIN; @@ -185,6 +190,9 @@ void queue_test_param(void) CU_ASSERT(ODP_QUEUE_TYPE_PLAIN == odp_queue_type(queue)); CU_ASSERT(&queue_context == odp_queue_context(queue));
+ /* Destroy queue with no name */ + CU_ASSERT(odp_queue_destroy(null_queue) == 0); + msg_pool = odp_pool_lookup("msg_pool"); buf = odp_buffer_alloc(msg_pool); CU_ASSERT_FATAL(buf != ODP_BUFFER_INVALID);
commit 279ecc54b69ad1621fbba837bee31adcc9fd704a Author: Matias Elo matias.elo@nokia.com Date: Fri Oct 14 11:49:09 2016 +0300
linux-gen: schedule: fix creating event group with no name
Previously, trying to create an event group with no name (=NULL) caused a segfault. Fix this and test it in the validation suite.
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c index 81e79c9..86b1cec 100644 --- a/platform/linux-generic/odp_schedule.c +++ b/platform/linux-generic/odp_schedule.c @@ -181,6 +181,7 @@ typedef struct { struct { char name[ODP_SCHED_GROUP_NAME_LEN]; odp_thrmask_t mask; + int allocated; } sched_grp[NUM_SCHED_GRPS];
struct { @@ -869,11 +870,19 @@ static odp_schedule_group_t schedule_group_create(const char *name, odp_spinlock_lock(&sched->grp_lock);
for (i = SCHED_GROUP_NAMED; i < NUM_SCHED_GRPS; i++) { - if (sched->sched_grp[i].name[0] == 0) { - strncpy(sched->sched_grp[i].name, name, - ODP_SCHED_GROUP_NAME_LEN - 1); + if (!sched->sched_grp[i].allocated) { + char *grp_name = sched->sched_grp[i].name; + + if (name == NULL) { + grp_name[0] = 0; + } else { + strncpy(grp_name, name, + ODP_SCHED_GROUP_NAME_LEN - 1); + grp_name[ODP_SCHED_GROUP_NAME_LEN - 1] = 0; + } odp_thrmask_copy(&sched->sched_grp[i].mask, mask); group = (odp_schedule_group_t)i; + sched->sched_grp[i].allocated = 1; break; } } @@ -889,10 +898,11 @@ static int schedule_group_destroy(odp_schedule_group_t group) odp_spinlock_lock(&sched->grp_lock);
if (group < NUM_SCHED_GRPS && group >= SCHED_GROUP_NAMED && - sched->sched_grp[group].name[0] != 0) { + sched->sched_grp[group].allocated) { odp_thrmask_zero(&sched->sched_grp[group].mask); memset(sched->sched_grp[group].name, 0, ODP_SCHED_GROUP_NAME_LEN); + sched->sched_grp[group].allocated = 0; ret = 0; } else { ret = -1; @@ -928,7 +938,7 @@ static int schedule_group_join(odp_schedule_group_t group, odp_spinlock_lock(&sched->grp_lock);
if (group < NUM_SCHED_GRPS && group >= SCHED_GROUP_NAMED && - sched->sched_grp[group].name[0] != 0) { + sched->sched_grp[group].allocated) { odp_thrmask_or(&sched->sched_grp[group].mask, &sched->sched_grp[group].mask, mask); @@ -949,7 +959,7 @@ static int schedule_group_leave(odp_schedule_group_t group, odp_spinlock_lock(&sched->grp_lock);
if (group < NUM_SCHED_GRPS && group >= SCHED_GROUP_NAMED && - sched->sched_grp[group].name[0] != 0) { + sched->sched_grp[group].allocated) { odp_thrmask_t leavemask;
odp_thrmask_xor(&leavemask, mask, &sched->mask_all); @@ -973,7 +983,7 @@ static int schedule_group_thrmask(odp_schedule_group_t group, odp_spinlock_lock(&sched->grp_lock);
if (group < NUM_SCHED_GRPS && group >= SCHED_GROUP_NAMED && - sched->sched_grp[group].name[0] != 0) { + sched->sched_grp[group].allocated) { *thrmask = sched->sched_grp[group].mask; ret = 0; } else { @@ -992,7 +1002,7 @@ static int schedule_group_info(odp_schedule_group_t group, odp_spinlock_lock(&sched->grp_lock);
if (group < NUM_SCHED_GRPS && group >= SCHED_GROUP_NAMED && - sched->sched_grp[group].name[0] != 0) { + sched->sched_grp[group].allocated) { info->name = sched->sched_grp[group].name; info->thrmask = sched->sched_grp[group].mask; ret = 0; diff --git a/platform/linux-generic/odp_schedule_sp.c b/platform/linux-generic/odp_schedule_sp.c index 879eb5c..8b355da 100644 --- a/platform/linux-generic/odp_schedule_sp.c +++ b/platform/linux-generic/odp_schedule_sp.c @@ -490,8 +490,15 @@ static odp_schedule_group_t schedule_group_create(const char *name,
for (i = NUM_STATIC_GROUP; i < NUM_GROUP; i++) { if (!sched_group->s.group[i].allocated) { - strncpy(sched_group->s.group[i].name, name, - ODP_SCHED_GROUP_NAME_LEN); + char *grp_name = sched_group->s.group[i].name; + + if (name == NULL) { + grp_name[0] = 0; + } else { + strncpy(grp_name, name, + ODP_SCHED_GROUP_NAME_LEN - 1); + grp_name[ODP_SCHED_GROUP_NAME_LEN - 1] = 0; + } odp_thrmask_copy(&sched_group->s.group[i].mask, thrmask); sched_group->s.group[i].allocated = 1; diff --git a/test/common_plat/validation/api/scheduler/scheduler.c b/test/common_plat/validation/api/scheduler/scheduler.c index 734135e..952561c 100644 --- a/test/common_plat/validation/api/scheduler/scheduler.c +++ b/test/common_plat/validation/api/scheduler/scheduler.c @@ -273,7 +273,7 @@ void scheduler_test_groups(void) ODP_SCHED_SYNC_ORDERED}; int thr_id = odp_thread_id(); odp_thrmask_t zeromask, mymask, testmask; - odp_schedule_group_t mygrp1, mygrp2, lookup; + odp_schedule_group_t mygrp1, mygrp2, null_grp, lookup; odp_schedule_group_info_t info;
odp_thrmask_zero(&zeromask); @@ -327,6 +327,10 @@ void scheduler_test_groups(void) CU_ASSERT(rc == 0); CU_ASSERT(!odp_thrmask_isset(&testmask, thr_id));
+ /* Create group with no name */ + null_grp = odp_schedule_group_create(NULL, &zeromask); + CU_ASSERT(null_grp != ODP_SCHED_GROUP_INVALID); + /* We shouldn't be able to find our second group before creating it */ lookup = odp_schedule_group_lookup("Test Group 2"); CU_ASSERT(lookup == ODP_SCHED_GROUP_INVALID); @@ -338,6 +342,9 @@ void scheduler_test_groups(void) lookup = odp_schedule_group_lookup("Test Group 2"); CU_ASSERT(lookup == mygrp2);
+ /* Destroy group with no name */ + CU_ASSERT_FATAL(odp_schedule_group_destroy(null_grp) == 0); + /* Verify we're not part of it */ rc = odp_schedule_group_thrmask(mygrp2, &testmask); CU_ASSERT(rc == 0);
commit ad7f8f4ea11a8e40d853cd9b2b0bc3e6f7876a8b Author: Matias Elo matias.elo@nokia.com Date: Fri Oct 14 11:49:08 2016 +0300
api: improve name argument definitions in *_create() functions
The current APIs don't always define valid name argument values. Fix this by stating when NULL is a valid value and when the name string doesn't have to be unique.
Signed-off-by: Matias Elo matias.elo@nokia.com Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
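For illustration, a hedged sketch of what the clarified wording permits (not part of the patch; the helper name, the pool sizing values and the lookup name are arbitrary): a NULL name is valid, names need not be unique, and a lookup returns only a single matching handle.

    #include <odp_api.h>

    static void unnamed_pool_example(void)
    {
            odp_pool_param_t params;
            odp_pool_t pool, found;

            odp_pool_param_init(&params);
            params.type     = ODP_POOL_BUFFER;
            params.buf.num  = 32;
            params.buf.size = 256;

            /* NULL name is explicitly valid */
            pool = odp_pool_create(NULL, &params);

            /* names need not be unique; lookup returns the first match */
            found = odp_pool_lookup("msg_pool");
            (void)found;

            if (pool != ODP_POOL_INVALID)
                    odp_pool_destroy(pool);
    }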
diff --git a/include/odp/api/spec/classification.h b/include/odp/api/spec/classification.h index 189c91f..0e442c7 100644 --- a/include/odp/api/spec/classification.h +++ b/include/odp/api/spec/classification.h @@ -193,12 +193,14 @@ int odp_cls_capability(odp_cls_capability_t *capability); /** * Create a class-of-service * - * @param name String intended for debugging purposes. + * The use of class-of-service name is optional. Unique names are not required. * - * @param param class of service parameters + * @param name Name of the class-of-service or NULL. Maximum string + * length is ODP_COS_NAME_LEN. + * @param param Class-of-service parameters * - * @retval class of service handle - * @retval ODP_COS_INVALID on failure. + * @retval Class-of-service handle + * @retval ODP_COS_INVALID on failure. * * @note ODP_QUEUE_INVALID and ODP_POOL_INVALID are valid values for queue * and pool associated with a class of service and when any one of these values diff --git a/include/odp/api/spec/pool.h b/include/odp/api/spec/pool.h index c80c98a..a1331e3 100644 --- a/include/odp/api/spec/pool.h +++ b/include/odp/api/spec/pool.h @@ -220,14 +220,12 @@ typedef struct odp_pool_param_t { /** * Create a pool * - * This routine is used to create a pool. It take two arguments: the optional - * name of the pool to be created and a parameter struct that describes the - * pool to be created. If a name is not specified the result is an anonymous - * pool that cannot be referenced by odp_pool_lookup(). - * - * @param name Name of the pool, max ODP_POOL_NAME_LEN-1 chars. - * May be specified as NULL for anonymous pools. + * This routine is used to create a pool. The use of pool name is optional. + * Unique names are not required. However, odp_pool_lookup() returns only a + * single matching pool. * + * @param name Name of the pool or NULL. Maximum string length is + * ODP_POOL_NAME_LEN. * @param params Pool parameters. * * @return Handle of the created pool @@ -256,11 +254,8 @@ int odp_pool_destroy(odp_pool_t pool); * * @param name Name of the pool * - * @return Handle of found pool + * @return Handle of the first matching pool * @retval ODP_POOL_INVALID Pool could not be found - * - * @note This routine cannot be used to look up an anonymous pool (one created - * with no name). */ odp_pool_t odp_pool_lookup(const char *name);
diff --git a/include/odp/api/spec/queue.h b/include/odp/api/spec/queue.h index 31dc9f5..b0c5e31 100644 --- a/include/odp/api/spec/queue.h +++ b/include/odp/api/spec/queue.h @@ -173,9 +173,12 @@ typedef struct odp_queue_param_t { * Create a queue according to the queue parameters. Queue type is specified by * queue parameter 'type'. Use odp_queue_param_init() to initialize parameters * into their default values. Default values are also used when 'param' pointer - * is NULL. The default queue type is ODP_QUEUE_TYPE_PLAIN. + * is NULL. The default queue type is ODP_QUEUE_TYPE_PLAIN. The use of queue + * name is optional. Unique names are not required. However, odp_queue_lookup() + * returns only a single matching queue. * - * @param name Queue name + * @param name Name of the queue or NULL. Maximum string length is + * ODP_QUEUE_NAME_LEN. * @param param Queue parameters. Uses defaults if NULL. * * @return Queue handle @@ -203,7 +206,7 @@ int odp_queue_destroy(odp_queue_t queue); * * @param name Queue name * - * @return Queue handle + * @return Handle of the first matching queue * @retval ODP_QUEUE_INVALID on failure */ odp_queue_t odp_queue_lookup(const char *name); diff --git a/include/odp/api/spec/schedule.h b/include/odp/api/spec/schedule.h index d924da2..f976a4c 100644 --- a/include/odp/api/spec/schedule.h +++ b/include/odp/api/spec/schedule.h @@ -214,10 +214,12 @@ int odp_schedule_num_prio(void); * mask will receive events from a queue that belongs to the schedule group. * Thread masks of various schedule groups may overlap. There are predefined * groups such as ODP_SCHED_GROUP_ALL and ODP_SCHED_GROUP_WORKER, which are - * always present and automatically updated. Group name is optional - * (may be NULL) and can have ODP_SCHED_GROUP_NAME_LEN characters in maximum. + * always present and automatically updated. The use of group name is optional. + * Unique names are not required. However, odp_schedule_group_lookup() returns + * only a single matching group. * - * @param name Schedule group name + * @param name Name of the schedule group or NULL. Maximum string length is + * ODP_SCHED_GROUP_NAME_LEN. * @param mask Thread mask * * @return Schedule group handle @@ -245,11 +247,9 @@ int odp_schedule_group_destroy(odp_schedule_group_t group); /** * Look up a schedule group by name * - * Return the handle of a schedule group from its name - * * @param name Name of schedule group * - * @return Handle of schedule group for specified name + * @return Handle of the first matching schedule group * @retval ODP_SCHEDULE_GROUP_INVALID No matching schedule group found */ odp_schedule_group_t odp_schedule_group_lookup(const char *name); diff --git a/include/odp/api/spec/timer.h b/include/odp/api/spec/timer.h index df37189..49221c4 100644 --- a/include/odp/api/spec/timer.h +++ b/include/odp/api/spec/timer.h @@ -108,7 +108,10 @@ typedef struct { /** * Create a timer pool * - * @param name Name of the timer pool. The string will be copied. + * The use of pool name is optional. Unique names are not required. + * + * @param name Name of the timer pool or NULL. Maximum string length is + * ODP_TIMER_POOL_NAME_LEN. * @param params Timer pool parameters. The content will be copied. * * @return Timer pool handle on success
commit 50cfbff244a1fa29314520eb9ca9bdf5df445df6 Author: Christophe Milard christophe.milard@linaro.org Date: Tue Sep 13 14:40:46 2016 +0200
linux-generic: _fdserver: allocating data table dynamically
The table containing the saved file-descriptor<->{context, key} pairs is now dynamically malloc'd in the fd server process, hence avoiding the memory waste that occurred in the other processes when the table was statically reserved in all of them.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/_fdserver.c b/platform/linux-generic/_fdserver.c index 97661d0..41a630b 100644 --- a/platform/linux-generic/_fdserver.c +++ b/platform/linux-generic/_fdserver.c @@ -73,7 +73,7 @@ typedef struct fdentry_s { uint64_t key; int fd; } fdentry_t; -static fdentry_t fd_table[FDSERVER_MAX_ENTRIES]; +static fdentry_t *fd_table; static int fd_table_nb_entries;
/* @@ -622,8 +622,20 @@ int _odp_fdserver_init_global(void) /* TODO: pin the server on appropriate service cpu mask */ /* when (if) we can agree on the usage of service mask */
+ /* allocate the space for the file descriptor<->key table: */ + fd_table = malloc(FDSERVER_MAX_ENTRIES * sizeof(fdentry_t)); + if (!fd_table) { + ODP_ERR("maloc failed!\n"); + exit(1); + } + + /* wait for clients requests */ wait_requests(sock); /* Returns when server is stopped */ close(sock); + + /* release the file descriptor table: */ + free(fd_table); + exit(0); }
commit 234dd2f623d73b069e43282baf96b48473d0ef1c Author: Christophe Milard christophe.milard@linaro.org Date: Tue Sep 13 12:05:18 2016 +0200
linux-generic: _fdserver: fixing comment typo
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/_fdserver.c b/platform/linux-generic/_fdserver.c index bf36eb2..97661d0 100644 --- a/platform/linux-generic/_fdserver.c +++ b/platform/linux-generic/_fdserver.c @@ -103,7 +103,7 @@ typedef struct fd_server_msg { * Send a fdserver_msg, possibly including a file descriptor, on the socket * This function is used both by: * -the client (sending a FD_REGISTER_REQ with a file descriptor to be shared, - * or FD_LOOKUP_REQ/FD_DEREGISTER_REQ without a file descirptor) + * or FD_LOOKUP_REQ/FD_DEREGISTER_REQ without a file descriptor) * -the server (sending FD_REGISTER_ACK/NACK, FD_LOOKUP_NACK, * FD_DEREGISTER_ACK/NACK... without a fd or a * FD_LOOKUP_ACK with a fd) @@ -165,7 +165,7 @@ static int send_fdserver_msg(int sock, int command, * given socket. * This function is used both by: * -the server (receiving a FD_REGISTER_REQ with a file descriptor to be shared, - * or FD_LOOKUP_REQ, FD_DEREGISTER_REQ without a file descirptor) + * or FD_LOOKUP_REQ, FD_DEREGISTER_REQ without a file descriptor) * -the client (receiving FD_REGISTER_ACK...without a fd or a FD_LOOKUP_ACK with * a fd) * This function make use of the ancillary data (control data) to pass and
commit b93fd7af775a04c50a064a241d82ba3b7bf999f7 Author: Christophe Milard christophe.milard@linaro.org Date: Sat Aug 20 09:45:57 2016 +0200
linux-generic: system_info: adding huge page dir
The huge page information is split out into its own structure, and a function to get the huge page mount directory is added. This function is called at init time so the information is available later on.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Brian Brooks brian.brooks@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
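Illustrative only: the mount directory stays internal to the platform (odp_global_data.hugepage_info), but the default huge page size remains reachable through the existing public accessor. A minimal sketch, assuming an initialized ODP instance; the zero check is an assumption about systems without huge page support:

    #include <stdio.h>
    #include <inttypes.h>
    #include <odp_api.h>

    static void print_huge_page_size(void)
    {
            uint64_t sz = odp_sys_huge_page_size();

            if (sz == 0)
                    printf("no default huge page size detected\n");
            else
                    printf("default huge page size: %" PRIu64 " bytes\n", sz);
    }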
diff --git a/platform/linux-generic/include/odp_internal.h b/platform/linux-generic/include/odp_internal.h index e221435..6063b0f 100644 --- a/platform/linux-generic/include/odp_internal.h +++ b/platform/linux-generic/include/odp_internal.h @@ -29,7 +29,6 @@ extern __thread int __odp_errno;
typedef struct { uint64_t cpu_hz_max[MAX_CPU_NUMBER]; - uint64_t default_huge_page_size; uint64_t page_size; int cache_line_size; int cpu_count; @@ -37,11 +36,17 @@ typedef struct { char model_str[MAX_CPU_NUMBER][128]; } system_info_t;
+typedef struct { + uint64_t default_huge_page_size; + char *default_huge_page_dir; +} hugepage_info_t; + struct odp_global_data_s { pid_t main_pid; odp_log_func_t log_fn; odp_abort_func_t abort_fn; system_info_t system_info; + hugepage_info_t hugepage_info; odp_cpumask_t control_cpus; odp_cpumask_t worker_cpus; int num_cpus_installed; diff --git a/platform/linux-generic/odp_system_info.c b/platform/linux-generic/odp_system_info.c index bbe5358..18c61db 100644 --- a/platform/linux-generic/odp_system_info.c +++ b/platform/linux-generic/odp_system_info.c @@ -4,6 +4,13 @@ * SPDX-License-Identifier: BSD-3-Clause */
+/* + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + */ + #include <odp_posix_extensions.h>
#include <odp/api/system_info.h> @@ -11,11 +18,13 @@ #include <odp_debug_internal.h> #include <odp/api/align.h> #include <odp/api/cpu.h> +#include <errno.h> #include <pthread.h> #include <sched.h> #include <string.h> #include <stdio.h> #include <inttypes.h> +#include <ctype.h>
/* sysconf */ #include <unistd.h> @@ -97,6 +106,158 @@ static uint64_t default_huge_page_size(void) }
/* + * split string into tokens. largely "inspired" by dpdk: + * lib/librte_eal/common/eal_common_string_fns.c: rte_strsplit + */ +static int strsplit(char *string, int stringlen, + char **tokens, int maxtokens, char delim) +{ + int i, tok = 0; + int tokstart = 1; /* first token is right at start of string */ + + if (string == NULL || tokens == NULL) + return -1; + + for (i = 0; i < stringlen; i++) { + if (string[i] == '\0' || tok >= maxtokens) + break; + if (tokstart) { + tokstart = 0; + tokens[tok++] = &string[i]; + } + if (string[i] == delim) { + string[i] = '\0'; + tokstart = 1; + } + } + return tok; +} + +/* + * Converts a numeric string to the equivalent uint64_t value. + * As well as straight number conversion, also recognises the suffixes + * k, m and g for kilobytes, megabytes and gigabytes respectively. + * + * If a negative number is passed in i.e. a string with the first non-black + * character being "-", zero is returned. Zero is also returned in the case of + * an error with the strtoull call in the function. + * largely "inspired" by dpdk: + * lib/librte_eal/common/include/rte_common.h: rte_str_to_size + * + * param str + * String containing number to convert. + * return + * Number. + */ +static inline uint64_t str_to_size(const char *str) +{ + char *endptr; + unsigned long long size; + + while (isspace((int)*str)) + str++; + if (*str == '-') + return 0; + + errno = 0; + size = strtoull(str, &endptr, 0); + if (errno) + return 0; + + if (*endptr == ' ') + endptr++; /* allow 1 space gap */ + + switch (*endptr) { + case 'G': + case 'g': + size *= 1024; /* fall-through */ + case 'M': + case 'm': + size *= 1024; /* fall-through */ + case 'K': + case 'k': + size *= 1024; /* fall-through */ + default: + break; + } + return size; +} + +/* + * returns a malloced string containing the name of the directory for + * huge pages of a given size (0 for default) + * largely "inspired" by dpdk: + * lib/librte_eal/linuxapp/eal/eal_hugepage_info.c: get_hugepage_dir + * + * Analysis of /proc/mounts + */ +static char *get_hugepage_dir(uint64_t hugepage_sz) +{ + enum proc_mount_fieldnames { + DEVICE = 0, + MOUNTPT, + FSTYPE, + OPTIONS, + _FIELDNAME_MAX + }; + static uint64_t default_size; + const char proc_mounts[] = "/proc/mounts"; + const char hugetlbfs_str[] = "hugetlbfs"; + const size_t htlbfs_str_len = sizeof(hugetlbfs_str) - 1; + const char pagesize_opt[] = "pagesize="; + const size_t pagesize_opt_len = sizeof(pagesize_opt) - 1; + const char split_tok = ' '; + char *tokens[_FIELDNAME_MAX]; + char buf[BUFSIZ]; + char *retval = NULL; + const char *pagesz_str; + uint64_t pagesz; + FILE *fd = fopen(proc_mounts, "r"); + + if (fd == NULL) + return NULL; + + if (default_size == 0) + default_size = default_huge_page_size(); + + if (hugepage_sz == 0) + hugepage_sz = default_size; + + while (fgets(buf, sizeof(buf), fd)) { + if (strsplit(buf, sizeof(buf), tokens, + _FIELDNAME_MAX, split_tok) != _FIELDNAME_MAX) { + ODP_ERR("Error parsing %s\n", proc_mounts); + break; /* return NULL */ + } + + /* is this hugetlbfs? 
*/ + if (!strncmp(tokens[FSTYPE], hugetlbfs_str, htlbfs_str_len)) { + pagesz_str = strstr(tokens[OPTIONS], pagesize_opt); + + /* No explicit size, default page size is compared */ + if (pagesz_str == NULL) { + if (hugepage_sz == default_size) { + retval = strdup(tokens[MOUNTPT]); + break; + } + } + /* there is an explicit page size, so check it */ + else { + pagesz = + str_to_size(&pagesz_str[pagesize_opt_len]); + if (pagesz == hugepage_sz) { + retval = strdup(tokens[MOUNTPT]); + break; + } + } + } /* end if strncmp hugetlbfs */ + } /* end while fgets */ + + fclose(fd); + return retval; +} + +/* * Analysis of /sys/devices/system/cpu/ files */ static int systemcpu(system_info_t *sysinfo) @@ -125,11 +286,21 @@ static int systemcpu(system_info_t *sysinfo) return -1; }
- sysinfo->default_huge_page_size = default_huge_page_size(); - return 0; }
+/* + * Huge page information + */ +static int system_hp(hugepage_info_t *hugeinfo) +{ + hugeinfo->default_huge_page_size = default_huge_page_size(); + + /* default_huge_page_dir may be NULL if no huge page support */ + hugeinfo->default_huge_page_dir = get_hugepage_dir(0); + + return 0; +}
/* * System info initialisation @@ -157,6 +328,8 @@ int odp_system_info_init(void) return -1; }
+ system_hp(&odp_global_data.hugepage_info); + return 0; }
@@ -165,6 +338,8 @@ int odp_system_info_init(void) */ int odp_system_info_term(void) { + free(odp_global_data.hugepage_info.default_huge_page_dir); + return 0; }
@@ -200,7 +375,7 @@ uint64_t odp_cpu_hz_max_id(int id)
uint64_t odp_sys_huge_page_size(void) { - return odp_global_data.system_info.default_huge_page_size; + return odp_global_data.hugepage_info.default_huge_page_size; }
uint64_t odp_sys_page_size(void)
commit 1e1312c15e96be77eccc0fb8e3aa35d4c7da72f6 Author: Christophe Milard christophe.milard@linaro.org Date: Sat Aug 20 09:45:56 2016 +0200
linux-gen: fdserver: new fdserver added
A fdserver is added and started at init time. The role of the fdserver (file descriptor server) is to enable sharing of file descriptors between unrelated processes: processes which wish to share a file descriptor may register it with the server, and processes wishing to use the shared file descriptors can do a lookup. When registration occurs, a triple {context, key, fd} is provided to the server. The context identifies the client and scope (i.e. shmem). The key is implemented as a 64-bit integer and can be any value; the server does not care as long as keys are unique. The file descriptor can then be retrieved by another process providing the same context and key. File descriptors passed this way are converted on the fly during the unix domain socket communication that occurs between the server and its clients. This is done by using the ancillary (control) data part of the message.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Brian Brooks brian.brooks@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
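For illustration, a hedged sketch of the internal client API described above (linux-generic internal, not a public ODP API; the key value 42 and the reuse of FD_SRV_CTX_NA are purely illustrative, since real users would add their own context to the fd_server_context_e enum):

    #include <_fdserver_internal.h>

    /* in the process that owns the descriptor: */
    static int share_fd(int fd)
    {
            /* the key only has to be unique within the context */
            return _odp_fdserver_register_fd(FD_SRV_CTX_NA, 42, fd);
    }

    /* in any other process of the same ODP instance: */
    static int get_shared_fd(void)
    {
            return _odp_fdserver_lookup_fd(FD_SRV_CTX_NA, 42);
    }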
diff --git a/platform/linux-generic/Makefile.am b/platform/linux-generic/Makefile.am index 8494569..3e29f54 100644 --- a/platform/linux-generic/Makefile.am +++ b/platform/linux-generic/Makefile.am @@ -104,6 +104,7 @@ odpdrvinclude_HEADERS = \ $(srcdir)/include/odp/drv/compiler.h
noinst_HEADERS = \ + ${srcdir}/include/_fdserver_internal.h \ ${srcdir}/include/odp_align_internal.h \ ${srcdir}/include/odp_atomic_internal.h \ ${srcdir}/include/odp_buffer_inlines.h \ @@ -146,6 +147,7 @@ noinst_HEADERS = \ ${srcdir}/Makefile.inc
__LIB__libodp_linux_la_SOURCES = \ + _fdserver.c \ odp_atomic.c \ odp_barrier.c \ odp_buffer.c \ diff --git a/platform/linux-generic/_fdserver.c b/platform/linux-generic/_fdserver.c new file mode 100644 index 0000000..bf36eb2 --- /dev/null +++ b/platform/linux-generic/_fdserver.c @@ -0,0 +1,655 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/* + * This file implements a file descriptor sharing server enabling + * sharing of file descriptors between processes, regardless of fork time. + * + * File descriptors are process scoped, but they can be "sent and converted + * on the fly" between processes using special unix domain socket ancillary + * data. + * The receiving process gets a file descriptor "pointing" to the same thing + * as the one sent (but the value of the file descriptor itself may be different + * from the one sent). + * Because ODP applications are responsible for creating ODP threads (i.e. + * pthreads or linux processes), ODP has no control on the order things happen: + * Nothing prevent a thread A to fork B and C, and then C creating a pktio + * which will be used by A and B to send/receive packets. + * Assuming this pktio uses a file descriptor, the latter will need to be + * shared between the processes, despite the "non convenient" fork time. + * The shared memory allocator is likely to use this as well to be able to + * share memory regardless of fork() time. + * This server handles a table of {(context,key)<-> fd} pair, and is + * interfaced by the following functions: + * + * _odp_fdserver_register_fd(context, key, fd_to_send); + * _odp_fdserver_deregister_fd(context, key); + * _odp_fdserver_lookup_fd(context, key); + * + * which are used to register/deregister or querry for file descriptor based + * on a context and key value couple, which has to be unique. + * + * Note again that the file descriptors stored here are local to this server + * process and get converted both when registered or looked up. + */ + +#include <odp_posix_extensions.h> +#include <odp/api/spinlock.h> +#include <odp_internal.h> +#include <odp_debug_internal.h> +#include <_fdserver_internal.h> + +#include <stdio.h> +#include <stdlib.h> +#include <errno.h> +#include <string.h> +#include <sys/types.h> +#include <signal.h> +#include <sys/socket.h> +#include <sys/un.h> +#include <unistd.h> +#include <inttypes.h> +#include <sys/mman.h> +#include <sys/wait.h> + +#define FDSERVER_SOCKPATH_MAXLEN 32 +#define FDSERVER_SOCKPATH_FORMAT "/tmp/odp-%d-fdserver" +#define FDSERVER_BACKLOG 5 + +#ifndef MAP_ANONYMOUS +#define MAP_ANONYMOUS MAP_ANON +#endif + +/* when accessing the client functions, clients should be mutexed: */ +odp_spinlock_t *client_lock; + +/* define the tables of file descriptors handled by this server: */ +#define FDSERVER_MAX_ENTRIES 256 +typedef struct fdentry_s { + fd_server_context_e context; + uint64_t key; + int fd; +} fdentry_t; +static fdentry_t fd_table[FDSERVER_MAX_ENTRIES]; +static int fd_table_nb_entries; + +/* + * define the message struct used for communication between client and server + * (this single message is used in both direction) + * The file descriptors are sent out of band as ancillary data for conversion. 
+ */ +typedef struct fd_server_msg { + int command; + fd_server_context_e context; + uint64_t key; +} fdserver_msg_t; +/* possible commands are: */ +#define FD_REGISTER_REQ 1 /* client -> server */ +#define FD_REGISTER_ACK 2 /* server -> client */ +#define FD_REGISTER_NACK 3 /* server -> client */ +#define FD_LOOKUP_REQ 4 /* client -> server */ +#define FD_LOOKUP_ACK 5 /* server -> client */ +#define FD_LOOKUP_NACK 6 /* server -> client */ +#define FD_DEREGISTER_REQ 7 /* client -> server */ +#define FD_DEREGISTER_ACK 8 /* server -> client */ +#define FD_DEREGISTER_NACK 9 /* server -> client */ +#define FD_SERVERSTOP_REQ 10 /* client -> server (stops) */ + +/* + * Client and server function: + * Send a fdserver_msg, possibly including a file descriptor, on the socket + * This function is used both by: + * -the client (sending a FD_REGISTER_REQ with a file descriptor to be shared, + * or FD_LOOKUP_REQ/FD_DEREGISTER_REQ without a file descirptor) + * -the server (sending FD_REGISTER_ACK/NACK, FD_LOOKUP_NACK, + * FD_DEREGISTER_ACK/NACK... without a fd or a + * FD_LOOKUP_ACK with a fd) + * This function make use of the ancillary data (control data) to pass and + * convert file descriptors over UNIX sockets + * Return -1 on error, 0 on success. + */ +static int send_fdserver_msg(int sock, int command, + fd_server_context_e context, uint64_t key, + int fd_to_send) +{ + struct msghdr socket_message; + struct iovec io_vector[1]; /* one msg frgmt only */ + struct cmsghdr *control_message = NULL; + int *fd_location; + fdserver_msg_t msg; + int res; + + char ancillary_data[CMSG_SPACE(sizeof(int))]; + + /* prepare the register request body (single framgent): */ + msg.command = command; + msg.context = context; + msg.key = key; + io_vector[0].iov_base = &msg; + io_vector[0].iov_len = sizeof(fdserver_msg_t); + + /* initialize socket message */ + memset(&socket_message, 0, sizeof(struct msghdr)); + socket_message.msg_iov = io_vector; + socket_message.msg_iovlen = 1; + + if (fd_to_send >= 0) { + /* provide space for the ancillary data */ + memset(ancillary_data, 0, CMSG_SPACE(sizeof(int))); + socket_message.msg_control = ancillary_data; + socket_message.msg_controllen = CMSG_SPACE(sizeof(int)); + + /* initialize a single ancillary data element for fd passing */ + control_message = CMSG_FIRSTHDR(&socket_message); + control_message->cmsg_level = SOL_SOCKET; + control_message->cmsg_type = SCM_RIGHTS; + control_message->cmsg_len = CMSG_LEN(sizeof(int)); + fd_location = (int *)(void *)CMSG_DATA(control_message); + *fd_location = fd_to_send; + } + res = sendmsg(sock, &socket_message, 0); + if (res < 0) { + ODP_ERR("send_fdserver_msg: %s\n", strerror(errno)); + return(-1); + } + + return 0; +} + +/* + * Client and server function + * Receive a fdserver_msg, possibly including a file descriptor, on the + * given socket. + * This function is used both by: + * -the server (receiving a FD_REGISTER_REQ with a file descriptor to be shared, + * or FD_LOOKUP_REQ, FD_DEREGISTER_REQ without a file descirptor) + * -the client (receiving FD_REGISTER_ACK...without a fd or a FD_LOOKUP_ACK with + * a fd) + * This function make use of the ancillary data (control data) to pass and + * convert file descriptors over UNIX sockets. + * Return -1 on error, 0 on success. 
+ */ +static int recv_fdserver_msg(int sock, int *command, + fd_server_context_e *context, uint64_t *key, + int *recvd_fd) +{ + struct msghdr socket_message; + struct iovec io_vector[1]; /* one msg frgmt only */ + struct cmsghdr *control_message = NULL; + int *fd_location; + fdserver_msg_t msg; + char ancillary_data[CMSG_SPACE(sizeof(int))]; + + memset(&socket_message, 0, sizeof(struct msghdr)); + memset(ancillary_data, 0, CMSG_SPACE(sizeof(int))); + + /* setup a place to fill in message contents */ + io_vector[0].iov_base = &msg; + io_vector[0].iov_len = sizeof(fdserver_msg_t); + socket_message.msg_iov = io_vector; + socket_message.msg_iovlen = 1; + + /* provide space for the ancillary data */ + socket_message.msg_control = ancillary_data; + socket_message.msg_controllen = CMSG_SPACE(sizeof(int)); + + /* receive the message */ + if (recvmsg(sock, &socket_message, MSG_CMSG_CLOEXEC) < 0) { + ODP_ERR("recv_fdserver_msg: %s\n", strerror(errno)); + return(-1); + } + + *command = msg.command; + *context = msg.context; + *key = msg.key; + + /* grab the converted file descriptor (if any) */ + *recvd_fd = -1; + + if ((socket_message.msg_flags & MSG_CTRUNC) == MSG_CTRUNC) + return 0; + + /* iterate ancillary elements to find the file descriptor: */ + for (control_message = CMSG_FIRSTHDR(&socket_message); + control_message != NULL; + control_message = CMSG_NXTHDR(&socket_message, control_message)) { + if ((control_message->cmsg_level == SOL_SOCKET) && + (control_message->cmsg_type == SCM_RIGHTS)) { + fd_location = (int *)(void *)CMSG_DATA(control_message); + *recvd_fd = *fd_location; + break; + } + } + + return 0; +} + +/* opens and returns a connected socket to the server */ +static int get_socket(void) +{ + char sockpath[FDSERVER_SOCKPATH_MAXLEN]; + int s_sock; /* server socket */ + struct sockaddr_un remote; + int len; + + /* construct the named socket path: */ + snprintf(sockpath, FDSERVER_SOCKPATH_MAXLEN, FDSERVER_SOCKPATH_FORMAT, + odp_global_data.main_pid); + + s_sock = socket(AF_UNIX, SOCK_STREAM, 0); + if (s_sock == -1) { + ODP_ERR("cannot connect to server: %s\n", strerror(errno)); + return(-1); + } + + remote.sun_family = AF_UNIX; + strcpy(remote.sun_path, sockpath); + len = strlen(remote.sun_path) + sizeof(remote.sun_family); + if (connect(s_sock, (struct sockaddr *)&remote, len) == -1) { + ODP_ERR("cannot connect to server: %s\n", strerror(errno)); + close(s_sock); + return(-1); + } + + return s_sock; +} + +/* + * Client function: + * Register a file descriptor to the server. Return -1 on error. + */ +int _odp_fdserver_register_fd(fd_server_context_e context, uint64_t key, + int fd_to_send) +{ + int s_sock; /* server socket */ + int res; + int command; + int fd; + + odp_spinlock_lock(client_lock); + + ODP_DBG("FD client register: pid=%d key=%" PRIu64 ", fd=%d\n", + getpid(), key, fd_to_send); + + s_sock = get_socket(); + if (s_sock < 0) { + odp_spinlock_unlock(client_lock); + return(-1); + } + + res = send_fdserver_msg(s_sock, FD_REGISTER_REQ, context, key, + fd_to_send); + if (res < 0) { + ODP_ERR("fd registration failure\n"); + close(s_sock); + odp_spinlock_unlock(client_lock); + return -1; + } + + res = recv_fdserver_msg(s_sock, &command, &context, &key, &fd); + + if ((res < 0) || (command != FD_REGISTER_ACK)) { + ODP_ERR("fd registration failure\n"); + close(s_sock); + odp_spinlock_unlock(client_lock); + return -1; + } + + close(s_sock); + + odp_spinlock_unlock(client_lock); + return 0; +} + +/* + * Client function: + * Deregister a file descriptor from the server. 
Return -1 on error. + */ +int _odp_fdserver_deregister_fd(fd_server_context_e context, uint64_t key) +{ + int s_sock; /* server socket */ + int res; + int command; + int fd; + + odp_spinlock_lock(client_lock); + + ODP_DBG("FD client deregister: pid=%d key=%" PRIu64 "\n", + getpid(), key); + + s_sock = get_socket(); + if (s_sock < 0) { + odp_spinlock_unlock(client_lock); + return(-1); + } + + res = send_fdserver_msg(s_sock, FD_DEREGISTER_REQ, context, key, -1); + if (res < 0) { + ODP_ERR("fd de-registration failure\n"); + close(s_sock); + odp_spinlock_unlock(client_lock); + return -1; + } + + res = recv_fdserver_msg(s_sock, &command, &context, &key, &fd); + + if ((res < 0) || (command != FD_DEREGISTER_ACK)) { + ODP_ERR("fd de-registration failure\n"); + close(s_sock); + odp_spinlock_unlock(client_lock); + return -1; + } + + close(s_sock); + + odp_spinlock_unlock(client_lock); + return 0; +} + +/* + * client function: + * lookup a file descriptor from the server. return -1 on error, + * or the file descriptor on success (>=0). + */ +int _odp_fdserver_lookup_fd(fd_server_context_e context, uint64_t key) +{ + int s_sock; /* server socket */ + int res; + int command; + int fd; + + odp_spinlock_lock(client_lock); + + s_sock = get_socket(); + if (s_sock < 0) { + odp_spinlock_unlock(client_lock); + return(-1); + } + + res = send_fdserver_msg(s_sock, FD_LOOKUP_REQ, context, key, -1); + if (res < 0) { + ODP_ERR("fd lookup failure\n"); + close(s_sock); + odp_spinlock_unlock(client_lock); + return -1; + } + + res = recv_fdserver_msg(s_sock, &command, &context, &key, &fd); + + if ((res < 0) || (command != FD_LOOKUP_ACK)) { + ODP_ERR("fd lookup failure\n"); + close(s_sock); + odp_spinlock_unlock(client_lock); + return -1; + } + + close(s_sock); + ODP_DBG("FD client lookup: pid=%d, key=%" PRIu64 ", fd=%d\n", + getpid(), key, fd); + + odp_spinlock_unlock(client_lock); + return fd; +} + +/* + * request server terminaison: + */ +static int stop_server(void) +{ + int s_sock; /* server socket */ + int res; + + odp_spinlock_lock(client_lock); + + ODP_DBG("FD sending server stop request\n"); + + s_sock = get_socket(); + if (s_sock < 0) { + odp_spinlock_unlock(client_lock); + return(-1); + } + + res = send_fdserver_msg(s_sock, FD_SERVERSTOP_REQ, 0, 0, -1); + if (res < 0) { + ODP_ERR("fd stop request failure\n"); + close(s_sock); + odp_spinlock_unlock(client_lock); + return -1; + } + + close(s_sock); + + odp_spinlock_unlock(client_lock); + return 0; +} + +/* + * server function + * receive a client request and handle it. + * Always returns 0 unless a stop request is received. 
+ */ +static int handle_request(int client_sock) +{ + int command; + fd_server_context_e context; + uint64_t key; + int fd; + int i; + + /* get a client request: */ + recv_fdserver_msg(client_sock, &command, &context, &key, &fd); + switch (command) { + case FD_REGISTER_REQ: + if ((fd < 0) || (context >= FD_SRV_CTX_END)) { + ODP_ERR("Invalid register fd or context\n"); + send_fdserver_msg(client_sock, FD_REGISTER_NACK, + FD_SRV_CTX_NA, 0, -1); + return 0; + } + + /* store the file descriptor in table: */ + if (fd_table_nb_entries < FDSERVER_MAX_ENTRIES) { + fd_table[fd_table_nb_entries].context = context; + fd_table[fd_table_nb_entries].key = key; + fd_table[fd_table_nb_entries++].fd = fd; + ODP_DBG("storing {ctx=%d, key=%" PRIu64 "}->fd=%d\n", + context, key, fd); + } else { + ODP_ERR("FD table full\n"); + send_fdserver_msg(client_sock, FD_REGISTER_NACK, + FD_SRV_CTX_NA, 0, -1); + return 0; + } + + send_fdserver_msg(client_sock, FD_REGISTER_ACK, + FD_SRV_CTX_NA, 0, -1); + break; + + case FD_LOOKUP_REQ: + if (context >= FD_SRV_CTX_END) { + ODP_ERR("invalid lookup context\n"); + send_fdserver_msg(client_sock, FD_LOOKUP_NACK, + FD_SRV_CTX_NA, 0, -1); + return 0; + } + + /* search key in table and sent reply: */ + for (i = 0; i < fd_table_nb_entries; i++) { + if ((fd_table[i].context == context) && + (fd_table[i].key == key)) { + fd = fd_table[i].fd; + ODP_DBG("lookup {ctx=%d," + " key=%" PRIu64 "}->fd=%d\n", + context, key, fd); + send_fdserver_msg(client_sock, + FD_LOOKUP_ACK, context, key, + fd); + return 0; + } + } + + /* context+key not found... send nack */ + send_fdserver_msg(client_sock, FD_LOOKUP_NACK, context, key, + -1); + break; + + case FD_DEREGISTER_REQ: + if (context >= FD_SRV_CTX_END) { + ODP_ERR("invalid deregister context\n"); + send_fdserver_msg(client_sock, FD_DEREGISTER_NACK, + FD_SRV_CTX_NA, 0, -1); + return 0; + } + + /* search key in table and remove it if found, and reply: */ + for (i = 0; i < fd_table_nb_entries; i++) { + if ((fd_table[i].context == context) && + (fd_table[i].key == key)) { + ODP_DBG("drop {ctx=%d," + " key=%" PRIu64 "}->fd=%d\n", + context, key, fd_table[i].fd); + close(fd_table[i].fd); + fd_table[i] = fd_table[--fd_table_nb_entries]; + send_fdserver_msg(client_sock, + FD_DEREGISTER_ACK, + context, key, -1); + return 0; + } + } + + /* key not found... send nack */ + send_fdserver_msg(client_sock, FD_DEREGISTER_NACK, + context, key, -1); + break; + + case FD_SERVERSTOP_REQ: + ODP_DBG("Stoping FD server\n"); + return 1; + + default: + ODP_ERR("Unexpected request\n"); + break; + } + return 0; +} + +/* + * server function + * loop forever, handling client requests one by one + */ +static void wait_requests(int sock) +{ + int c_socket; /* client connection */ + unsigned int addr_sz; + struct sockaddr_un remote; + + for (;;) { + addr_sz = sizeof(remote); + c_socket = accept(sock, (struct sockaddr *)&remote, &addr_sz); + if (c_socket == -1) { + ODP_ERR("wait_requests: %s\n", strerror(errno)); + return; + } + + if (handle_request(c_socket)) + break; + close(c_socket); + } + close(c_socket); +} + +/* + * Create a unix domain socket and fork a process to listen to incoming + * requests. 
+ */ +int _odp_fdserver_init_global(void) +{ + char sockpath[FDSERVER_SOCKPATH_MAXLEN]; + int sock; + struct sockaddr_un local; + pid_t server_pid; + int res; + + /* create the client spinlock that any client can see: */ + client_lock = mmap(NULL, sizeof(odp_spinlock_t), PROT_READ | PROT_WRITE, + MAP_SHARED | MAP_ANONYMOUS, -1, 0); + + odp_spinlock_init(client_lock); + + /* construct the server named socket path: */ + snprintf(sockpath, FDSERVER_SOCKPATH_MAXLEN, FDSERVER_SOCKPATH_FORMAT, + odp_global_data.main_pid); + + /* create UNIX domain socket: */ + sock = socket(AF_UNIX, SOCK_STREAM, 0); + if (sock == -1) { + ODP_ERR("_odp_fdserver_init_global: %s\n", strerror(errno)); + return(-1); + } + + /* remove previous named socket if it already exists: */ + unlink(sockpath); + + /* bind to new named socket: */ + local.sun_family = AF_UNIX; + strncpy(local.sun_path, sockpath, sizeof(local.sun_path)); + res = bind(sock, (struct sockaddr *)&local, sizeof(struct sockaddr_un)); + if (res == -1) { + ODP_ERR("_odp_fdserver_init_global: %s\n", strerror(errno)); + close(sock); + return(-1); + } + + /* listen for incoming conections: */ + if (listen(sock, FDSERVER_BACKLOG) == -1) { + ODP_ERR("_odp_fdserver_init_global: %s\n", strerror(errno)); + close(sock); + return(-1); + } + + /* fork a server process: */ + server_pid = fork(); + if (server_pid == -1) { + ODP_ERR("Could not fork!\n"); + close(sock); + return(-1); + } + + if (server_pid == 0) { /*child */ + /* TODO: pin the server on appropriate service cpu mask */ + /* when (if) we can agree on the usage of service mask */ + + wait_requests(sock); /* Returns when server is stopped */ + close(sock); + exit(0); + } + + /* parent */ + close(sock); + return 0; +} + +/* + * Terminate the server + */ +int _odp_fdserver_term_global(void) +{ + int status; + char sockpath[FDSERVER_SOCKPATH_MAXLEN]; + + /* close the server and wait for child terminaison*/ + stop_server(); + wait(&status); + + /* construct the server named socket path: */ + snprintf(sockpath, FDSERVER_SOCKPATH_MAXLEN, FDSERVER_SOCKPATH_FORMAT, + odp_global_data.main_pid); + + /* delete the UNIX domain socket: */ + unlink(sockpath); + + return 0; +} diff --git a/platform/linux-generic/include/_fdserver_internal.h b/platform/linux-generic/include/_fdserver_internal.h new file mode 100644 index 0000000..480ac02 --- /dev/null +++ b/platform/linux-generic/include/_fdserver_internal.h @@ -0,0 +1,38 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#ifndef _FD_SERVER_INTERNAL_H +#define _FD_SERVER_INTERNAL_H + +#ifdef __cplusplus +extern "C" { +#endif + +/* + * the following enum defines the different contextes by which the + * FD server may be used: In the FD server, the keys used to store/retrieve + * a file descriptor are actually context based: + * Both the context and the key are stored at fd registration time, + * and both the context and the key are used to retrieve a fd. + * In other words a context identifies a FD server usage, so that different + * unrelated fd server users do not have to guarantee key unicity between + * them. 
+ */ +typedef enum fd_server_context { + FD_SRV_CTX_NA, /* Not Applicable */ + FD_SRV_CTX_END, /* upper enum limit */ +} fd_server_context_e; + +int _odp_fdserver_register_fd(fd_server_context_e context, uint64_t key, + int fd); +int _odp_fdserver_deregister_fd(fd_server_context_e context, uint64_t key); +int _odp_fdserver_lookup_fd(fd_server_context_e context, uint64_t key); + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/platform/linux-generic/include/odp_internal.h b/platform/linux-generic/include/odp_internal.h index 3429781..e221435 100644 --- a/platform/linux-generic/include/odp_internal.h +++ b/platform/linux-generic/include/odp_internal.h @@ -53,6 +53,7 @@ enum init_stage { CPUMASK_INIT, TIME_INIT, SYSINFO_INIT, + FDSERVER_INIT, SHM_INIT, THREAD_INIT, POOL_INIT, @@ -118,6 +119,9 @@ int odp_tm_term_global(void); int _odp_int_name_tbl_init_global(void); int _odp_int_name_tbl_term_global(void);
+int _odp_fdserver_init_global(void); +int _odp_fdserver_term_global(void); + int cpuinfo_parser(FILE *file, system_info_t *sysinfo); uint64_t odp_cpu_hz_current(int id);
diff --git a/platform/linux-generic/odp_init.c b/platform/linux-generic/odp_init.c index 77f4f8a..1129779 100644 --- a/platform/linux-generic/odp_init.c +++ b/platform/linux-generic/odp_init.c @@ -51,6 +51,12 @@ int odp_init_global(odp_instance_t *instance, } stage = SYSINFO_INIT;
+ if (_odp_fdserver_init_global()) { + ODP_ERR("ODP fdserver init failed.\n"); + goto init_failed; + } + stage = FDSERVER_INIT; + if (odp_shm_init_global()) { ODP_ERR("ODP shm init failed.\n"); goto init_failed; @@ -217,6 +223,13 @@ int _odp_term_global(enum init_stage stage) } /* Fall through */
+ case FDSERVER_INIT: + if (_odp_fdserver_term_global()) { + ODP_ERR("ODP fdserver term failed.\n"); + rc = -1; + } + /* Fall through */ + case SYSINFO_INIT: if (odp_system_info_term()) { ODP_ERR("ODP system info term failed.\n");
commit eda2ce7c6d9998155edde42617501cbaea5e03f5 Author: Christophe Milard christophe.milard@linaro.org Date: Sat Aug 20 09:45:51 2016 +0200
linux-gen: cosmetic changes on barrier
To please checkpatch before the copy to the drv interface.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Brian Brooks brian.brooks@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/barrier.h b/include/odp/api/spec/barrier.h index 678d39a..6de683c 100644 --- a/include/odp/api/spec/barrier.h +++ b/include/odp/api/spec/barrier.h @@ -4,7 +4,6 @@ * SPDX-License-Identifier: BSD-3-Clause */
- /** * @file * @@ -44,7 +43,6 @@ extern "C" { */ void odp_barrier_init(odp_barrier_t *barr, int count);
- /** * Synchronize thread execution on barrier. * Wait for all threads to arrive at the barrier until they are let loose again. diff --git a/platform/linux-generic/include/odp/api/plat/barrier_types.h b/platform/linux-generic/include/odp/api/plat/barrier_types.h index 440275e..00b383c 100644 --- a/platform/linux-generic/include/odp/api/plat/barrier_types.h +++ b/platform/linux-generic/include/odp/api/plat/barrier_types.h @@ -4,7 +4,6 @@ * SPDX-License-Identifier: BSD-3-Clause */
- /** * @file * diff --git a/platform/linux-generic/odp_barrier.c b/platform/linux-generic/odp_barrier.c index ef10f29..a2c6267 100644 --- a/platform/linux-generic/odp_barrier.c +++ b/platform/linux-generic/odp_barrier.c @@ -37,7 +37,7 @@ void odp_barrier_wait(odp_barrier_t *barrier) count = odp_atomic_fetch_inc_u32(&barrier->bar); wasless = count < barrier->count;
- if (count == 2*barrier->count-1) { + if (count == 2 * barrier->count - 1) { /* Wrap around *atomically* */ odp_atomic_sub_u32(&barrier->bar, 2 * barrier->count); } else {
commit 9b648c4cd201ff8fbcbaf12512482b0a8f952d8f Author: Maxim Uvarov maxim.uvarov@linaro.org Date: Thu Aug 11 17:14:19 2016 +0300
helper: cuckootable: add missing return codes
Add missing return codes for non-void functions.
Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org
diff --git a/helper/cuckootable.c b/helper/cuckootable.c index 91a73b4..b4fce6c 100644 --- a/helper/cuckootable.c +++ b/helper/cuckootable.c @@ -163,18 +163,19 @@ is_power_of_2(uint32_t n) odph_table_t odph_cuckoo_table_lookup(const char *name) { - odph_cuckoo_table_impl *tbl = NULL; + odph_cuckoo_table_impl *tbl;
if (name == NULL || strlen(name) >= ODPH_TABLE_NAME_LEN) return NULL;
tbl = (odph_cuckoo_table_impl *)odp_shm_addr(odp_shm_lookup(name)); + if (!tbl || tbl->magicword != ODPH_CUCKOO_TABLE_MAGIC_WORD) + return NULL;
- if ( - tbl != NULL && - tbl->magicword == ODPH_CUCKOO_TABLE_MAGIC_WORD && - strcmp(tbl->name, name) == 0) - return (odph_table_t)tbl; + if (strcmp(tbl->name, name)) + return NULL; + + return (odph_table_t)tbl; }
odph_table_t @@ -311,6 +312,9 @@ odph_cuckoo_table_destroy(odph_table_t tbl) int ret, i, j; odph_cuckoo_table_impl *impl = NULL; char pool_name[ODPH_TABLE_NAME_LEN + 3]; + odp_event_t ev; + odp_shm_t shm; + odp_pool_t pool;
if (tbl == NULL) return -1; @@ -333,8 +337,6 @@ odph_cuckoo_table_destroy(odph_table_t tbl) }
/* free all free buffers */ - odp_event_t ev; - while ((ev = odp_queue_deq(impl->free_slots)) != ODP_EVENT_INVALID) { odp_buffer_free(odp_buffer_from_event(ev)); @@ -347,14 +349,26 @@ odph_cuckoo_table_destroy(odph_table_t tbl)
/* destroy key-value pool */ snprintf(pool_name, sizeof(pool_name), "kv_%s", impl->name); - ret = odp_pool_destroy(odp_pool_lookup(pool_name)); + pool = odp_pool_lookup(pool_name); + if (pool == ODP_POOL_INVALID) { + ODPH_DBG("invalid pool\n"); + return -1; + } + + ret = odp_pool_destroy(pool); if (ret != 0) { ODPH_DBG("failed to destroy key-value buffer pool\n"); - return ret; + return -1; }
/* free impl */ - odp_shm_free(odp_shm_lookup(impl->name)); + shm = odp_shm_lookup(impl->name); + if (shm == ODP_SHM_INVALID) { + ODPH_DBG("unable look up shm\n"); + return -1; + } + + return odp_shm_free(shm); }
static uint32_t hash(const odph_cuckoo_table_impl *h, const void *key)
commit f62c9aa3f3f48080a61a1e71cd649f1d65539ff5 Author: Bill Fischofer bill.fischofer@linaro.org Date: Tue Jul 12 21:27:05 2016 -0500
api: byteorder: avoid bitfield order doxygen omissions
Resolve Bug https://bugs.linaro.org/show_bug.cgi?id=2402 by assigning explicit values to ODP_LITTLE_ENDIAN_BITFIELD and ODP_BIG_ENDIAN_BITFIELD to avoid Doxygen warnings. This makes these consistent with the usage for ODP_BIG_ENDIAN and ODP_LITTLE_ENDIAN. Also define the summary macro ODP_BITFIELD_ORDER, which can be used similarly to ODP_BYTE_ORDER for an explicit test of bitfield endianness.
Note that this requires tests of these macros to change from #ifdef to #if.
Signed-off-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Petri Savolainen petri.savolainen@nokia.com Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
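For illustration, a minimal sketch (not part of the patch; the type name is hypothetical) of how a bitfield layout is now selected with #if instead of #ifdef, since both macros are always defined; ODP_BITFIELD_ORDER can likewise be compared directly, as ODP_BYTE_ORDER already is:

    #include <stdint.h>
    #include <odp_api.h>

    typedef union {
            uint8_t byte;
    #if ODP_BIG_ENDIAN_BITFIELD
            struct {
                    uint8_t hi:4;
                    uint8_t lo:4;
            };
    #elif ODP_LITTLE_ENDIAN_BITFIELD
            struct {
                    uint8_t lo:4;
                    uint8_t hi:4;
            };
    #endif
    } nibble_pair_t;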
diff --git a/helper/include/odp/helper/tcp.h b/helper/include/odp/helper/tcp.h index cabef90..fd234e5 100644 --- a/helper/include/odp/helper/tcp.h +++ b/helper/include/odp/helper/tcp.h @@ -34,7 +34,7 @@ typedef struct ODP_PACKED { odp_u32be_t ack_no; /**< Acknowledgment number */ union { odp_u16be_t doffset_flags; -#if defined(ODP_BIG_ENDIAN_BITFIELD) +#if ODP_BIG_ENDIAN_BITFIELD struct { odp_u16be_t rsvd1:8; odp_u16be_t flags:8; /**< TCP flags as a byte */ @@ -51,7 +51,7 @@ typedef struct ODP_PACKED { odp_u16be_t syn:1; odp_u16be_t fin:1; }; -#elif defined(ODP_LITTLE_ENDIAN_BITFIELD) +#elif ODP_LITTLE_ENDIAN_BITFIELD struct { odp_u16be_t flags:8; odp_u16be_t rsvd1:8; /**< TCP flags as a byte */ diff --git a/include/odp/api/spec/byteorder.h b/include/odp/api/spec/byteorder.h index 802a015..2899adb 100644 --- a/include/odp/api/spec/byteorder.h +++ b/include/odp/api/spec/byteorder.h @@ -39,6 +39,9 @@ extern "C" { * * @def ODP_BYTE_ORDER * Selected byte order + * + * @def ODP_BITFIELD_ORDER + * Selected bitfield order */
/** diff --git a/platform/linux-generic/include/odp/api/plat/byteorder_types.h b/platform/linux-generic/include/odp/api/plat/byteorder_types.h index 679d4cf..09235b5 100644 --- a/platform/linux-generic/include/odp/api/plat/byteorder_types.h +++ b/platform/linux-generic/include/odp/api/plat/byteorder_types.h @@ -52,12 +52,16 @@ extern "C" { #define ODP_LITTLE_ENDIAN 1 #define ODP_BIG_ENDIAN 0 #define ODP_BYTE_ORDER ODP_LITTLE_ENDIAN - #define ODP_LITTLE_ENDIAN_BITFIELD + #define ODP_LITTLE_ENDIAN_BITFIELD 1 + #define ODP_BIG_ENDIAN_BITFIELD 0 + #define ODP_BITFIELD_ORDER ODP_LITTLE_ENDIAN_BITFIELD #else #define ODP_LITTLE_ENDIAN 0 #define ODP_BIG_ENDIAN 1 #define ODP_BYTE_ORDER ODP_BIG_ENDIAN - #define ODP_BIG_ENDIAN_BITFIELD + #define ODP_LITTLE_ENDIAN_BITFIELD 0 + #define ODP_BIG_ENDIAN_BITFIELD 1 + #define ODP_BITFIELD_ORDER ODP_BIG_ENDIAN_BITFIELD #endif
typedef uint16_t __odp_bitwise odp_u16le_t; diff --git a/platform/linux-generic/include/protocols/tcp.h b/platform/linux-generic/include/protocols/tcp.h index 4e92e4b..114262e 100644 --- a/platform/linux-generic/include/protocols/tcp.h +++ b/platform/linux-generic/include/protocols/tcp.h @@ -34,7 +34,7 @@ typedef struct ODP_PACKED { odp_u32be_t ack_no; /**< Acknowledgment number */ union { odp_u16be_t doffset_flags; -#if defined(ODP_BIG_ENDIAN_BITFIELD) +#if ODP_BIG_ENDIAN_BITFIELD struct { odp_u16be_t rsvd1:8; odp_u16be_t flags:8; /**< TCP flags as a byte */ @@ -51,7 +51,7 @@ typedef struct ODP_PACKED { odp_u16be_t syn:1; odp_u16be_t fin:1; }; -#elif defined(ODP_LITTLE_ENDIAN_BITFIELD) +#elif ODP_LITTLE_ENDIAN_BITFIELD struct { odp_u16be_t flags:8; odp_u16be_t rsvd1:8; /**< TCP flags as a byte */
commit 244916315fdcb665f66b4f235782feee26a53c89 Author: Christophe Milard christophe.milard@linaro.org Date: Fri Jul 22 15:19:17 2016 +0200
helper: test: gitignore add iplookuptable
This was obviously missed in commit c4aefb88d31452b3add8cf16f9eef152525c3e93.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/helper/test/.gitignore b/helper/test/.gitignore index 482fdb5..e5b6a0f 100644 --- a/helper/test/.gitignore +++ b/helper/test/.gitignore @@ -2,6 +2,7 @@ *.log chksum cuckootable +iplookuptable odpthreads parse process
commit e5d2edccc685fcde88793f5514d1fdb2654ecfa4 Author: Mike Holmes mike.holmes@linaro.org Date: Mon Jul 11 12:46:50 2016 -0400
doc: driver-guide: initial revision
Add an initial driver interface document structure for the existing driver framework.
Signed-off-by: Mike Holmes mike.holmes@linaro.org Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Reviewed-by: Christophe Milard christophe.milard@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/configure.ac b/configure.ac index 54d62f7..3a20959 100644 --- a/configure.ac +++ b/configure.ac @@ -215,7 +215,9 @@ DX_INIT_DOXYGEN($PACKAGE_NAME, ${srcdir}/doc/helper-guide/Doxyfile, ${builddir}/doc/helper-guide/output, ${srcdir}/doc/platform-api-guide/Doxyfile, - ${builddir}/doc/platform-api-guide/output) + ${builddir}/doc/platform-api-guide/output, + ${srcdir}/doc/driver-api-guide/Doxyfile, + ${builddir}/doc/driver-api-guide/output)
########################################################################## # Enable/disable ODP_DEBUG_PRINT diff --git a/doc/Makefile.am b/doc/Makefile.am index d49d84b..59d6a6c 100644 --- a/doc/Makefile.am +++ b/doc/Makefile.am @@ -1,4 +1,8 @@ -SUBDIRS = application-api-guide helper-guide platform-api-guide +SUBDIRS = \ + application-api-guide \ + helper-guide \ + platform-api-guide \ + driver-api-guide
if user_guide SUBDIRS += implementers-guide users-guide process-guide diff --git a/doc/driver-api-guide/.gitignore b/doc/driver-api-guide/.gitignore new file mode 100644 index 0000000..53752db --- /dev/null +++ b/doc/driver-api-guide/.gitignore @@ -0,0 +1 @@ +output diff --git a/doc/driver-api-guide/Doxyfile b/doc/driver-api-guide/Doxyfile new file mode 100644 index 0000000..680d1d4 --- /dev/null +++ b/doc/driver-api-guide/Doxyfile @@ -0,0 +1,14 @@ +@INCLUDE = $(SRCDIR)/doc/Doxyfile_common + +PROJECT_NAME = "Driver Interface (drv) Reference Manual" +PROJECT_NUMBER = $(VERSION) +PROJECT_LOGO = $(SRCDIR)/doc/images/ODP-Logo-HQ.svg +INPUT = $(SRCDIR)/doc/driver-api-guide \ + $(SRCDIR)/include/odp/drv \ + $(SRCDIR)/include/odp_drv.h +EXCLUDE_PATTERNS = drv* odp_drv.h +EXAMPLE_PATH = $(SRCDIR)/example $(SRCDIR) +PREDEFINED = __GNUC__ \ + "ODP_HANDLE_T(type)=odp_handle_t type" \ + odpdrv_bool_t=int +WARNINGS = NO diff --git a/doc/driver-api-guide/Makefile.am b/doc/driver-api-guide/Makefile.am new file mode 100644 index 0000000..4fc4755 --- /dev/null +++ b/doc/driver-api-guide/Makefile.am @@ -0,0 +1,5 @@ +EXTRA_DIST = \ + odp.dox + +clean-local: + rm -rf output diff --git a/doc/driver-api-guide/odp.dox b/doc/driver-api-guide/odp.dox new file mode 100644 index 0000000..687a79e --- /dev/null +++ b/doc/driver-api-guide/odp.dox @@ -0,0 +1,20 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/** + * @mainpage + * + * @section sec_1 Introduction + * + * OpenDataPlane (ODP) provides a driver interface + + * + * @section contact Contact Details + * - The main web site is http://www.opendataplane.org/ + * - The git repo is https://git.linaro.org/lng/odp.git + * - Bug tracking is https://bugs.linaro.org/buglist.cgi?product=OpenDataPlane + * + */ diff --git a/doc/m4/configure.m4 b/doc/m4/configure.m4 index ed9451d..6e02f76 100644 --- a/doc/m4/configure.m4 +++ b/doc/m4/configure.m4 @@ -42,4 +42,5 @@ AC_CONFIG_FILES([doc/application-api-guide/Makefile doc/Makefile doc/platform-api-guide/Makefile doc/process-guide/Makefile - doc/users-guide/Makefile]) + doc/users-guide/Makefile + doc/driver-api-guide/Makefile])
commit 52553fa3972109a47e948321236dcabf29503f61 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Jul 21 14:06:24 2016 +0200
linux-generic: cosmetic changes on spinlock
To please checkpatch before the copy to the drv interface.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Mike Holmes mike.holmes@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/spinlock.h b/include/odp/api/spec/spinlock.h index 87f9b83..11b7339 100644 --- a/include/odp/api/spec/spinlock.h +++ b/include/odp/api/spec/spinlock.h @@ -4,7 +4,6 @@ * SPDX-License-Identifier: BSD-3-Clause */
- /** * @file * @@ -41,7 +40,6 @@ extern "C" { */ void odp_spinlock_init(odp_spinlock_t *splock);
- /** * Acquire spin lock. * @@ -49,7 +47,6 @@ void odp_spinlock_init(odp_spinlock_t *splock); */ void odp_spinlock_lock(odp_spinlock_t *splock);
- /** * Try to acquire spin lock. * @@ -60,7 +57,6 @@ void odp_spinlock_lock(odp_spinlock_t *splock); */ int odp_spinlock_trylock(odp_spinlock_t *splock);
- /** * Release spin lock. * @@ -68,7 +64,6 @@ int odp_spinlock_trylock(odp_spinlock_t *splock); */ void odp_spinlock_unlock(odp_spinlock_t *splock);
- /** * Check if spin lock is busy (locked). * @@ -79,8 +74,6 @@ void odp_spinlock_unlock(odp_spinlock_t *splock); */ int odp_spinlock_is_locked(odp_spinlock_t *splock);
- - /** * @} */ diff --git a/platform/linux-generic/odp_spinlock.c b/platform/linux-generic/odp_spinlock.c index 6fc138b..cb0f053 100644 --- a/platform/linux-generic/odp_spinlock.c +++ b/platform/linux-generic/odp_spinlock.c @@ -13,7 +13,6 @@ void odp_spinlock_init(odp_spinlock_t *spinlock) _odp_atomic_flag_init(&spinlock->lock, 0); }
- void odp_spinlock_lock(odp_spinlock_t *spinlock) { /* While the lock is already taken... */ @@ -25,19 +24,16 @@ void odp_spinlock_lock(odp_spinlock_t *spinlock) odp_cpu_pause(); }
- int odp_spinlock_trylock(odp_spinlock_t *spinlock) { return (_odp_atomic_flag_tas(&spinlock->lock) == 0); }
- void odp_spinlock_unlock(odp_spinlock_t *spinlock) { _odp_atomic_flag_clear(&spinlock->lock); }
- int odp_spinlock_is_locked(odp_spinlock_t *spinlock) { return _odp_atomic_flag_load(&spinlock->lock) != 0;
commit 4b751b4dd7e2f000d2ed0268d51878c5ff982c2b Author: Christophe Milard christophe.milard@linaro.org Date: Thu Jul 21 14:06:21 2016 +0200
linux-generic: cosmetic changes on atomic
To please checkpatch before the copy to the drv interface.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Mike Holmes mike.holmes@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/atomic.h b/include/odp/api/spec/atomic.h index b8d992d..408829d 100644 --- a/include/odp/api/spec/atomic.h +++ b/include/odp/api/spec/atomic.h @@ -4,7 +4,6 @@ * SPDX-License-Identifier: BSD-3-Clause */
- /** * @file * diff --git a/platform/linux-generic/include/odp/api/plat/atomic_types.h b/platform/linux-generic/include/odp/api/plat/atomic_types.h index 33a0565..a674ac9 100644 --- a/platform/linux-generic/include/odp/api/plat/atomic_types.h +++ b/platform/linux-generic/include/odp/api/plat/atomic_types.h @@ -4,7 +4,6 @@ * SPDX-License-Identifier: BSD-3-Clause */
- /** * @file *
commit 21dad81b51443c49dc51ede63516f4a434e217cc Author: Christophe Milard christophe.milard@linaro.org Date: Thu Jul 21 14:06:14 2016 +0200
linux-generic: cosmetic changes on sync files
To satisfy checkpatch before these files are copied to the drv interface.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Mike Holmes mike.holmes@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/include/odp/api/spec/sync.h b/include/odp/api/spec/sync.h index b48e0ab..6f87db5 100644 --- a/include/odp/api/spec/sync.h +++ b/include/odp/api/spec/sync.h @@ -4,7 +4,6 @@ * SPDX-License-Identifier: BSD-3-Clause */
- /** * @file *
commit 474dac39b8d4ad7f2ac3768887e36610188c16c2 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Jul 21 14:06:10 2016 +0200
linux-generic: moving the visibility files one step up
include/odp/api/visibility_begin.h and include/odp/api/visibility_end.h move one level up (to include/odp/) so that they can also be used by interfaces other than the api.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Mike Holmes mike.holmes@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
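For illustration of what the move enables (this sketch is not part of the patch; the header name and function below are hypothetical), a future drv-interface header could now wrap its declarations with the relocated visibility files in the same way the api headers do:

/* Hypothetical drv header reusing the relocated visibility wrappers. */
#ifndef ODPDRV_EXAMPLE_H_
#define ODPDRV_EXAMPLE_H_
#include <odp/visibility_begin.h>

#ifdef __cplusplus
extern "C" {
#endif

/* hypothetical function, shown only to illustrate the include pattern */
int odpdrv_example(void);

#ifdef __cplusplus
}
#endif

#include <odp/visibility_end.h>
#endif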
diff --git a/include/odp/api/spec/align.h b/include/odp/api/spec/align.h index cbe7d67..fdf8c29 100644 --- a/include/odp/api/spec/align.h +++ b/include/odp/api/spec/align.h @@ -13,7 +13,7 @@
#ifndef ODP_API_ALIGN_H_ #define ODP_API_ALIGN_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -75,5 +75,5 @@ extern "C" { } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/atomic.h b/include/odp/api/spec/atomic.h index 36c50cb..b8d992d 100644 --- a/include/odp/api/spec/atomic.h +++ b/include/odp/api/spec/atomic.h @@ -13,7 +13,7 @@
#ifndef ODP_API_ATOMIC_H_ #define ODP_API_ATOMIC_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -629,5 +629,5 @@ int odp_atomic_lock_free_u64(odp_atomic_op_t *atomic_op); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/barrier.h b/include/odp/api/spec/barrier.h index fbd1072..678d39a 100644 --- a/include/odp/api/spec/barrier.h +++ b/include/odp/api/spec/barrier.h @@ -13,7 +13,7 @@
#ifndef ODP_API_BARRIER_H_ #define ODP_API_BARRIER_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -64,5 +64,5 @@ void odp_barrier_wait(odp_barrier_t *barr); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/buffer.h b/include/odp/api/spec/buffer.h index 5c632b5..94829b3 100644 --- a/include/odp/api/spec/buffer.h +++ b/include/odp/api/spec/buffer.h @@ -13,7 +13,7 @@
#ifndef ODP_API_BUFFER_H_ #define ODP_API_BUFFER_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -169,5 +169,5 @@ uint64_t odp_buffer_to_u64(odp_buffer_t hdl); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/byteorder.h b/include/odp/api/spec/byteorder.h index 1018997..802a015 100644 --- a/include/odp/api/spec/byteorder.h +++ b/include/odp/api/spec/byteorder.h @@ -13,7 +13,7 @@
#ifndef ODP_API_BYTEORDER_H_ #define ODP_API_BYTEORDER_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -178,5 +178,5 @@ odp_u64le_t odp_cpu_to_le_64(uint64_t cpu64); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/classification.h b/include/odp/api/spec/classification.h index 523a8c4..189c91f 100644 --- a/include/odp/api/spec/classification.h +++ b/include/odp/api/spec/classification.h @@ -13,7 +13,7 @@
#ifndef ODP_API_CLASSIFY_H_ #define ODP_API_CLASSIFY_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -499,5 +499,5 @@ uint64_t odp_pmr_to_u64(odp_pmr_t hdl); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/compiler.h b/include/odp/api/spec/compiler.h index d271e90..c88350e 100644 --- a/include/odp/api/spec/compiler.h +++ b/include/odp/api/spec/compiler.h @@ -13,7 +13,7 @@
#ifndef ODP_API_COMPILER_H_ #define ODP_API_COMPILER_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -49,5 +49,5 @@ extern "C" { } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/cpu.h b/include/odp/api/spec/cpu.h index 2789511..0f47e47 100644 --- a/include/odp/api/spec/cpu.h +++ b/include/odp/api/spec/cpu.h @@ -13,7 +13,7 @@
#ifndef ODP_CPU_H_ #define ODP_CPU_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -177,5 +177,5 @@ void odp_cpu_pause(void); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/cpumask.h b/include/odp/api/spec/cpumask.h index 6e16fd0..22d8e8f 100644 --- a/include/odp/api/spec/cpumask.h +++ b/include/odp/api/spec/cpumask.h @@ -13,7 +13,7 @@
#ifndef ODP_API_CPUMASK_H_ #define ODP_API_CPUMASK_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -250,5 +250,5 @@ int odp_cpumask_all_available(odp_cpumask_t *mask); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/crypto.h b/include/odp/api/spec/crypto.h index dea1fe9..0cb8814 100644 --- a/include/odp/api/spec/crypto.h +++ b/include/odp/api/spec/crypto.h @@ -13,7 +13,7 @@
#ifndef ODP_API_CRYPTO_H_ #define ODP_API_CRYPTO_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -464,5 +464,5 @@ uint64_t odp_crypto_compl_to_u64(odp_crypto_compl_t hdl); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/debug.h b/include/odp/api/spec/debug.h index a49dff3..b3b170f 100644 --- a/include/odp/api/spec/debug.h +++ b/include/odp/api/spec/debug.h @@ -11,7 +11,7 @@
#ifndef ODP_API_DEBUG_H_ #define ODP_API_DEBUG_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -32,5 +32,5 @@ extern "C" { } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/errno.h b/include/odp/api/spec/errno.h index a1e7642..9b60a98 100644 --- a/include/odp/api/spec/errno.h +++ b/include/odp/api/spec/errno.h @@ -12,7 +12,7 @@
#ifndef ODP_ERRNO_H_ #define ODP_ERRNO_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -83,5 +83,5 @@ const char *odp_errno_str(int errnum); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/event.h b/include/odp/api/spec/event.h index 082768f..fdfa52d 100644 --- a/include/odp/api/spec/event.h +++ b/include/odp/api/spec/event.h @@ -13,7 +13,7 @@
#ifndef ODP_API_EVENT_H_ #define ODP_API_EVENT_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -83,5 +83,5 @@ void odp_event_free(odp_event_t event); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/hash.h b/include/odp/api/spec/hash.h index 07a0156..66b740e 100644 --- a/include/odp/api/spec/hash.h +++ b/include/odp/api/spec/hash.h @@ -12,7 +12,7 @@
#ifndef ODP_API_HASH_H_ #define ODP_API_HASH_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -96,5 +96,5 @@ int odp_hash_crc_gen64(const void *data, uint32_t data_len, } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/hints.h b/include/odp/api/spec/hints.h index ff5099c..82400f0 100644 --- a/include/odp/api/spec/hints.h +++ b/include/odp/api/spec/hints.h @@ -13,7 +13,7 @@
#ifndef ODP_API_HINTS_H_ #define ODP_API_HINTS_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -114,5 +114,5 @@ extern "C" { } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/init.h b/include/odp/api/spec/init.h index fec6774..154cdf8 100644 --- a/include/odp/api/spec/init.h +++ b/include/odp/api/spec/init.h @@ -21,7 +21,7 @@
#ifndef ODP_API_INIT_H_ #define ODP_API_INIT_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -277,5 +277,5 @@ int odp_term_local(void); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/packet.h b/include/odp/api/spec/packet.h index 522adb2..4a14f2d 100644 --- a/include/odp/api/spec/packet.h +++ b/include/odp/api/spec/packet.h @@ -13,7 +13,7 @@
#ifndef ODP_API_PACKET_H_ #define ODP_API_PACKET_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -1402,5 +1402,5 @@ uint64_t odp_packet_seg_to_u64(odp_packet_seg_t hdl); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/packet_flags.h b/include/odp/api/spec/packet_flags.h index c2998c1..377b75b 100644 --- a/include/odp/api/spec/packet_flags.h +++ b/include/odp/api/spec/packet_flags.h @@ -13,7 +13,7 @@
#ifndef ODP_API_PACKET_FLAGS_H_ #define ODP_API_PACKET_FLAGS_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -494,5 +494,5 @@ void odp_packet_has_ts_clr(odp_packet_t pkt); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/packet_io.h b/include/odp/api/spec/packet_io.h index c7373fd..d46e405 100644 --- a/include/odp/api/spec/packet_io.h +++ b/include/odp/api/spec/packet_io.h @@ -13,7 +13,7 @@
#ifndef ODP_API_PACKET_IO_H_ #define ODP_API_PACKET_IO_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -1074,5 +1074,5 @@ odp_time_t odp_pktin_ts_from_ns(odp_pktio_t pktio, uint64_t ns); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/packet_io_stats.h b/include/odp/api/spec/packet_io_stats.h index 73cf704..299ecd0 100644 --- a/include/odp/api/spec/packet_io_stats.h +++ b/include/odp/api/spec/packet_io_stats.h @@ -12,7 +12,7 @@
#ifndef ODP_API_PACKET_IO_STATS_H_ #define ODP_API_PACKET_IO_STATS_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -139,5 +139,5 @@ int odp_pktio_stats_reset(odp_pktio_t pktio); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/pool.h b/include/odp/api/spec/pool.h index b31b6aa..c80c98a 100644 --- a/include/odp/api/spec/pool.h +++ b/include/odp/api/spec/pool.h @@ -13,7 +13,7 @@
#ifndef ODP_API_POOL_H_ #define ODP_API_POOL_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -327,5 +327,5 @@ void odp_pool_param_init(odp_pool_param_t *param); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/queue.h b/include/odp/api/spec/queue.h index 92822da..31dc9f5 100644 --- a/include/odp/api/spec/queue.h +++ b/include/odp/api/spec/queue.h @@ -13,7 +13,7 @@
#ifndef ODP_API_QUEUE_H_ #define ODP_API_QUEUE_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -413,5 +413,5 @@ int odp_queue_info(odp_queue_t queue, odp_queue_info_t *info); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/random.h b/include/odp/api/spec/random.h index db77630..00fa15b 100644 --- a/include/odp/api/spec/random.h +++ b/include/odp/api/spec/random.h @@ -13,7 +13,7 @@
#ifndef ODP_API_RANDOM_H_ #define ODP_API_RANDOM_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -45,5 +45,5 @@ int32_t odp_random_data(uint8_t *buf, int32_t size, odp_bool_t use_entropy); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/rwlock.h b/include/odp/api/spec/rwlock.h index 2624b56..ff8a3f2 100644 --- a/include/odp/api/spec/rwlock.h +++ b/include/odp/api/spec/rwlock.h @@ -6,7 +6,7 @@
#ifndef ODP_API_RWLOCK_H_ #define ODP_API_RWLOCK_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
/** * @file @@ -100,5 +100,5 @@ void odp_rwlock_write_unlock(odp_rwlock_t *rwlock); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif /* ODP_RWLOCK_H_ */ diff --git a/include/odp/api/spec/rwlock_recursive.h b/include/odp/api/spec/rwlock_recursive.h index 9d50f20..1c19c72 100644 --- a/include/odp/api/spec/rwlock_recursive.h +++ b/include/odp/api/spec/rwlock_recursive.h @@ -12,7 +12,7 @@
#ifndef ODP_API_RWLOCK_RECURSIVE_H_ #define ODP_API_RWLOCK_RECURSIVE_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -118,5 +118,5 @@ void odp_rwlock_recursive_write_unlock(odp_rwlock_recursive_t *lock); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/schedule.h b/include/odp/api/spec/schedule.h index f8fed17..d924da2 100644 --- a/include/odp/api/spec/schedule.h +++ b/include/odp/api/spec/schedule.h @@ -13,7 +13,7 @@
#ifndef ODP_API_SCHEDULE_H_ #define ODP_API_SCHEDULE_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -375,5 +375,5 @@ void odp_schedule_order_unlock(unsigned lock_index); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/schedule_types.h b/include/odp/api/spec/schedule_types.h index b7c1980..8a4e42c 100644 --- a/include/odp/api/spec/schedule_types.h +++ b/include/odp/api/spec/schedule_types.h @@ -12,7 +12,7 @@
#ifndef ODP_API_SCHEDULE_TYPES_H_ #define ODP_API_SCHEDULE_TYPES_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -157,5 +157,5 @@ typedef struct odp_schedule_param_t { } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/shared_memory.h b/include/odp/api/spec/shared_memory.h index fbe0fde..8c76807 100644 --- a/include/odp/api/spec/shared_memory.h +++ b/include/odp/api/spec/shared_memory.h @@ -13,7 +13,7 @@
#ifndef ODP_API_SHARED_MEMORY_H_ #define ODP_API_SHARED_MEMORY_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -187,5 +187,5 @@ uint64_t odp_shm_to_u64(odp_shm_t hdl); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/spinlock.h b/include/odp/api/spec/spinlock.h index 8263171..87f9b83 100644 --- a/include/odp/api/spec/spinlock.h +++ b/include/odp/api/spec/spinlock.h @@ -13,7 +13,7 @@
#ifndef ODP_API_SPINLOCK_H_ #define ODP_API_SPINLOCK_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -89,5 +89,5 @@ int odp_spinlock_is_locked(odp_spinlock_t *splock); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/spinlock_recursive.h b/include/odp/api/spec/spinlock_recursive.h index 07829fd..c9c7ddb 100644 --- a/include/odp/api/spec/spinlock_recursive.h +++ b/include/odp/api/spec/spinlock_recursive.h @@ -12,7 +12,7 @@
#ifndef ODP_API_SPINLOCK_RECURSIVE_H_ #define ODP_API_SPINLOCK_RECURSIVE_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -83,5 +83,5 @@ int odp_spinlock_recursive_is_locked(odp_spinlock_recursive_t *lock); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/std_clib.h b/include/odp/api/spec/std_clib.h index 772732c..33e9db5 100644 --- a/include/odp/api/spec/std_clib.h +++ b/include/odp/api/spec/std_clib.h @@ -12,7 +12,7 @@
#ifndef ODP_API_STD_CLIB_H_ #define ODP_API_STD_CLIB_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -80,5 +80,5 @@ int odp_memcmp(const void *ptr1, const void *ptr2, size_t num); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/std_types.h b/include/odp/api/spec/std_types.h index 47018d5..ec6a6df 100644 --- a/include/odp/api/spec/std_types.h +++ b/include/odp/api/spec/std_types.h @@ -14,7 +14,7 @@
#ifndef ODP_API_STD_TYPES_H_ #define ODP_API_STD_TYPES_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -39,5 +39,5 @@ extern "C" { } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/sync.h b/include/odp/api/spec/sync.h index 84b7cb9..b48e0ab 100644 --- a/include/odp/api/spec/sync.h +++ b/include/odp/api/spec/sync.h @@ -13,7 +13,7 @@
#ifndef ODP_API_SYNC_H_ #define ODP_API_SYNC_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -88,5 +88,5 @@ void odp_mb_full(void); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/system_info.h b/include/odp/api/spec/system_info.h index c5a5fd0..0bb4f1f 100644 --- a/include/odp/api/spec/system_info.h +++ b/include/odp/api/spec/system_info.h @@ -13,7 +13,7 @@
#ifndef ODP_API_SYSTEM_INFO_H_ #define ODP_API_SYSTEM_INFO_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -52,5 +52,5 @@ int odp_sys_cache_line_size(void); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/thread.h b/include/odp/api/spec/thread.h index 6e2a817..689ba59 100644 --- a/include/odp/api/spec/thread.h +++ b/include/odp/api/spec/thread.h @@ -13,7 +13,7 @@
#ifndef ODP_API_THREAD_H_ #define ODP_API_THREAD_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -110,5 +110,5 @@ odp_thread_type_t odp_thread_type(void); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/thrmask.h b/include/odp/api/spec/thrmask.h index 73f3866..3986769 100644 --- a/include/odp/api/spec/thrmask.h +++ b/include/odp/api/spec/thrmask.h @@ -12,7 +12,7 @@
#ifndef ODP_API_THRMASK_H_ #define ODP_API_THRMASK_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -237,5 +237,5 @@ int odp_thrmask_control(odp_thrmask_t *mask); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/ticketlock.h b/include/odp/api/spec/ticketlock.h index d485565..b23253b 100644 --- a/include/odp/api/spec/ticketlock.h +++ b/include/odp/api/spec/ticketlock.h @@ -13,7 +13,7 @@
#ifndef ODP_API_TICKETLOCK_H_ #define ODP_API_TICKETLOCK_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -88,5 +88,5 @@ int odp_ticketlock_is_locked(odp_ticketlock_t *tklock); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/time.h b/include/odp/api/spec/time.h index a78fc2c..fcc94c9 100644 --- a/include/odp/api/spec/time.h +++ b/include/odp/api/spec/time.h @@ -13,7 +13,7 @@
#ifndef ODP_API_TIME_H_ #define ODP_API_TIME_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -178,5 +178,5 @@ uint64_t odp_time_to_u64(odp_time_t time); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/timer.h b/include/odp/api/spec/timer.h index 3f8fdd4..df37189 100644 --- a/include/odp/api/spec/timer.h +++ b/include/odp/api/spec/timer.h @@ -13,7 +13,7 @@
#ifndef ODP_API_TIMER_H_ #define ODP_API_TIMER_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -413,5 +413,5 @@ uint64_t odp_timeout_to_u64(odp_timeout_t hdl); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/traffic_mngr.h b/include/odp/api/spec/traffic_mngr.h index 3473648..71198bb 100644 --- a/include/odp/api/spec/traffic_mngr.h +++ b/include/odp/api/spec/traffic_mngr.h @@ -6,7 +6,7 @@
#ifndef ODP_TRAFFIC_MNGR_H_ #define ODP_TRAFFIC_MNGR_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -1961,5 +1961,5 @@ void odp_tm_stats_print(odp_tm_t odp_tm); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/include/odp/api/spec/version.h.in b/include/odp/api/spec/version.h.in index 4b16dcc..f5e9e9c 100644 --- a/include/odp/api/spec/version.h.in +++ b/include/odp/api/spec/version.h.in @@ -13,7 +13,7 @@
#ifndef ODP_API_VERSION_H_ #define ODP_API_VERSION_H_ -#include <odp/api/visibility_begin.h> +#include <odp/visibility_begin.h>
#ifdef __cplusplus extern "C" { @@ -103,5 +103,5 @@ const char *odp_version_impl_str(void); } #endif
-#include <odp/api/visibility_end.h> +#include <odp/visibility_end.h> #endif diff --git a/platform/linux-generic/Makefile.am b/platform/linux-generic/Makefile.am index 1ccb437..8494569 100644 --- a/platform/linux-generic/Makefile.am +++ b/platform/linux-generic/Makefile.am @@ -13,6 +13,11 @@ include_HEADERS = \ $(top_srcdir)/include/odp.h \ $(top_srcdir)/include/odp_api.h
+odpincludedir= $(includedir)/odp +odpinclude_HEADERS = \ + $(srcdir)/include/odp/visibility_begin.h \ + $(srcdir)/include/odp/visibility_end.h + odpapiincludedir= $(includedir)/odp/api odpapiinclude_HEADERS = \ $(srcdir)/include/odp/api/align.h \ @@ -56,8 +61,6 @@ odpapiinclude_HEADERS = \ $(srcdir)/include/odp/api/timer.h \ $(srcdir)/include/odp/api/traffic_mngr.h \ $(srcdir)/include/odp/api/version.h \ - $(srcdir)/include/odp/api/visibility_begin.h \ - $(srcdir)/include/odp/api/visibility_end.h \ $(srcdir)/arch/@ARCH_DIR@/odp/api/cpu_arch.h
odpapiplatincludedir= $(includedir)/odp/api/plat diff --git a/platform/linux-generic/include/odp/api/visibility_begin.h b/platform/linux-generic/include/odp/visibility_begin.h similarity index 100% rename from platform/linux-generic/include/odp/api/visibility_begin.h rename to platform/linux-generic/include/odp/visibility_begin.h diff --git a/platform/linux-generic/include/odp/api/visibility_end.h b/platform/linux-generic/include/odp/visibility_end.h similarity index 100% rename from platform/linux-generic/include/odp/api/visibility_end.h rename to platform/linux-generic/include/odp/visibility_end.h
commit b8c6689e4c81277067a2dd5f30598e0ddc7dc2c5 Author: Christophe Milard christophe.milard@linaro.org Date: Thu Jul 21 14:06:09 2016 +0200
linux-generic: Makefile: reintroducing lost change for drv
The Makefile change made in commit id 1fcd2369be88a6f4f7a7a93e9bb315d0e65ab128, which was subsequently deleted, is reintroduced here.
Signed-off-by: Christophe Milard christophe.milard@linaro.org Reviewed-and-tested-by: Mike Holmes mike.holmes@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/platform/linux-generic/Makefile.am b/platform/linux-generic/Makefile.am index 22cf6f3..1ccb437 100644 --- a/platform/linux-generic/Makefile.am +++ b/platform/linux-generic/Makefile.am @@ -96,6 +96,10 @@ odpapiplatinclude_HEADERS = \ $(srcdir)/include/odp/api/plat/traffic_mngr_types.h \ $(srcdir)/include/odp/api/plat/version_types.h
+odpdrvincludedir = $(includedir)/odp/drv +odpdrvinclude_HEADERS = \ + $(srcdir)/include/odp/drv/compiler.h + noinst_HEADERS = \ ${srcdir}/include/odp_align_internal.h \ ${srcdir}/include/odp_atomic_internal.h \
commit ff56734fa58db5043a6bb358611cdc2a1c5de4a3 Author: Ru Jia jiaru@ict.ac.cn Date: Thu Jun 30 16:15:32 2016 +0800
helper: test: add validation test of ip lookup table
Signed-off-by: Ru Jia jiaru@ict.ac.cn Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/helper/test/Makefile.am b/helper/test/Makefile.am index f7aa7e7..2bf6765 100644 --- a/helper/test/Makefile.am +++ b/helper/test/Makefile.am @@ -10,7 +10,8 @@ EXECUTABLES = chksum$(EXEEXT) \ table$(EXEEXT) \ thread$(EXEEXT) \ parse$(EXEEXT)\ - process$(EXEEXT) + process$(EXEEXT) \ + iplookuptable$(EXEEXT)
COMPILE_ONLY = odpthreads
@@ -37,3 +38,4 @@ dist_process_SOURCES = process.c dist_parse_SOURCES = parse.c process_LDADD = $(LIB)/libodphelper-linux.la $(LIB)/libodp-linux.la dist_table_SOURCES = table.c +dist_iplookuptable_SOURCES = iplookuptable.c diff --git a/helper/test/iplookuptable.c b/helper/test/iplookuptable.c new file mode 100644 index 0000000..e1d2820 --- /dev/null +++ b/helper/test/iplookuptable.c @@ -0,0 +1,174 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#include <stdio.h> +#include <stdint.h> +#include <string.h> +#include <stdlib.h> +#include <errno.h> + +#include <odp_api.h> +#include <test_debug.h> +#include <../odph_iplookuptable.h> +#include <odp/helper/ip.h> + +static void print_prefix_info( + const char *msg, uint32_t ip, uint8_t cidr) +{ + int i = 0; + uint8_t *ptr = (uint8_t *)(&ip); + + printf("%s IP prefix: ", msg); + for (i = 3; i >= 0; i--) { + if (i != 3) + printf("."); + printf("%d", ptr[i]); + } + printf("/%d\n", cidr); +} + +/* + * Basic sequence of operations for a single key: + * - put short prefix + * - put long prefix + * - get (hit long prefix) + * - remove long prefix + * - get (hit short prefix) + */ +static int test_ip_lookup_table(void) +{ + odph_iplookup_prefix_t prefix1, prefix2; + odph_table_t table; + int ret; + uint64_t value1 = 1, value2 = 2, result = 0; + uint32_t lkp_ip = 0; + + table = odph_iplookup_table_create( + "prefix_test", 0, 0, sizeof(uint32_t)); + if (table == NULL) { + printf("IP prefix lookup table creation failed\n"); + return -1; + } + + ret = odph_ipv4_addr_parse(&prefix1.ip, "192.168.0.0"); + if (ret < 0) { + printf("Failed to get IP addr from str\n"); + odph_iplookup_table_destroy(table); + return -1; + } + prefix1.cidr = 11; + + ret = odph_ipv4_addr_parse(&prefix2.ip, "192.168.0.0"); + if (ret < 0) { + printf("Failed to get IP addr from str\n"); + odph_iplookup_table_destroy(table); + return -1; + } + prefix2.cidr = 24; + + ret = odph_ipv4_addr_parse(&lkp_ip, "192.168.0.1"); + if (ret < 0) { + printf("Failed to get IP addr from str\n"); + odph_iplookup_table_destroy(table); + return -1; + } + + /* test with standard put/get/remove functions */ + ret = odph_iplookup_table_put_value(table, &prefix1, &value1); + print_prefix_info("Add", prefix1.ip, prefix1.cidr); + if (ret < 0) { + printf("Failed to add ip prefix\n"); + odph_iplookup_table_destroy(table); + return -1; + } + + ret = odph_iplookup_table_get_value(table, &lkp_ip, &result, 0); + print_prefix_info("Lkp", lkp_ip, 32); + if (ret < 0 || result != 1) { + printf("Failed to find longest prefix\n"); + odph_iplookup_table_destroy(table); + return -1; + } + + /* add a longer prefix */ + ret = odph_iplookup_table_put_value(table, &prefix2, &value2); + print_prefix_info("Add", prefix2.ip, prefix2.cidr); + if (ret < 0) { + printf("Failed to add ip prefix\n"); + odph_iplookup_table_destroy(table); + return -1; + } + + ret = odph_iplookup_table_get_value(table, &lkp_ip, &result, 0); + print_prefix_info("Lkp", lkp_ip, 32); + if (ret < 0 || result != 2) { + printf("Failed to find longest prefix\n"); + odph_iplookup_table_destroy(table); + return -1; + } + + ret = odph_iplookup_table_remove_value(table, &prefix2); + print_prefix_info("Del", prefix2.ip, prefix2.cidr); + if (ret < 0) { + printf("Failed to delete ip prefix\n"); + odph_iplookup_table_destroy(table); + return -1; + } + + ret = odph_iplookup_table_get_value(table, &lkp_ip, &result, 0); + print_prefix_info("Lkp", lkp_ip, 32); + if (ret < 0 || result != 1) { + printf("Error: 
found result after deleting\n"); + odph_iplookup_table_destroy(table); + return -1; + } + + ret = odph_iplookup_table_remove_value(table, &prefix1); + print_prefix_info("Del", prefix1.ip, prefix1.cidr); + if (ret < 0) { + printf("Failed to delete prefix\n"); + odph_iplookup_table_destroy(table); + return -1; + } + + odph_iplookup_table_destroy(table); + return 0; +} + +int main(int argc TEST_UNUSED, char *argv[] TEST_UNUSED) +{ + odp_instance_t instance; + int ret = 0; + + ret = odp_init_global(&instance, NULL, NULL); + if (ret != 0) { + fprintf(stderr, "Error: ODP global init failed.\n"); + exit(EXIT_FAILURE); + } + + ret = odp_init_local(instance, ODP_THREAD_WORKER); + if (ret != 0) { + fprintf(stderr, "Error: ODP local init failed.\n"); + exit(EXIT_FAILURE); + } + + if (test_ip_lookup_table() < 0) + printf("Test failed\n"); + else + printf("All tests passed\n"); + + if (odp_term_local()) { + fprintf(stderr, "Error: ODP local term failed.\n"); + exit(EXIT_FAILURE); + } + + if (odp_term_global(instance)) { + fprintf(stderr, "Error: ODP global term failed.\n"); + exit(EXIT_FAILURE); + } + + return ret; +}
commit fd86c7fd19820e21c182e2c0e043331e6aab6282 Author: Ru Jia jiaru@ict.ac.cn Date: Thu Jun 30 16:15:31 2016 +0800
helper: table: add impl of ip lookup table
This is an implementation of the 16,8,8 IP lookup (longest prefix matching) algorithm. The key of the table is a 32-bit IPv4 address.
Signed-off-by: Ru Jia jiaru@ict.ac.cn Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
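For orientation, the 16,8,8 scheme splits the 32-bit IPv4 address into a 16-bit index into the L1 table plus two 8-bit indices into optional L2/L3 subtrees. A minimal sketch of that split (helper names are illustrative only, not part of the patch):

/* Illustrative only: how a 32-bit IPv4 address is split for the
 * 16,8,8 longest-prefix-match lookup implemented below. */
#include <stdint.h>

static inline uint32_t l1_index(uint32_t ip) { return ip >> 16; }         /* top 16 bits -> L1 entry */
static inline uint32_t l2_index(uint32_t ip) { return (ip >> 8) & 0xff; } /* next 8 bits -> L2 subtree */
static inline uint32_t l3_index(uint32_t ip) { return ip & 0xff; }        /* last 8 bits -> L3 subtree */

A lookup reads the L1 entry first and only descends into an L2/L3 subtree when the entry is marked as having a child, so the common case costs a single table access.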
diff --git a/helper/Makefile.am b/helper/Makefile.am index f7dd324..9d0036d 100644 --- a/helper/Makefile.am +++ b/helper/Makefile.am @@ -30,7 +30,8 @@ noinst_HEADERS = \ $(srcdir)/odph_hashtable.h \ $(srcdir)/odph_lineartable.h \ $(srcdir)/odph_cuckootable.h \ - $(srcdir)/odph_list_internal.h + $(srcdir)/odph_list_internal.h \ + $(srcdir)/odph_iplookuptable.h
__LIB__libodphelper_linux_la_SOURCES = \ eth.c \ @@ -39,6 +40,7 @@ __LIB__libodphelper_linux_la_SOURCES = \ linux.c \ hashtable.c \ lineartable.c \ - cuckootable.c + cuckootable.c \ + iplookuptable.c
lib_LTLIBRARIES = $(LIB)/libodphelper-linux.la diff --git a/helper/iplookuptable.c b/helper/iplookuptable.c new file mode 100644 index 0000000..5f80743 --- /dev/null +++ b/helper/iplookuptable.c @@ -0,0 +1,937 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +#include <string.h> +#include <stdint.h> +#include <errno.h> +#include <stdio.h> + +#include "odph_iplookuptable.h" +#include "odph_list_internal.h" +#include "odph_debug.h" +#include <odp_api.h> + +/** @magic word, write to the first byte of the memory block + * to indicate this block is used by a ip lookup table + */ +#define ODPH_IP_LOOKUP_TABLE_MAGIC_WORD 0xCFCFFCFC + +/* The length(bit) of the IPv4 address */ +#define IP_LENGTH 32 + +/* The number of L1 entries */ +#define ENTRY_NUM_L1 (1 << 16) +/* The size of one L2\L3 subtree */ +#define ENTRY_NUM_SUBTREE (1 << 8) + +#define WHICH_CHILD(ip, cidr) ((ip >> (IP_LENGTH - cidr)) & 0x00000001) + +/** @internal entry struct + * Structure store an entry of the ip prefix table. + * Because of the leaf pushing, each entry of the table must have + * either a child entry, or a nexthop info. + * If child == 0 and index != ODP_BUFFER_INVALID, this entry has + * a nexthop info, index indicates the buffer that stores the + * nexthop value, and ptr points to the address of the buffer. + * If child == 1, this entry has a subtree, index indicates + * the buffer that stores the subtree, and ptr points to the + * address of the buffer. + */ +typedef struct { + union { + uint8_t u8; + struct { +#if ODP_BYTE_ORDER == ODP_BIG_ENDIAN + uint8_t child : 1; + uint8_t cidr : 7; +#else + uint8_t cidr : 7; + uint8_t child : 1; +#endif + }; + }; + union { + odp_buffer_t nexthop; + void *ptr; + }; +} prefix_entry_t; + +#define ENTRY_SIZE (sizeof(prefix_entry_t) + sizeof(odp_buffer_t)) +#define ENTRY_BUFF_ARR(x) ((odp_buffer_t *)((char *)x \ + + sizeof(prefix_entry_t) * ENTRY_NUM_SUBTREE)) + +/** @internal trie node struct + * In this IP lookup algorithm, we use a + * binary tire to detect the overlap prefix. + */ +typedef struct trie_node { + /* tree structure */ + struct trie_node *parent; + struct trie_node *left; + struct trie_node *right; + /* IP prefix length */ + uint8_t cidr; + /* Nexthop buffer index */ + odp_buffer_t nexthop; + /* Buffer that stores this node */ + odp_buffer_t buffer; +} trie_node_t; + +/** Number of L2\L3 entries(subtrees) per cache cube. */ +#define CACHE_NUM_SUBTREE (1 << 13) +/** Number of trie nodes per cache cube. */ +#define CACHE_NUM_TRIE (1 << 20) + +/** @typedef cache_type_t + * Cache node type + */ +typedef enum { + CACHE_TYPE_SUBTREE = 0, + CACHE_TYPE_TRIE +} cache_type_t; + +/** A IP lookup table structure. */ +typedef struct { + /**< for check */ + uint32_t magicword; + /** Name of the hash. */ + char name[ODPH_TABLE_NAME_LEN]; + /** Total L1 entries. */ + prefix_entry_t *l1e; + /** Root node of the binary trie */ + trie_node_t *trie; + /** Length of value. */ + uint32_t nexthop_len; + /** Queues of free slots (caches) + * There are two queues: + * - free_slots[CACHE_TYPE_SUBTREE] is used for L2 and + * L3 entries (subtrees). Each entry stores an 8-bit + * subtree. + * - free_slots[CACHE_TYPE_TRIE] is used for the binary + * trie. Each entry contains a trie node. + */ + odp_queue_t free_slots[2]; + /** The number of pool used by each queue. 
*/ + uint32_t cache_count[2]; +} odph_iplookup_table_impl ODP_ALIGNED_CACHE; + +/*********************************************************** + ***************** Cache management ******************** + ***********************************************************/ + +/** Destroy all caches */ +static void +cache_destroy(odph_iplookup_table_impl *impl) +{ + odp_queue_t queue; + odp_event_t ev; + uint32_t i = 0, count = 0; + char pool_name[ODPH_TABLE_NAME_LEN + 8]; + + /* free all buffers in the queue */ + for (; i < 2; i++) { + queue = impl->free_slots[i]; + if (queue == ODP_QUEUE_INVALID) + continue; + + while ((ev = odp_queue_deq(queue)) + != ODP_EVENT_INVALID) { + odp_buffer_free(odp_buffer_from_event(ev)); + } + odp_queue_destroy(queue); + } + + /* destroy all cache pools */ + for (i = 0; i < 2; i++) { + for (count = 0; count < impl->cache_count[i]; count++) { + sprintf( + pool_name, "%s_%d_%d", + impl->name, i, count); + odp_pool_destroy(odp_pool_lookup(pool_name)); + } + } +} + +/** According to the type of cahce, set the value of + * a buffer to the initial value. + */ +static void +cache_init_buffer(odp_buffer_t buffer, cache_type_t type, uint32_t size) +{ + int i = 0; + void *addr = odp_buffer_addr(buffer); + + memset(addr, 0, size); + if (type == CACHE_TYPE_SUBTREE) { + prefix_entry_t *entry = (prefix_entry_t *)addr; + + for (i = 0; i < ENTRY_NUM_SUBTREE; i++, entry++) + entry->nexthop = ODP_BUFFER_INVALID; + } else if (type == CACHE_TYPE_TRIE) { + trie_node_t *node = (trie_node_t *)addr; + + node->buffer = buffer; + node->nexthop = ODP_BUFFER_INVALID; + } +} + +/** Create a new buffer pool, and insert its buffer into the queue. */ +static int +cache_alloc_new_pool( + odph_iplookup_table_impl *tbl, cache_type_t type) +{ + odp_pool_t pool; + odp_pool_param_t param; + odp_queue_t queue = tbl->free_slots[type]; + + odp_buffer_t buffer; + char pool_name[ODPH_TABLE_NAME_LEN + 8]; + uint32_t size = 0, num = 0; + + /* Create new pool (new free buffers). */ + param.type = ODP_POOL_BUFFER; + param.buf.align = ODP_CACHE_LINE_SIZE; + if (type == CACHE_TYPE_SUBTREE) { + num = CACHE_NUM_SUBTREE; + size = ENTRY_SIZE * ENTRY_NUM_SUBTREE; + } else if (type == CACHE_TYPE_TRIE) { + num = CACHE_NUM_TRIE; + size = sizeof(trie_node_t); + } else { + ODPH_DBG("wrong cache_type_t.\n"); + return -1; + } + param.buf.size = size; + param.buf.num = num; + + sprintf( + pool_name, "%s_%d_%d", + tbl->name, type, tbl->cache_count[type]); + pool = odp_pool_create(pool_name, ¶m); + if (pool == ODP_POOL_INVALID) { + ODPH_DBG("failed to create a new pool.\n"); + return -1; + } + + /* insert new free buffers into queue */ + while ((buffer = odp_buffer_alloc(pool)) + != ODP_BUFFER_INVALID) { + cache_init_buffer(buffer, type, size); + odp_queue_enq(queue, odp_buffer_to_event(buffer)); + } + + tbl->cache_count[type]++; + return 0; +} + +/** Get a new buffer from a cache list. If there is no + * available buffer, allocate a new pool. 
+ */ +static odp_buffer_t +cache_get_buffer(odph_iplookup_table_impl *tbl, cache_type_t type) +{ + odp_buffer_t buffer = ODP_BUFFER_INVALID; + odp_queue_t queue = tbl->free_slots[type]; + + /* get free buffer from queue */ + buffer = odp_buffer_from_event( + odp_queue_deq(queue)); + + /* If there is no free buffer available, allocate new pool */ + if (buffer == ODP_BUFFER_INVALID) { + cache_alloc_new_pool(tbl, type); + buffer = odp_buffer_from_event(odp_queue_deq(queue)); + } + + return buffer; +} + +/*********************************************************** + ****************** Binary trie ******************** + ***********************************************************/ + +/* Initialize the root node of the trie */ +static int +trie_init(odph_iplookup_table_impl *tbl) +{ + trie_node_t *root = NULL; + odp_buffer_t buffer = cache_get_buffer(tbl, CACHE_TYPE_TRIE); + + if (buffer != ODP_BUFFER_INVALID) { + root = (trie_node_t *)odp_buffer_addr(buffer); + root->cidr = 0; + tbl->trie = root; + return 0; + } + + return -1; +} + +/* Destroy the whole trie (recursively) */ +static void +trie_destroy(odph_iplookup_table_impl *tbl, trie_node_t *trie) +{ + if (trie->left != NULL) + trie_destroy(tbl, trie->left); + if (trie->right != NULL) + trie_destroy(tbl, trie->right); + + /* destroy this node */ + odp_queue_enq( + tbl->free_slots[CACHE_TYPE_TRIE], + odp_buffer_to_event(trie->buffer)); +} + +/* Insert a new prefix node into the trie + * If the node is already existed, update its nexthop info, + * Return 0 and set nexthop pointer to INVALID. + * If the node is not exitsed, create this target node and + * all nodes along the path from root to the target node. + * Then return 0 and set nexthop pointer points to the + * new buffer. + * Return -1 for error. + */ +static int +trie_insert_node( + odph_iplookup_table_impl *tbl, trie_node_t *root, + uint32_t ip, uint8_t cidr, odp_buffer_t nexthop) +{ + uint8_t level = 0, child; + odp_buffer_t buf; + trie_node_t *node = root, *prev = root; + + /* create/update all nodes along the path + * from root to the new node. */ + for (level = 1; level <= cidr; level++) { + child = WHICH_CHILD(ip, level); + + node = child == 0 ? prev->left : prev->right; + /* If the child node doesn't exit, create it. */ + if (node == NULL) { + buf = cache_get_buffer(tbl, CACHE_TYPE_TRIE); + if (buf == ODP_BUFFER_INVALID) + return -1; + + node = (trie_node_t *)odp_buffer_addr(buf); + node->cidr = level; + node->parent = prev; + + if (child == 0) + prev->left = node; + else + prev->right = node; + } + prev = node; + } + + /* The final one is the target. */ + node->nexthop = nexthop; + return 0; +} + +/* Delete a node */ +static int +trie_delete_node( + odph_iplookup_table_impl *tbl, + trie_node_t *root, uint32_t ip, uint8_t cidr) +{ + if (root == NULL) + return -1; + + /* The default prefix (root node) cannot be deleted. */ + if (cidr == 0) + return -1; + + trie_node_t *node = root, *prev = NULL; + uint8_t level = 1, child = 0; + odp_buffer_t tmp; + + /* Find the target node. */ + for (level = 1; level <= cidr; level++) { + child = WHICH_CHILD(ip, level); + node = (child == 0) ? node->left : node->right; + if (node == NULL) { + ODPH_DBG("Trie node is not existed\n"); + return -1; + } + } + + node->nexthop = ODP_BUFFER_INVALID; + + /* Delete all redundant nodes along the path. 
*/ + for (level = cidr; level > 0; level--) { + if ( + node->left != NULL || node->right != NULL || + node->nexthop != ODP_BUFFER_INVALID) + break; + + child = WHICH_CHILD(ip, level); + prev = node->parent; + + /* free trie node */ + tmp = node->buffer; + cache_init_buffer( + tmp, CACHE_TYPE_TRIE, sizeof(trie_node_t)); + odp_queue_enq( + tbl->free_slots[CACHE_TYPE_TRIE], + odp_buffer_to_event(tmp)); + + if (child == 0) + prev->left = NULL; + else + prev->right = NULL; + node = prev; + } + return 0; +} + +/* Detect the longest overlapping prefix. */ +static int +trie_detect_overlap( + trie_node_t *trie, uint32_t ip, uint8_t cidr, + uint8_t leaf_push, uint8_t *over_cidr, + odp_buffer_t *over_nexthop) +{ + uint8_t child = 0; + uint32_t level, limit = cidr > leaf_push ? leaf_push + 1 : cidr; + trie_node_t *node = trie, *longest = trie; + + for (level = 1; level < limit; level++) { + child = WHICH_CHILD(ip, level); + node = (child == 0) ? node->left : node->right; + if (node->nexthop != ODP_BUFFER_INVALID) + longest = node; + } + + *over_cidr = longest->cidr; + *over_nexthop = longest->nexthop; + return 0; +} + +/*********************************************************** + *************** IP prefix lookup table **************** + ***********************************************************/ + +odph_table_t +odph_iplookup_table_lookup(const char *name) +{ + odph_iplookup_table_impl *tbl = NULL; + + if (name == NULL || strlen(name) >= ODPH_TABLE_NAME_LEN) + return NULL; + + tbl = (odph_iplookup_table_impl *)odp_shm_addr(odp_shm_lookup(name)); + + if ( + tbl != NULL && + tbl->magicword == ODPH_IP_LOOKUP_TABLE_MAGIC_WORD && + strcmp(tbl->name, name) == 0) + return (odph_table_t)tbl; + + return NULL; +} + +odph_table_t +odph_iplookup_table_create( + const char *name, uint32_t ODP_IGNORED_1, + uint32_t ODP_IGNORED_2, uint32_t value_size) +{ + odph_iplookup_table_impl *tbl; + odp_shm_t shm_tbl; + odp_queue_t queue; + odp_queue_param_t qparam; + + unsigned i; + uint32_t impl_size, l1_size; + char queue_name[ODPH_TABLE_NAME_LEN + 2]; + + /* Check for valid parameters */ + if (strlen(name) == 0) { + ODPH_DBG("invalid parameters\n"); + return NULL; + } + + /* Guarantee there's no existing */ + tbl = (odph_iplookup_table_impl *)odph_iplookup_table_lookup(name); + if (tbl != NULL) { + ODPH_DBG("IP prefix table %s already exists\n", name); + return NULL; + } + + /* Calculate the sizes of different parts of IP prefix table */ + impl_size = sizeof(odph_iplookup_table_impl); + l1_size = ENTRY_SIZE * ENTRY_NUM_L1; + + shm_tbl = odp_shm_reserve( + name, impl_size + l1_size, + ODP_CACHE_LINE_SIZE, ODP_SHM_SW_ONLY); + + if (shm_tbl == ODP_SHM_INVALID) { + ODPH_DBG( + "shm allocation failed for odph_iplookup_table_impl %s\n", + name); + return NULL; + } + + tbl = (odph_iplookup_table_impl *)odp_shm_addr(shm_tbl); + memset(tbl, 0, impl_size + l1_size); + + /* header of this mem block is the table impl struct, + * then the l1 entries array. + */ + tbl->l1e = (prefix_entry_t *)((char *)tbl + impl_size); + for (i = 0; i < ENTRY_NUM_L1; i++) + tbl->l1e[i].nexthop = ODP_BUFFER_INVALID; + + /* Setup table context. 
*/ + snprintf(tbl->name, sizeof(tbl->name), "%s", name); + tbl->magicword = ODPH_IP_LOOKUP_TABLE_MAGIC_WORD; + tbl->nexthop_len = value_size; + + /* Initialize cache */ + for (i = 0; i < 2; i++) { + tbl->cache_count[i] = 0; + + odp_queue_param_init(&qparam); + qparam.type = ODP_QUEUE_TYPE_PLAIN; + sprintf(queue_name, "%s_%d", name, i); + queue = odp_queue_create(queue_name, &qparam); + if (queue == ODP_QUEUE_INVALID) { + ODPH_DBG("failed to create queue"); + cache_destroy(tbl); + return NULL; + } + tbl->free_slots[i] = queue; + cache_alloc_new_pool(tbl, i); + } + + /* Initialize tire */ + if (trie_init(tbl) < 0) { + odp_shm_free(shm_tbl); + return NULL; + } + + return (odph_table_t)tbl; +} + +int +odph_iplookup_table_destroy(odph_table_t tbl) +{ + int i, j; + odph_iplookup_table_impl *impl = NULL; + prefix_entry_t *subtree = NULL; + odp_buffer_t *buff1 = NULL, *buff2 = NULL; + + if (tbl == NULL) + return -1; + + impl = (odph_iplookup_table_impl *)tbl; + + /* check magic word */ + if (impl->magicword != ODPH_IP_LOOKUP_TABLE_MAGIC_WORD) { + ODPH_DBG("wrong magicword for IP prefix table\n"); + return -1; + } + + /* destroy trie */ + trie_destroy(impl, impl->trie); + + /* free all L2 and L3 entries */ + buff1 = ENTRY_BUFF_ARR(impl->l1e); + for (i = 0; i < ENTRY_NUM_L1; i++) { + if ((impl->l1e[i]).child == 0) + continue; + + subtree = (prefix_entry_t *)impl->l1e[i].ptr; + buff2 = ENTRY_BUFF_ARR(subtree); + /* destroy all l3 subtrees of this l2 subtree */ + for (j = 0; j < ENTRY_NUM_SUBTREE; j++) { + if (subtree[j].child == 0) + continue; + odp_queue_enq( + impl->free_slots[CACHE_TYPE_TRIE], + odp_buffer_to_event(buff2[j])); + } + /* destroy this l2 subtree */ + odp_queue_enq( + impl->free_slots[CACHE_TYPE_TRIE], + odp_buffer_to_event(buff1[i])); + } + + /* destroy all cache */ + cache_destroy(impl); + + /* free impl */ + odp_shm_free(odp_shm_lookup(impl->name)); + return 0; +} + +/* Insert the prefix into level x + * Return: + * -1 error + * 0 the table is unmodified + * 1 the table is modified + */ +static int +prefix_insert_into_lx( + odph_iplookup_table_impl *tbl, prefix_entry_t *entry, + uint8_t cidr, odp_buffer_t nexthop, uint8_t level) +{ + uint8_t ret = 0; + uint32_t i = 0, limit = (1 << (level - cidr)); + prefix_entry_t *e = entry, *ne = NULL; + + for (i = 0; i < limit; i++, e++) { + if (e->child == 1) { + if (e->cidr > cidr) + continue; + + e->cidr = cidr; + /* push to next level */ + ne = (prefix_entry_t *)e->ptr; + ret = prefix_insert_into_lx( + tbl, ne, cidr, nexthop, cidr + 8); + } else { + if (e->cidr > cidr) + continue; + + e->child = 0; + e->cidr = cidr; + e->nexthop = nexthop; + ret = 1; + } + } + return ret; +} + +static int +prefix_insert_iter( + odph_iplookup_table_impl *tbl, prefix_entry_t *entry, + odp_buffer_t *buff, uint32_t ip, uint8_t cidr, + odp_buffer_t nexthop, uint8_t level, uint8_t depth) +{ + uint8_t state = 0; + prefix_entry_t *ne = NULL; + odp_buffer_t *nbuff = NULL; + + /* If child subtree is existed, get it. */ + if (entry->child) { + ne = (prefix_entry_t *)entry->ptr; + nbuff = ENTRY_BUFF_ARR(ne); + } else { + /* If the child is not existed, create a new subtree. 
*/ + odp_buffer_t buf, push = entry->nexthop; + + buf = cache_get_buffer(tbl, CACHE_TYPE_SUBTREE); + if (buf == ODP_BUFFER_INVALID) { + ODPH_DBG("failed to get subtree buffer from cache.\n"); + return -1; + } + ne = (prefix_entry_t *)odp_buffer_addr(buf); + nbuff = ENTRY_BUFF_ARR(ne); + + entry->child = 1; + entry->ptr = ne; + *buff = buf; + + /* If this entry contains a nexthop and a small cidr, + * push it to the next level. + */ + if (entry->cidr > 0) { + state = prefix_insert_into_lx( + tbl, ne, entry->cidr, + push, entry->cidr + 8); + } + } + + ne += (ip >> 24); + nbuff += (ip >> 24); + if (cidr <= 8) { + state = prefix_insert_into_lx( + tbl, ne, cidr + depth * 8, nexthop, level); + } else { + state = prefix_insert_iter( + tbl, ne, nbuff, ip << 8, cidr - 8, + nexthop, level + 8, depth + 1); + } + + return state; +} + +int +odph_iplookup_table_put_value(odph_table_t tbl, void *key, void *value) +{ + if ((tbl == NULL) || (key == NULL) || (value == NULL)) + return -1; + + odph_iplookup_table_impl *impl = (odph_iplookup_table_impl *)tbl; + odph_iplookup_prefix_t *prefix = (odph_iplookup_prefix_t *)key; + prefix_entry_t *l1e = NULL; + odp_buffer_t nexthop = *((odp_buffer_t *)value); + int ret = 0; + + if (prefix->cidr == 0) + return -1; + prefix->ip = prefix->ip & (0xffffffff << (IP_LENGTH - prefix->cidr)); + + /* insert into trie */ + ret = trie_insert_node( + impl, impl->trie, + prefix->ip, prefix->cidr, nexthop); + + if (ret < 0) { + ODPH_DBG("failed to insert into trie\n"); + return -1; + } + + /* get L1 entry */ + l1e = &impl->l1e[prefix->ip >> 16]; + odp_buffer_t *buff = ENTRY_BUFF_ARR(impl->l1e) + (prefix->ip >> 16); + + if (prefix->cidr <= 16) { + ret = prefix_insert_into_lx( + impl, l1e, prefix->cidr, nexthop, 16); + } else { + ret = prefix_insert_iter( + impl, l1e, buff, + ((prefix->ip) << 16), prefix->cidr - 16, + nexthop, 24, 2); + } + + return ret; +} + +int +odph_iplookup_table_get_value( + odph_table_t tbl, void *key, void *buffer, uint32_t buffer_size) +{ + if ((tbl == NULL) || (key == NULL) || (buffer == NULL)) + return -EINVAL; + + odph_iplookup_table_impl *impl = (odph_iplookup_table_impl *)tbl; + uint32_t ip = *((uint32_t *)key); + prefix_entry_t *entry = &impl->l1e[ip >> 16]; + odp_buffer_t *buff = (odp_buffer_t *)buffer; + + if (entry == NULL) { + ODPH_DBG("failed to get L1 entry.\n"); + return -1; + } + + ip <<= 16; + while (entry->child) { + entry = (prefix_entry_t *)entry->ptr; + entry += ip >> 24; + ip <<= 8; + } + + /* copy data */ + if (entry->nexthop == ODP_BUFFER_INVALID) { + /* ONLY match the default prefix */ + printf("only match the default prefix\n"); + *buff = ODP_BUFFER_INVALID; + } else { + *buff = entry->nexthop; + } + + return 0; +} + +static int +prefix_delete_lx( + odph_iplookup_table_impl *tbl, prefix_entry_t *l1e, + odp_buffer_t *buff, uint8_t cidr, uint8_t over_cidr, + odp_buffer_t over_nexthop, uint8_t level) +{ + uint8_t ret, flag = 1; + prefix_entry_t *e = l1e; + odp_buffer_t *b = buff; + uint32_t i = 0, limit = 1 << (level - cidr); + + for (i = 0; i < limit; i++, e++, b++) { + if (e->child == 1) { + if (e->cidr > cidr) { + flag = 0; + continue; + } + + prefix_entry_t *ne = (prefix_entry_t *)e->ptr; + odp_buffer_t *nbuff = ENTRY_BUFF_ARR(ne); + + e->cidr = over_cidr; + ret = prefix_delete_lx( + tbl, ne, nbuff, cidr, over_cidr, + over_nexthop, cidr + 8); + + /* If ret == 1, the next 2^8 entries equal to + * (over_cidr, over_nexthop). In this case, we + * should not push the (over_cidr, over_nexthop) + * to the next level. 
In fact, we should recycle + * the next 2^8 entries. + */ + if (ret) { + /* destroy subtree */ + cache_init_buffer( + *b, CACHE_TYPE_SUBTREE, + ENTRY_SIZE * ENTRY_NUM_SUBTREE); + odp_queue_enq( + tbl->free_slots[CACHE_TYPE_SUBTREE], + odp_buffer_to_event(*b)); + e->child = 0; + e->nexthop = over_nexthop; + } else { + flag = 0; + } + } else { + if (e->cidr > cidr) { + flag = 0; + continue; + } else { + e->cidr = over_cidr; + e->nexthop = over_nexthop; + } + } + } + return flag; +} + +/* Check if the entry can be recycled. + * An entry can be recycled duo to two reasons: + * - all children of the entry are the same, + * - all children of the entry have a cidr smaller than the level + * bottom bound. + */ +static uint8_t +can_recycle(prefix_entry_t *e, uint32_t level) +{ + uint8_t recycle = 1; + int i = 1; + prefix_entry_t *ne = (prefix_entry_t *)e->ptr; + + if (ne->child) + return 0; + + uint8_t cidr = ne->cidr; + odp_buffer_t index = ne->nexthop; + + if (cidr > level) + return 0; + + ne++; + for (; i < 256; i++, ne++) { + if ( + ne->child != 0 || ne->cidr != cidr || + ne->nexthop != index) { + recycle = 0; + break; + } + } + return recycle; +} + +static uint8_t +prefix_delete_iter( + odph_iplookup_table_impl *tbl, prefix_entry_t *e, + odp_buffer_t *buff, uint32_t ip, uint8_t cidr, + uint8_t level, uint8_t depth) +{ + uint8_t ret = 0, over_cidr; + odp_buffer_t over_nexthop; + + trie_detect_overlap( + tbl->trie, ip, cidr + 8 * depth, level, + &over_cidr, &over_nexthop); + if (cidr > 8) { + prefix_entry_t *ne = + (prefix_entry_t *)e->ptr; + odp_buffer_t *nbuff = ENTRY_BUFF_ARR(ne); + + ne += ((uint32_t)(ip << level) >> 24); + nbuff += ((uint32_t)(ip << level) >> 24); + ret = prefix_delete_iter( + tbl, ne, nbuff, ip, cidr - 8, + level + 8, depth + 1); + + if (ret && can_recycle(e, level)) { + /* destroy subtree */ + cache_init_buffer( + *buff, CACHE_TYPE_SUBTREE, + ENTRY_SIZE * ENTRY_NUM_SUBTREE); + odp_queue_enq( + tbl->free_slots[CACHE_TYPE_SUBTREE], + odp_buffer_to_event(*buff)); + e->child = 0; + e->nexthop = over_nexthop; + e->cidr = over_cidr; + return 1; + } + return 0; + } + + ret = prefix_delete_lx( + tbl, e, buff, cidr + 8 * depth, + over_cidr, over_nexthop, level); + return ret; +} + +int +odph_iplookup_table_remove_value(odph_table_t tbl, void *key) +{ + if ((tbl == NULL) || (key == NULL)) + return -EINVAL; + + odph_iplookup_table_impl *impl = (odph_iplookup_table_impl *)tbl; + odph_iplookup_prefix_t *prefix = (odph_iplookup_prefix_t *)key; + uint32_t ip = prefix->ip; + uint8_t cidr = prefix->cidr; + + if (prefix->cidr < 0) + return -EINVAL; + + prefix_entry_t *entry = &impl->l1e[ip >> 16]; + odp_buffer_t *buff = ENTRY_BUFF_ARR(impl->l1e) + (ip >> 16); + uint8_t over_cidr, ret; + odp_buffer_t over_nexthop; + + trie_detect_overlap( + impl->trie, ip, cidr, 16, &over_cidr, &over_nexthop); + + if (cidr <= 16) { + prefix_delete_lx( + impl, entry, buff, cidr, over_cidr, over_nexthop, 16); + } else { + prefix_entry_t *ne = (prefix_entry_t *)entry->ptr; + odp_buffer_t *nbuff = ENTRY_BUFF_ARR(ne); + + ne += ((uint32_t)(ip << 16) >> 24); + nbuff += ((uint32_t)(ip << 16) >> 24); + ret = prefix_delete_iter(impl, ne, nbuff, ip, cidr - 16, 24, 2); + + if (ret && can_recycle(entry, 16)) { + /* destroy subtree */ + cache_init_buffer( + *buff, CACHE_TYPE_SUBTREE, + sizeof(prefix_entry_t) * ENTRY_NUM_SUBTREE); + odp_queue_enq( + impl->free_slots[CACHE_TYPE_SUBTREE], + odp_buffer_to_event(*buff)); + entry->child = 0; + entry->cidr = over_cidr; + entry->nexthop = over_nexthop; + } + } + + return 
trie_delete_node(impl, impl->trie, ip, cidr); +} + +odph_table_ops_t odph_iplookup_table_ops = { + odph_iplookup_table_create, + odph_iplookup_table_lookup, + odph_iplookup_table_destroy, + odph_iplookup_table_put_value, + odph_iplookup_table_get_value, + odph_iplookup_table_remove_value +}; diff --git a/helper/odph_iplookuptable.h b/helper/odph_iplookuptable.h new file mode 100644 index 0000000..0ae6b37 --- /dev/null +++ b/helper/odph_iplookuptable.h @@ -0,0 +1,58 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/** + * @file + * + * ODP IP Lookup Table + * + * This is an implementation of the IP lookup table. The key of + * this table is IPv4 address (32 bits), and the value can be + * defined by user. This table uses the 16,8,8 ip lookup (longest + * prefix matching) algorithm. + */ + +#ifndef ODPH_IPLOOKUP_TABLE_H_ +#define ODPH_IPLOOKUP_TABLE_H_ + +#include <odp/helper/table.h> + +#ifdef __cplusplus +extern "C" { +#endif + +typedef struct { + uint32_t ip; + uint8_t cidr; +} odph_iplookup_prefix_t; + +odph_table_t odph_iplookup_table_create( + const char *name, + uint32_t ODP_IGNORED_1, + uint32_t ODP_IGNORED_2, + uint32_t value_size); + +odph_table_t odph_iplookup_table_lookup(const char *name); + +int odph_iplookup_table_destroy(odph_table_t table); + +int odph_iplookup_table_put_value( + odph_table_t table, void *key, void *value); + +int odph_iplookup_table_get_value( + odph_table_t table, void *key, + void *buffer, uint32_t buffer_size); + +int odph_iplookup_table_remove_value( + odph_table_t table, void *key); + +extern odph_table_ops_t odph_iplookup_table_ops; + +#ifdef __cplusplus +} +#endif + +#endif /* ODPH_IPLOOKUP_TABLE_H_ */
commit 4a58145d0ca4e62ff41b052ed800acee7d0a97e1 Author: Ru Jia jiaru@ict.ac.cn Date: Mon Jun 13 14:42:14 2016 +0800
helper: test: add test of cuckoo hash table
This test program consists of basic validation tests and performance tests.
Signed-off-by: Ru Jia jiaru@ict.ac.cn Reviewed-and-tested-by: Bill Fischofer bill.fischofer@linaro.org Signed-off-by: Maxim Uvarov maxim.uvarov@linaro.org
diff --git a/helper/test/.gitignore b/helper/test/.gitignore index 5ce3c3b..482fdb5 100644 --- a/helper/test/.gitignore +++ b/helper/test/.gitignore @@ -1,6 +1,7 @@ *.trs *.log chksum +cuckootable odpthreads parse process diff --git a/helper/test/Makefile.am b/helper/test/Makefile.am index 545db73..f7aa7e7 100644 --- a/helper/test/Makefile.am +++ b/helper/test/Makefile.am @@ -6,10 +6,11 @@ AM_LDFLAGS += -static TESTS_ENVIRONMENT += TEST_DIR=${builddir}
EXECUTABLES = chksum$(EXEEXT) \ + cuckootable$(EXEEXT) \ + table$(EXEEXT) \ thread$(EXEEXT) \ parse$(EXEEXT)\ - process$(EXEEXT)\ - table$(EXEEXT) + process$(EXEEXT)
COMPILE_ONLY = odpthreads
@@ -27,6 +28,7 @@ test_PROGRAMS = $(EXECUTABLES) $(COMPILE_ONLY) EXTRA_DIST = odpthreads_as_processes odpthreads_as_pthreads
dist_chksum_SOURCES = chksum.c +dist_cuckootable_SOURCES = cuckootable.c dist_odpthreads_SOURCES = odpthreads.c odpthreads_LDADD = $(LIB)/libodphelper-linux.la $(LIB)/libodp-linux.la dist_thread_SOURCES = thread.c diff --git a/helper/test/cuckootable.c b/helper/test/cuckootable.c new file mode 100644 index 0000000..5b4333b --- /dev/null +++ b/helper/test/cuckootable.c @@ -0,0 +1,573 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2016 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <stdio.h> +#include <stdint.h> +#include <string.h> +#include <stdlib.h> +#include <stdarg.h> +#include <errno.h> +#include <sys/queue.h> +#include <sys/time.h> +#include <time.h> + +#include <odp_api.h> +#include <test_debug.h> +#include <../odph_cuckootable.h> + +/******************************************************************************* + * Hash function performance test configuration section. + * + * The five arrays below control what tests are performed. Every combination + * from the array entries is tested. + */ +/******************************************************************************/ + +/* 5-tuple key type */ +struct flow_key { + uint32_t ip_src; + uint32_t ip_dst; + uint16_t port_src; + uint16_t port_dst; + uint8_t proto; +} __packed; + +/* + * Print out result of unit test hash operation. 
+ */ +static void print_key_info( + const char *msg, const struct flow_key *key) +{ + const uint8_t *p = (const uint8_t *)key; + unsigned i; + + printf("%s key:0x", msg); + for (i = 0; i < sizeof(struct flow_key); i++) + printf("%02X", p[i]); + printf("\n"); +} + +static double get_time_diff(struct timeval *start, struct timeval *end) +{ + int sec = end->tv_sec - start->tv_sec; + int usec = end->tv_usec - start->tv_usec; + + if (usec < 0) { + sec--; + usec += 1000000; + } + double diff = sec + (double)usec / 1000000; + + return diff; +} + +/** Create IPv4 address */ +#define IPv4(a, b, c, d) ((uint32_t)(((a) & 0xff) << 24) | \ + (((b) & 0xff) << 16) | \ + (((c) & 0xff) << 8) | \ + ((d) & 0xff)) + +/* Keys used by unit test functions */ +static struct flow_key keys[5] = { { + .ip_src = IPv4(0x03, 0x02, 0x01, 0x00), + .ip_dst = IPv4(0x07, 0x06, 0x05, 0x04), + .port_src = 0x0908, + .port_dst = 0x0b0a, + .proto = 0x0c, +}, { + .ip_src = IPv4(0x13, 0x12, 0x11, 0x10), + .ip_dst = IPv4(0x17, 0x16, 0x15, 0x14), + .port_src = 0x1918, + .port_dst = 0x1b1a, + .proto = 0x1c, +}, { + .ip_src = IPv4(0x23, 0x22, 0x21, 0x20), + .ip_dst = IPv4(0x27, 0x26, 0x25, 0x24), + .port_src = 0x2928, + .port_dst = 0x2b2a, + .proto = 0x2c, +}, { + .ip_src = IPv4(0x33, 0x32, 0x31, 0x30), + .ip_dst = IPv4(0x37, 0x36, 0x35, 0x34), + .port_src = 0x3938, + .port_dst = 0x3b3a, + .proto = 0x3c, +}, { + .ip_src = IPv4(0x43, 0x42, 0x41, 0x40), + .ip_dst = IPv4(0x47, 0x46, 0x45, 0x44), + .port_src = 0x4948, + .port_dst = 0x4b4a, + .proto = 0x4c, +} }; + +/* + * Basic sequence of operations for a single key: + * - put + * - get (hit) + * - remove + * - get (miss) + */ +static int test_put_remove(void) +{ + odph_table_t table; + odph_table_ops_t *ops; + + ops = &odph_cuckoo_table_ops; + + /* test with standard put/get/remove functions */ + int ret; + + table = ops->f_create("put_remove", 10, sizeof(struct flow_key), 0); + if (table == NULL) { + printf("cuckoo hash table creation failed\n"); + return -1; + } + + ret = odph_cuckoo_table_put_value(table, &keys[0], NULL); + print_key_info("Add", &keys[0]); + if (ret < 0) { + printf("failed to add key\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + ret = odph_cuckoo_table_get_value(table, &keys[0], NULL, 0); + print_key_info("Lkp", &keys[0]); + if (ret < 0) { + printf("failed to find key\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + ret = odph_cuckoo_table_remove_value(table, &keys[0]); + print_key_info("Del", &keys[0]); + if (ret < 0) { + printf("failed to delete key\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + ret = odph_cuckoo_table_get_value(table, &keys[0], NULL, 0); + print_key_info("Lkp", &keys[0]); + if (ret >= 0) { + printf("error: found key after deleting!\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + odph_cuckoo_table_destroy(table); + return 0; +} + +/* + * Sequence of operations for a single key: + * key type : struct flow_key + * value type: uint8_t + * - remove: miss + * - put + * - get: hit + * - put: update + * - get: hit (updated data) + * - remove: hit + * - remove: miss + */ +static int test_put_update_remove(void) +{ + odph_table_t table; + int ret; + uint8_t val1 = 1, val2 = 2, val = 0; + + table = odph_cuckoo_table_create( + "put_update_remove", + 10, sizeof(struct flow_key), sizeof(uint8_t)); + if (table == NULL) { + printf("failed to create table\n"); + return -1; + } + + ret = odph_cuckoo_table_remove_value(table, &keys[0]); + print_key_info("Del", &keys[0]); + if (ret >= 0) { + printf("error: 
found non-existent key\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + ret = odph_cuckoo_table_put_value(table, &keys[0], &val1); + print_key_info("Add", &keys[0]); + if (ret < 0) { + printf("failed to add key\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + ret = odph_cuckoo_table_get_value( + table, &keys[0], &val, sizeof(uint8_t)); + print_key_info("Lkp", &keys[0]); + if (ret < 0) { + printf("failed to find key\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + ret = odph_cuckoo_table_put_value(table, &keys[0], &val2); + if (ret < 0) { + printf("failed to re-add key\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + ret = odph_cuckoo_table_get_value( + table, &keys[0], &val, sizeof(uint8_t)); + print_key_info("Lkp", &keys[0]); + if (ret < 0) { + printf("failed to find key\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + ret = odph_cuckoo_table_remove_value(table, &keys[0]); + print_key_info("Del", &keys[0]); + if (ret < 0) { + printf("failed to delete key\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + ret = odph_cuckoo_table_remove_value(table, &keys[0]); + print_key_info("Del", &keys[0]); + if (ret >= 0) { + printf("error: deleted already deleted key\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + odph_cuckoo_table_destroy(table); + return 0; +} + +/* + * Sequence of operations for find existing hash table + * + * - create table + * - find existing table: hit + * - find non-existing table: miss + * + */ +static int test_table_lookup(void) +{ + odph_table_t table, result; + + /* Create cuckoo hash table. */ + table = odph_cuckoo_table_create("table_lookup", 10, 4, 0); + if (table == NULL) { + printf("failed to create table\n"); + return -1; + } + + /* Try to find existing hash table */ + result = odph_cuckoo_table_lookup("table_lookup"); + if (result != table) { + printf("error: could not find existing table\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + /* Try to find non-existing hash table */ + result = odph_cuckoo_table_lookup("non_existing"); + if (result != NULL) { + printf("error: found table that shouldn't exist.\n"); + odph_cuckoo_table_destroy(table); + return -1; + } + + /* Cleanup. 
*/ + odph_cuckoo_table_destroy(table); + return 0; +} + +/* + * Sequence of operations for 5 keys + * - put keys + * - get keys: hit + * - remove keys : hit + * - get keys: miss + */ +static int test_five_keys(void) +{ + odph_table_t table; + unsigned i; + int ret; + + table = odph_cuckoo_table_create( + "five_keys", 10, sizeof(struct flow_key), 0); + if (table == NULL) { + printf("failed to create table\n"); + return -1; + } + + /* put */ + for (i = 0; i < 5; i++) { + ret = odph_cuckoo_table_put_value(table, &keys[i], NULL); + print_key_info("Add", &keys[i]); + if (ret < 0) { + printf("failed to add key %d\n", i); + odph_cuckoo_table_destroy(table); + return -1; + } + } + + /* get */ + for (i = 0; i < 5; i++) { + ret = odph_cuckoo_table_get_value(table, &keys[i], NULL, 0); + print_key_info("Lkp", &keys[i]); + if (ret < 0) { + printf("failed to find key %d\n", i); + odph_cuckoo_table_destroy(table); + return -1; + } + } + + /* remove */ + for (i = 0; i < 5; i++) { + ret = odph_cuckoo_table_remove_value(table, &keys[i]); + print_key_info("Del", &keys[i]); + if (ret < 0) { + printf("failed to delete key %d\n", i); + odph_cuckoo_table_destroy(table); + return -1; + } + } + + /* get */ + for (i = 0; i < 5; i++) { + ret = odph_cuckoo_table_get_value(table, &keys[i], NULL, 0); + print_key_info("Lkp", &keys[i]); + if (ret >= 0) { + printf("found non-existing key %d\n", i); + odph_cuckoo_table_destroy(table); + return -1; + } + } + + odph_cuckoo_table_destroy(table); + return 0; +} + +#define BUCKET_ENTRIES 4 +#define HASH_ENTRIES_MAX 1048576 +/* + * Do tests for cuchoo tabke creation with bad parameters. + */ +static int test_creation_with_bad_parameters(void) +{ + odph_table_t table; + + table = odph_cuckoo_table_create( + "bad_param_0", HASH_ENTRIES_MAX + 1, 4, 0); + if (table != NULL) { + odph_cuckoo_table_destroy(table); + printf("Impossible creating table successfully with entries in parameter exceeded\n"); + return -1; + } + + table = odph_cuckoo_table_create( + "bad_param_1", BUCKET_ENTRIES - 1, 4, 0); + if (table != NULL) { + odph_cuckoo_table_destroy(table); + printf("Impossible creating hash successfully if entries less than bucket_entries in parameter\n"); + return -1; + } + + table = odph_cuckoo_table_create("bad_param_2", 10, 0, 0); + if (table != NULL) { + odph_cuckoo_table_destroy(table); + printf("Impossible creating hash successfully if key_len in parameter is zero\n"); + return -1; + } + + printf("# Test successful. No more errors expected\n"); + + return 0; +} + +#define PERFORMANCE_CAPACITY 1000000 + +/* + * Test the performance of cuckoo hash table. + * table capacity : 1,000,000 + * key size : 4 bytes + * value size : 0 + * Insert at most number random keys into the table. If one + * insertion is failed, the rest insertions will be cancelled. + * The table utilization of the report will show actual number + * of items inserted. + * Then search all inserted items. + */ +static int test_performance(int number) +{ + odph_table_t table; + + /* generate random keys */ + uint8_t *key_space = NULL; + const void **key_ptr = NULL; + unsigned key_len = 4, j; + unsigned elem_num = (number > PERFORMANCE_CAPACITY) ? 
+ PERFORMANCE_CAPACITY : number; + unsigned key_num = key_len * elem_num; + + key_space = (uint8_t *)malloc(key_num); + key_ptr = (const void **)malloc(sizeof(void *) * elem_num); + if (key_space == NULL) + return -ENOENT; + + for (j = 0; j < key_num; j++) { + key_space[j] = rand() % 255; + if (j % key_len == 0) + key_ptr[j / key_len] = &key_space[j]; + } + + unsigned num; + int ret = 0; + struct timeval start, end; + double add_time = 0; + + fflush(stdout); + table = odph_cuckoo_table_create( + "performance_test", PERFORMANCE_CAPACITY, key_len, 0); + if (table == NULL) { + printf("cuckoo table creation failed\n"); + return -ENOENT; + } + + /* insert (put) */ + gettimeofday(&start, 0); + for (j = 0; j < elem_num; j++) { + ret = odph_cuckoo_table_put_value( + table, &key_space[j * key_len], NULL); + if (ret < 0) + break; + } + gettimeofday(&end, 0); + num = j; + add_time = get_time_diff(&start, &end); + printf( + "add %u/%u (%.2f) items, time = %.9lfs\n", + num, PERFORMANCE_CAPACITY, + (double)num / PERFORMANCE_CAPACITY, add_time); + + /* search (get) */ + gettimeofday(&start, 0); + for (j = 0; j < num; j++) { + ret = odph_cuckoo_table_get_value( + table, &key_space[j * key_len], NULL, 0); + + if (ret < 0) + printf("lookup error\n"); + } + gettimeofday(&end, 0); + printf( + "lookup %u items, time = %.9lfs\n", + num, get_time_diff(&start, &end)); + + odph_cuckoo_table_destroy(table); + free(key_ptr); + free(key_space); + return ret; +} + +/* + * Do all unit and performance tests. + */ +static int +test_cuckoo_hash_table(void) +{ + if (test_put_remove() < 0) + return -1; + if (test_table_lookup() < 0) + return -1; + if (test_put_update_remove() < 0) + return -1; + if (test_five_keys() < 0) + return -1; + if (test_creation_with_bad_parameters() < 0) + return -1; + if (test_performance(950000) < 0) + return -1; + + return 0; +} + +int main(int argc TEST_UNUSED, char *argv[] TEST_UNUSED) +{ + odp_instance_t instance; + int ret = 0; + + ret = odp_init_global(&instance, NULL, NULL); + if (ret != 0) { + fprintf(stderr, "Error: ODP global init failed.\n"); + exit(EXIT_FAILURE); + } + + ret = odp_init_local(instance, ODP_THREAD_WORKER); + if (ret != 0) { + fprintf(stderr, "Error: ODP local init failed.\n"); + exit(EXIT_FAILURE); + } + + srand(time(0)); + ret = test_cuckoo_hash_table(); + + if (ret < 0) + printf("cuckoo hash table test fail!!\n"); + else + printf("All Tests pass!!\n"); + + if (odp_term_local()) { + fprintf(stderr, "Error: ODP local term failed.\n"); + exit(EXIT_FAILURE); + } + + if (odp_term_global(instance)) { + fprintf(stderr, "Error: ODP global term failed.\n"); + exit(EXIT_FAILURE); + } + + return ret; +}
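The validation tests above use a 5-tuple struct flow_key and mostly a zero-length value. The sketch below is not part of the patch; the flow contents and table size are invented. It shows the same API with a value attached, for example a per-flow packet counter, relying only on behaviour the tests check: a negative return on miss or failure, a non-negative return on success, and a put on an existing key updating the stored value, as test_put_update_remove() verifies.

#include <stdint.h>
#include <../odph_cuckootable.h>	/* same include style as this test program */

/* assumes a struct flow_key like the one defined in cuckootable.c above */
static void count_packet(odph_table_t flows, struct flow_key *fk)
{
	uint64_t cnt = 0;

	/* a miss returns a negative value and leaves cnt at 0 */
	if (odph_cuckoo_table_get_value(flows, fk, &cnt, sizeof(cnt)) < 0)
		cnt = 0;

	cnt++;
	/* put on an existing key overwrites the stored counter */
	odph_cuckoo_table_put_value(flows, fk, &cnt);
}

The table itself would be created once up front, e.g. odph_cuckoo_table_create("flow_stats", 1024, sizeof(struct flow_key), sizeof(uint64_t)), with the name and capacity chosen by the application.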
commit b2d4275c2df6a7fcd9796fafed74b59292cee26e Author: Ru Jia <jiaru@ict.ac.cn> Date: Mon Jun 13 14:42:13 2016 +0800
helper: table: add impl of cuckoo hash table
Signed-off-by: Ru Jia <jiaru@ict.ac.cn> Reviewed-and-tested-by: Bill Fischofer <bill.fischofer@linaro.org> Signed-off-by: Maxim Uvarov <maxim.uvarov@linaro.org>
diff --git a/helper/Makefile.am b/helper/Makefile.am index d09d900..f7dd324 100644 --- a/helper/Makefile.am +++ b/helper/Makefile.am @@ -29,6 +29,7 @@ noinst_HEADERS = \ $(srcdir)/odph_debug.h \ $(srcdir)/odph_hashtable.h \ $(srcdir)/odph_lineartable.h \ + $(srcdir)/odph_cuckootable.h \ $(srcdir)/odph_list_internal.h
__LIB__libodphelper_linux_la_SOURCES = \ @@ -37,6 +38,7 @@ __LIB__libodphelper_linux_la_SOURCES = \ chksum.c \ linux.c \ hashtable.c \ - lineartable.c + lineartable.c \ + cuckootable.c
lib_LTLIBRARIES = $(LIB)/libodphelper-linux.la diff --git a/helper/cuckootable.c b/helper/cuckootable.c new file mode 100644 index 0000000..91a73b4 --- /dev/null +++ b/helper/cuckootable.c @@ -0,0 +1,743 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2016 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include <string.h> +#include <stdint.h> +#include <errno.h> +#include <stdio.h> + +#include "odph_cuckootable.h" +#include "odph_debug.h" +#include <odp_api.h> + +/* More efficient access to a map of single ullong */ +#define ULLONG_FOR_EACH_1(IDX, MAP) \ + for (; MAP && (((IDX) = __builtin_ctzll(MAP)), true); \ + MAP = (MAP & (MAP - 1))) + +/** @magic word, write to the first byte of the memory block + * to indicate this block is used by a cuckoo hash table + */ +#define ODPH_CUCKOO_TABLE_MAGIC_WORD 0xDFDFFDFD + +/** Number of items per bucket. */ +#define HASH_BUCKET_ENTRIES 4 + +#define NULL_SIGNATURE 0 +#define KEY_ALIGNMENT 16 + +/** Maximum size of hash table that can be created. */ +#define HASH_ENTRIES_MAX 1048576 + +/** @internal signature struct + * Structure storing both primary and secondary hashes + */ +struct cuckoo_table_signatures { + union { + struct { + uint32_t current; + uint32_t alt; + }; + uint64_t sig; + }; +}; + +/** @internal kay-value struct + * Structure that stores key-value pair + */ +struct cuckoo_table_key_value { + uint8_t *key; + uint8_t *value; +}; + +/** @internal bucket structure + * Put the elements with defferent keys but a same signature + * into a bucket, and each bucket has at most HASH_BUCKET_ENTRIES + * elements. 
+ */ +struct cuckoo_table_bucket { + struct cuckoo_table_signatures signatures[HASH_BUCKET_ENTRIES]; + /* Includes dummy key index that always contains index 0 */ + odp_buffer_t key_buf[HASH_BUCKET_ENTRIES + 1]; + uint8_t flag[HASH_BUCKET_ENTRIES]; +} ODP_ALIGNED_CACHE; + +/* More efficient access to a map of single ullong */ +#define ULLONG_FOR_EACH_1(IDX, MAP) \ + for (; MAP && (((IDX) = __builtin_ctzll(MAP)), true); \ + MAP = (MAP & (MAP - 1))) + +/** A hash table structure. */ +typedef struct { + /**< for check */ + uint32_t magicword; + /**< Name of the hash. */ + char name[ODPH_TABLE_NAME_LEN]; + /**< Total table entries. */ + uint32_t entries; + /**< Number of buckets in table. */ + uint32_t num_buckets; + /**< Length of hash key. */ + uint32_t key_len; + /**< Length of value. */ + uint32_t value_len; + /**< Bitmask for getting bucket index from hash signature. */ + uint32_t bucket_bitmask; + /**< Queue that stores all free key-value slots*/ + odp_queue_t free_slots; + /** Table with buckets storing all the hash values and key indexes + to the key table*/ + struct cuckoo_table_bucket *buckets; +} odph_cuckoo_table_impl ODP_ALIGNED_CACHE; + +/** + * Aligns input parameter to the next power of 2 + * + * @param x + * The integer value to algin + * + * @return + * Input parameter aligned to the next power of 2 + */ +static inline uint32_t +align32pow2(uint32_t x) +{ + x--; + x |= x >> 1; + x |= x >> 2; + x |= x >> 4; + x |= x >> 8; + x |= x >> 16; + + return x + 1; +} + +/** + * Returns true if n is a power of 2 + * @param n + * Number to check + * @return 1 if true, 0 otherwise + */ +static inline int +is_power_of_2(uint32_t n) +{ + return n && !(n & (n - 1)); +} + +odph_table_t +odph_cuckoo_table_lookup(const char *name) +{ + odph_cuckoo_table_impl *tbl = NULL; + + if (name == NULL || strlen(name) >= ODPH_TABLE_NAME_LEN) + return NULL; + + tbl = (odph_cuckoo_table_impl *)odp_shm_addr(odp_shm_lookup(name)); + + if ( + tbl != NULL && + tbl->magicword == ODPH_CUCKOO_TABLE_MAGIC_WORD && + strcmp(tbl->name, name) == 0) + return (odph_table_t)tbl; +} + +odph_table_t +odph_cuckoo_table_create( + const char *name, uint32_t capacity, uint32_t key_size, + uint32_t value_size) +{ + odph_cuckoo_table_impl *tbl; + odp_shm_t shm_tbl; + + odp_pool_t pool; + odp_pool_param_t param; + + odp_queue_t queue; + odp_queue_param_t qparam; + + char pool_name[ODPH_TABLE_NAME_LEN + 3], + queue_name[ODPH_TABLE_NAME_LEN + 3]; + unsigned i; + uint32_t impl_size, kv_entry_size, + bucket_num, bucket_size; + + /* Check for valid parameters */ + if ( + (capacity > HASH_ENTRIES_MAX) || + (capacity < HASH_BUCKET_ENTRIES) || + (key_size == 0) || + (strlen(name) == 0)) { + ODPH_DBG("invalid parameters\n"); + return NULL; + } + + /* Guarantee there's no existing */ + tbl = (odph_cuckoo_table_impl *)odph_cuckoo_table_lookup(name); + if (tbl != NULL) { + ODPH_DBG("cuckoo hash table %s already exists\n", name); + return NULL; + } + + /* Calculate the sizes of different parts of cuckoo hash table */ + impl_size = sizeof(odph_cuckoo_table_impl); + kv_entry_size = sizeof(struct cuckoo_table_key_value) + + key_size + value_size; + + bucket_num = align32pow2(capacity) / HASH_BUCKET_ENTRIES; + bucket_size = bucket_num * sizeof(struct cuckoo_table_bucket); + + shm_tbl = odp_shm_reserve( + name, impl_size + bucket_size, + ODP_CACHE_LINE_SIZE, ODP_SHM_SW_ONLY); + + if (shm_tbl == ODP_SHM_INVALID) { + ODPH_DBG( + "shm allocation failed for odph_cuckoo_table_impl %s\n", + name); + return NULL; + } + + tbl = (odph_cuckoo_table_impl 
*)odp_shm_addr(shm_tbl); + memset(tbl, 0, impl_size + bucket_size); + + /* header of this mem block is the table impl struct, + * then the bucket pool. + */ + tbl->buckets = (struct cuckoo_table_bucket *)( + (char *)tbl + impl_size); + + /* initialize key-value buffer pool */ + snprintf(pool_name, sizeof(pool_name), "kv_%s", name); + pool = odp_pool_lookup(pool_name); + + if (pool != ODP_POOL_INVALID) + odp_pool_destroy(pool); + + param.type = ODP_POOL_BUFFER; + param.buf.size = kv_entry_size; + param.buf.align = ODP_CACHE_LINE_SIZE; + param.buf.num = capacity; + + pool = odp_pool_create(pool_name, ¶m); + + if (pool == ODP_POOL_INVALID) { + ODPH_DBG("failed to create key-value pool\n"); + odp_shm_free(shm_tbl); + return NULL; + } + + /* initialize free_slots queue */ + odp_queue_param_init(&qparam); + qparam.type = ODP_QUEUE_TYPE_PLAIN; + + snprintf(queue_name, sizeof(queue_name), "fs_%s", name); + queue = odp_queue_create(queue_name, &qparam); + if (queue == ODP_QUEUE_INVALID) { + ODPH_DBG("failed to create free_slots queue\n"); + odp_pool_destroy(pool); + odp_shm_free(shm_tbl); + return NULL; + } + + /* Setup hash context */ + snprintf(tbl->name, sizeof(tbl->name), "%s", name); + tbl->magicword = ODPH_CUCKOO_TABLE_MAGIC_WORD; + tbl->entries = capacity; + tbl->key_len = key_size; + tbl->value_len = value_size; + tbl->num_buckets = bucket_num; + tbl->bucket_bitmask = bucket_num - 1; + tbl->free_slots = queue; + + /* generate all free buffers, and put into queue */ + for (i = 0; i < capacity; i++) { + odp_event_t ev = odp_buffer_to_event( + odp_buffer_alloc(pool)); + if (ev == ODP_EVENT_INVALID) { + ODPH_DBG("failed to generate free slots\n"); + odph_cuckoo_table_destroy((odph_table_t)tbl); + return NULL; + } + + if (odp_queue_enq(queue, ev) < 0) { + ODPH_DBG("failed to enqueue free slots\n"); + odph_cuckoo_table_destroy((odph_table_t)tbl); + return NULL; + } + } + + return (odph_table_t)tbl; +} + +int +odph_cuckoo_table_destroy(odph_table_t tbl) +{ + int ret, i, j; + odph_cuckoo_table_impl *impl = NULL; + char pool_name[ODPH_TABLE_NAME_LEN + 3]; + + if (tbl == NULL) + return -1; + + impl = (odph_cuckoo_table_impl *)tbl; + + /* check magic word */ + if (impl->magicword != ODPH_CUCKOO_TABLE_MAGIC_WORD) { + ODPH_DBG("wrong magicword for cuckoo table\n"); + return -1; + } + + /* free all used buffers*/ + for (i = 0; i < impl->num_buckets; i++) { + for (j = 0; j < HASH_BUCKET_ENTRIES; j++) { + if (impl->buckets[i].signatures[j].current + != NULL_SIGNATURE) + odp_buffer_free(impl->buckets[i].key_buf[j]); + } + } + + /* free all free buffers */ + odp_event_t ev; + + while ((ev = odp_queue_deq(impl->free_slots)) + != ODP_EVENT_INVALID) { + odp_buffer_free(odp_buffer_from_event(ev)); + } + + /* destroy free_slots queue */ + ret = odp_queue_destroy(impl->free_slots); + if (ret < 0) + ODPH_DBG("failed to destroy free_slots queue\n"); + + /* destroy key-value pool */ + snprintf(pool_name, sizeof(pool_name), "kv_%s", impl->name); + ret = odp_pool_destroy(odp_pool_lookup(pool_name)); + if (ret != 0) { + ODPH_DBG("failed to destroy key-value buffer pool\n"); + return ret; + } + + /* free impl */ + odp_shm_free(odp_shm_lookup(impl->name)); +} + +static uint32_t hash(const odph_cuckoo_table_impl *h, const void *key) +{ + /* calc hash result by key */ + return odp_hash_crc32c(key, h->key_len, 0); +} + +/* Calc the secondary hash value from the primary hash value of a given key */ +static inline uint32_t +hash_secondary(const uint32_t primary_hash) +{ + static const unsigned all_bits_shift = 12; + static 
const unsigned alt_bits_xor = 0x5bd1e995; + + uint32_t tag = primary_hash >> all_bits_shift; + + return (primary_hash ^ ((tag + 1) * alt_bits_xor)); +} + +/* Search for an entry that can be pushed to its alternative location */ +static inline int +make_space_bucket( + const odph_cuckoo_table_impl *impl, + struct cuckoo_table_bucket *bkt) +{ + unsigned i, j; + int ret; + uint32_t next_bucket_idx; + struct cuckoo_table_bucket *next_bkt[HASH_BUCKET_ENTRIES]; + + /* + * Push existing item (search for bucket with space in + * alternative locations) to its alternative location + */ + for (i = 0; i < HASH_BUCKET_ENTRIES; i++) { + /* Search for space in alternative locations */ + next_bucket_idx = bkt->signatures[i].alt & impl->bucket_bitmask; + next_bkt[i] = &impl->buckets[next_bucket_idx]; + for (j = 0; j < HASH_BUCKET_ENTRIES; j++) { + if (next_bkt[i]->signatures[j].sig == NULL_SIGNATURE) + break; + } + + if (j != HASH_BUCKET_ENTRIES) + break; + } + + /* Alternative location has spare room (end of recursive function) */ + if (i != HASH_BUCKET_ENTRIES) { + next_bkt[i]->signatures[j].alt = bkt->signatures[i].current; + next_bkt[i]->signatures[j].current = bkt->signatures[i].alt; + next_bkt[i]->key_buf[j] = bkt->key_buf[i]; + return i; + } + + /* Pick entry that has not been pushed yet */ + for (i = 0; i < HASH_BUCKET_ENTRIES; i++) + if (bkt->flag[i] == 0) + break; + + /* All entries have been pushed, so entry cannot be added */ + if (i == HASH_BUCKET_ENTRIES) + return -ENOSPC; + + /* Set flag to indicate that this entry is going to be pushed */ + bkt->flag[i] = 1; + /* Need room in alternative bucket to insert the pushed entry */ + ret = make_space_bucket(impl, next_bkt[i]); + /* + * After recursive function. + * Clear flags and insert the pushed entry + * in its alternative location if successful, + * or return error + */ + bkt->flag[i] = 0; + if (ret >= 0) { + next_bkt[i]->signatures[ret].alt = bkt->signatures[i].current; + next_bkt[i]->signatures[ret].current = bkt->signatures[i].alt; + next_bkt[i]->key_buf[ret] = bkt->key_buf[i]; + return i; + } + + return ret; +} + +static inline int32_t +cuckoo_table_add_key_with_hash( + const odph_cuckoo_table_impl *h, const void *key, + uint32_t sig, void *data) +{ + uint32_t alt_hash; + uint32_t prim_bucket_idx, sec_bucket_idx; + unsigned i; + struct cuckoo_table_bucket *prim_bkt, *sec_bkt; + struct cuckoo_table_key_value *new_kv, *kv; + + odp_buffer_t new_buf; + int ret; + + prim_bucket_idx = sig & h->bucket_bitmask; + prim_bkt = &h->buckets[prim_bucket_idx]; + __builtin_prefetch((const void *)(uintptr_t)prim_bkt, 0, 3); + + alt_hash = hash_secondary(sig); + sec_bucket_idx = alt_hash & h->bucket_bitmask; + sec_bkt = &h->buckets[sec_bucket_idx]; + __builtin_prefetch((const void *)(uintptr_t)sec_bkt, 0, 3); + + /* Get a new slot for storing the new key */ + new_buf = odp_buffer_from_event(odp_queue_deq(h->free_slots)); + if (new_buf == ODP_BUFFER_INVALID) + return -ENOSPC; + + /* Check if key is already inserted in primary location */ + for (i = 0; i < HASH_BUCKET_ENTRIES; i++) { + if ( + prim_bkt->signatures[i].current == sig && + prim_bkt->signatures[i].alt == alt_hash) { + kv = (struct cuckoo_table_key_value *)odp_buffer_addr( + prim_bkt->key_buf[i]); + if (memcmp(key, kv->key, h->key_len) == 0) { + odp_queue_enq( + h->free_slots, + odp_buffer_to_event(new_buf)); + /* Update data */ + if (kv->value != NULL) + memcpy(kv->value, data, h->value_len); + + /* Return bucket index */ + return prim_bucket_idx; + } + } + } + + /* Check if key is already 
inserted in secondary location */ + for (i = 0; i < HASH_BUCKET_ENTRIES; i++) { + if ( + sec_bkt->signatures[i].alt == sig && + sec_bkt->signatures[i].current == alt_hash) { + kv = (struct cuckoo_table_key_value *)odp_buffer_addr( + sec_bkt->key_buf[i]); + if (memcmp(key, kv->key, h->key_len) == 0) { + odp_queue_enq( + h->free_slots, + odp_buffer_to_event(new_buf)); + /* Update data */ + if (kv->value != NULL) + memcpy(kv->value, data, h->value_len); + + /* Return bucket index */ + return sec_bucket_idx; + } + } + } + + new_kv = (struct cuckoo_table_key_value *)odp_buffer_addr(new_buf); + __builtin_prefetch((const void *)(uintptr_t)new_kv, 0, 3); + + /* Copy key and value. + * key-value mem block : struct cuckoo_table_key_value + * + key (key_len) + value (value_len) + */ + new_kv->key = (uint8_t *)new_kv + + sizeof(struct cuckoo_table_key_value); + memcpy(new_kv->key, key, h->key_len); + + if (h->value_len > 0) { + new_kv->value = new_kv->key + h->key_len; + memcpy(new_kv->value, data, h->value_len); + } else { + new_kv->value = NULL; + } + + /* Insert new entry is there is room in the primary bucket */ + for (i = 0; i < HASH_BUCKET_ENTRIES; i++) { + /* Check if slot is available */ + if (odp_likely(prim_bkt->signatures[i].sig == NULL_SIGNATURE)) { + prim_bkt->signatures[i].current = sig; + prim_bkt->signatures[i].alt = alt_hash; + prim_bkt->key_buf[i] = new_buf; + return prim_bucket_idx; + } + } + + /* Primary bucket is full, so we need to make space for new entry */ + ret = make_space_bucket(h, prim_bkt); + + /* + * After recursive function. + * Insert the new entry in the position of the pushed entry + * if successful or return error and + * store the new slot back in the pool + */ + if (ret >= 0) { + prim_bkt->signatures[ret].current = sig; + prim_bkt->signatures[ret].alt = alt_hash; + prim_bkt->key_buf[ret] = new_buf; + return prim_bucket_idx; + } + + /* Error in addition, store new slot back in the free_slots */ + odp_queue_enq(h->free_slots, odp_buffer_to_event(new_buf)); + return ret; +} + +int +odph_cuckoo_table_put_value(odph_table_t tbl, void *key, void *value) +{ + if ((tbl == NULL) || (key == NULL)) + return -EINVAL; + + odph_cuckoo_table_impl *impl = (odph_cuckoo_table_impl *)tbl; + int ret = cuckoo_table_add_key_with_hash( + impl, key, hash(impl, key), value); + + if (ret < 0) + return -1; + + return 0; +} + +static inline int32_t +cuckoo_table_lookup_with_hash( + const odph_cuckoo_table_impl *h, const void *key, + uint32_t sig, void **data_ptr) +{ + uint32_t bucket_idx; + uint32_t alt_hash; + unsigned i; + struct cuckoo_table_bucket *bkt; + struct cuckoo_table_key_value *kv; + + bucket_idx = sig & h->bucket_bitmask; + bkt = &h->buckets[bucket_idx]; + + /* Check if key is in primary location */ + for (i = 0; i < HASH_BUCKET_ENTRIES; i++) { + if ( + bkt->signatures[i].current == sig && + bkt->signatures[i].sig != NULL_SIGNATURE) { + kv = (struct cuckoo_table_key_value *)odp_buffer_addr( + bkt->key_buf[i]); + if (memcmp(key, kv->key, h->key_len) == 0) { + if (data_ptr != NULL) + *data_ptr = kv->value; + /* + * Return index where key is stored, + * subtracting the first dummy index + */ + return bucket_idx; + } + } + } + + /* Calculate secondary hash */ + alt_hash = hash_secondary(sig); + bucket_idx = alt_hash & h->bucket_bitmask; + bkt = &h->buckets[bucket_idx]; + + /* Check if key is in secondary location */ + for (i = 0; i < HASH_BUCKET_ENTRIES; i++) { + if ( + bkt->signatures[i].current == alt_hash && + bkt->signatures[i].alt == sig) { + kv = (struct cuckoo_table_key_value 
*)odp_buffer_addr( + bkt->key_buf[i]); + if (memcmp(key, kv->key, h->key_len) == 0) { + if (data_ptr != NULL) + *data_ptr = kv->value; + /* + * Return index where key is stored, + * subtracting the first dummy index + */ + return bucket_idx; + } + } + } + + return -ENOENT; +} + +int +odph_cuckoo_table_get_value( + odph_table_t tbl, void *key, void *buffer, uint32_t buffer_size) +{ + if ((tbl == NULL) || (key == NULL)) + return -EINVAL; + + odph_cuckoo_table_impl *impl = (odph_cuckoo_table_impl *)tbl; + void *tmp = NULL; + int ret; + + ret = cuckoo_table_lookup_with_hash(impl, key, hash(impl, key), &tmp); + + if (ret < 0) + return -1; + + if (impl->value_len > 0) + memcpy(buffer, tmp, impl->value_len); + + return 0; +} + +static inline int32_t +cuckoo_table_del_key_with_hash( + const odph_cuckoo_table_impl *h, + const void *key, uint32_t sig) +{ + uint32_t bucket_idx; + uint32_t alt_hash; + unsigned i; + struct cuckoo_table_bucket *bkt; + struct cuckoo_table_key_value *kv; + + bucket_idx = sig & h->bucket_bitmask; + bkt = &h->buckets[bucket_idx]; + + /* Check if key is in primary location */ + for (i = 0; i < HASH_BUCKET_ENTRIES; i++) { + if ( + bkt->signatures[i].current == sig && + bkt->signatures[i].sig != NULL_SIGNATURE) { + kv = (struct cuckoo_table_key_value *)odp_buffer_addr( + bkt->key_buf[i]); + if (memcmp(key, kv->key, h->key_len) == 0) { + bkt->signatures[i].sig = NULL_SIGNATURE; + odp_queue_enq( + h->free_slots, + odp_buffer_to_event( + bkt->key_buf[i])); + return bucket_idx; + } + } + } + + /* Calculate secondary hash */ + alt_hash = hash_secondary(sig); + bucket_idx = alt_hash & h->bucket_bitmask; + bkt = &h->buckets[bucket_idx]; + + /* Check if key is in secondary location */ + for (i = 0; i < HASH_BUCKET_ENTRIES; i++) { + if ( + bkt->signatures[i].current == alt_hash && + bkt->signatures[i].sig != NULL_SIGNATURE) { + kv = (struct cuckoo_table_key_value *)odp_buffer_addr( + bkt->key_buf[i]); + if (memcmp(key, kv->key, h->key_len) == 0) { + bkt->signatures[i].sig = NULL_SIGNATURE; + odp_queue_enq( + h->free_slots, + odp_buffer_to_event( + bkt->key_buf[i])); + return bucket_idx; + } + } + } + + return -ENOENT; +} + +int +odph_cuckoo_table_remove_value(odph_table_t tbl, void *key) +{ + if ((tbl == NULL) || (key == NULL)) + return -EINVAL; + + odph_cuckoo_table_impl *impl = (odph_cuckoo_table_impl *)tbl; + int ret = cuckoo_table_del_key_with_hash( + impl, key, hash(impl, key)); + + if (ret < 0) + return -1; + + return 0; +} + +odph_table_ops_t odph_cuckoo_table_ops = { + odph_cuckoo_table_create, + odph_cuckoo_table_lookup, + odph_cuckoo_table_destroy, + odph_cuckoo_table_put_value, + odph_cuckoo_table_get_value, + odph_cuckoo_table_remove_value +}; diff --git a/helper/odph_cuckootable.h b/helper/odph_cuckootable.h new file mode 100644 index 0000000..d569980 --- /dev/null +++ b/helper/odph_cuckootable.h @@ -0,0 +1,82 @@ +/* Copyright (c) 2016, Linaro Limited + * All rights reserved. + * + * SPDX-License-Identifier: BSD-3-Clause + */ + +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2016 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. 
+ * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef ODPH_CUCKOO_TABLE_H_ +#define ODPH_CUCKOO_TABLE_H_ + +#include <odp/helper/table.h> + +/** + * @file + * + * ODP Cuckoo Hash Table + */ + +#ifdef __cplusplus +extern "C" { +#endif + +odph_table_t odph_cuckoo_table_create( + const char *name, + uint32_t capacity, + uint32_t key_size, + uint32_t value_size); + +odph_table_t odph_cuckoo_table_lookup(const char *name); + +int odph_cuckoo_table_destroy(odph_table_t table); + +int odph_cuckoo_table_put_value( + odph_table_t table, + void *key, void *value); + +int odph_cuckoo_table_get_value( + odph_table_t table, + void *key, void *buffer, + uint32_t buffer_size); + +int odph_cuckoo_table_remove_value(odph_table_t table, void *key); + +extern odph_table_ops_t odph_cuckoo_table_ops; + +#ifdef __cplusplus +} +#endif + +#endif /* ODPH_CUCKOO_TABLE_H_ */
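For reference, every key in the implementation above has exactly two candidate buckets: the primary index is derived from odp_hash_crc32c() (see hash()) and the alternative index from hash_secondary(), both masked with bucket_bitmask, which works because align32pow2() rounds the bucket count up to a power of two. The sketch below is not part of the patch; it merely restates that mapping in isolation.

#include <stdint.h>
#include <odp_api.h>

/* Two candidate bucket indexes for a key, mirroring hash() and
 * hash_secondary() above; num_buckets must be a power of two. */
static void candidate_buckets(const void *key, uint32_t key_len,
			      uint32_t num_buckets,
			      uint32_t *prim, uint32_t *alt)
{
	uint32_t bitmask = num_buckets - 1;
	uint32_t sig = odp_hash_crc32c(key, key_len, 0);	/* primary signature */
	uint32_t tag = sig >> 12;				/* all_bits_shift */
	uint32_t alt_sig = sig ^ ((tag + 1) * 0x5bd1e995);	/* alt_bits_xor */

	*prim = sig & bitmask;
	*alt = alt_sig & bitmask;
}

When all HASH_BUCKET_ENTRIES slots of the primary bucket are occupied, make_space_bucket() pushes one resident entry to its alternative bucket, recursively if needed; that displacement step is what gives cuckoo hashing its name.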
-----------------------------------------------------------------------
Summary of changes:
 .travis.yml | 6 +-
 example/generator/odp_generator.c | 2 +
 example/ipsec/odp_ipsec.c | 26 ++
 example/l3fwd/odp_l3fwd.c | 17 ++
 example/switch/odp_switch.c | 5 +
 helper/include/odp/helper/ip.h | 11 +-
 helper/linux.c | 4 -
 scripts/build-pktio-dpdk | 4 +-
 test/common_plat/performance/odp_l2fwd.c | 302 +++++++++++----------
 test/common_plat/performance/odp_pktio_perf.c | 8 +
 test/common_plat/validation/api/atomic/atomic.c | 24 ++
 test/common_plat/validation/api/atomic/atomic.h | 1 +
 test/common_plat/validation/api/barrier/barrier.c | 24 ++
 test/common_plat/validation/api/barrier/barrier.h | 1 +
 test/common_plat/validation/api/lock/lock.c | 24 ++
 test/common_plat/validation/api/lock/lock.h | 1 +
 test/common_plat/validation/api/pktio/pktio.c | 9 +-
 .../validation/api/scheduler/scheduler.c | 9 +
 .../validation/api/traffic_mngr/traffic_mngr.c | 4 +-
 19 files changed, 327 insertions(+), 155 deletions(-)