Hi all, just a heads-up:
Arm NN is in the process of moving from the 'master' branch to a 'main' branch. By the end of the 22.08 release we will have moved completely to 'main'; the 'main' branch will be created along with the 22.08 release branch. This should give other teams and our partners enough time to switch to 'main' with minimal disruption. The current plan is to freeze the 'master' branch by 15th August.
Note: master branch won't get deleted.
Thanks,
Nikhil.
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
Arm NN has removed support for the TensorFlow parser from its master branch. TensorFlow will not be supported from the 21.05 release onwards. We are currently in the process of updating the documentation.
Yours sincerely,
The Arm NN Team
Arm NN has removed support for the Caffe parser from its master branch, as Caffe is no longer as widely used a machine learning framework as it once was. Caffe will not be supported from the 21.05 release onwards. The documentation in master will be updated accordingly after we remove support for the other tools (armnnQuantizer and armnnTfParser) as well; this will be done in time for the Arm NN 21.05 release.
Yours Sincerely,
The Arm NN Team
Ubuntu 16.04 LTS is reaching End of Life.
Ubuntu Linux 16.04 LTS will reach end of life on April 30, 2021.
At that time, Ubuntu 16.04 LTS will no longer receive security patches or other software updates.
Consequently, from the 21.08 release at the end of August 2021, Arm NN will no longer be officially
supported on Ubuntu 16.04 LTS; it will instead be supported on Ubuntu 18.04 LTS.
Yours sincerely,
The Arm NN Team
The ArmNN team is pleased to announce the release of ArmNN 21.02.
ArmNN 21.02 Release Notes
Summary
The 21.02 release provides two major pieces of functionality. The first is performance related: the ability to cache compiled OpenCL kernels when running on the GPU backend. Cached kernel files can be loaded into the runtime, eliminating the cost of compiling their associated graphs and delivering a significant performance uplift on the first execution of a newly loaded graph. The second is that the operators which were not added to the Arm NN TensorFlow Lite delegate in the 20.11 release are now present, giving the delegate the same level of operator support as the android-nn-driver.
The other features of the 21.02 release are an update of the TensorFlow Lite parser to work with TensorFlow Lite v2.3.1, and changes to the public APIs to make binary compatibility between releases easier to maintain. Each group of public interfaces (SDK, backend, TfLiteDelegate, etc.) is now separately versioned and will have its version updated independently in subsequent releases to indicate changes in its Application Binary Interface (ABI).
Support has also been added for the SSD-MobileNetv2 and SSD-MobileNetv3 models, which have been verified to execute correctly with good performance. Work to generate accuracy figures for these models using the TensorFlow Lite coco_object_detection tool is ongoing; the figures will be published when complete.
Two configuration options have been added for the CpuAcc backend: one to specify the number of threads to use when executing ML workloads on the CPU, the other to load an MLGO tuning file to increase the performance of GEMM operations on the CPU.
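The kernel-caching feature described above follows a common compile-once pattern: hash the source, reuse a saved binary when one exists, and only pay the compilation cost on a cache miss. A minimal, Arm NN-independent Python sketch of that idea (all function names here are hypothetical, not Arm NN API):

```python
import hashlib
import os

def compile_kernel(source: str) -> bytes:
    """Stand-in for an expensive kernel compilation step (e.g. OpenCL)."""
    return b"BIN:" + source.encode()

def get_kernel(source: str, cache_dir: str) -> bytes:
    """Return the compiled binary, loading it from cache_dir when present."""
    key = hashlib.sha256(source.encode()).hexdigest()
    path = os.path.join(cache_dir, key + ".bin")
    if os.path.exists(path):          # cache hit: skip compilation entirely
        with open(path, "rb") as f:
            return f.read()
    binary = compile_kernel(source)   # cache miss: compile once and save
    with open(path, "wb") as f:
        f.write(binary)
    return binary
```

In Arm NN itself this role is played by the ClContext save/load mechanism exposed through ExecuteNetwork and the android-nn-driver; the sketch only illustrates why the second run of the same graph is cheap.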
ArmNN SDK
New Features:
* Added ability to save and load the ClContext through ExecuteNetwork and the Android-nn-driver.
* This will remove the time taken for initial compilation of OpenCL kernels and speed up the first execution.
* Semantic Versioning for ArmNN APIs
* Arm NN TfLite Delegate (more extensive details in Arm NN TfLite Delegate section)
* Further operator support
* Add capability to build on Android
* Verification of Support of SSD-MobileNetv2 & SSD-MobileNetv3
TfLite Parser:
* Added DEPTH_TO_SPACE operator support
* Added GATHER operator support
* Added SUM operator support
* Added REDUCE_MAX, REDUCE_MIN operator support
Tf Parser:
* Added support for ELU activation
* Support Dilation in Conv2D
ONNX Parser:
* Support Dilation in Conv2D
Caffe Parser:
* Added Dilation support
* Added argmax deconv support
ArmNN Serializer
* Serialise ArmNN Model on android-nn-driver
ExecuteNetwork App Changes:
* Two optimization parameters were added to enable saving and loading of the ClContext.
* --save-cached-network
* --cached-network-filepath
Other changes:
* Make it easier for backends to traverse the subgraph during optimization by sorting Subgraphview layers on construction
* Added CL/NEON implementation of RANK Workload
* Added REDUCE layer for REDUCE_MAX, REDUCE_MIN, REDUCE_SUM operators
* Added REDUCE_MAX, REDUCE_MIN, and REDUCE_SUM operator support to the CpuRef backend
* Added REDUCE_MAX, REDUCE_MIN, and REDUCE_SUM operator support/workloads to the CpuAcc backend
* Added REDUCE_MAX, REDUCE_MIN, and REDUCE_SUM operator support/workloads to the GpuAcc backend
* Added more Fused Activation unit tests
* Handle Neon optionality on 32 bit linux platforms
* Validated MobileNetv2-SSD and MobileNetv3-SSD support (further details in executive summary)
* Add CpuAcc specific configuration option numberOfThreads
* Add GpuAcc MLGO tuning file configuration argument
Bug Fixes:
* Default stride values in depthwise and convolution to 1 instead of 0
* Fixed transpose conv InferOutputShape
* Fix incorrect padding value for asymmetric quantized type
* Fixed build breaks for the armnnDeserializer test and Threads.cpp on macOS
* Further fix for macOS, where filenames are case-insensitive
* Fixed unit test failure on mipsel/s390x/ppc64/powerpc
* Fixed ArmnnQuantizer incorrectly quantizing all DataTypes
* Fixed TFLite parser not parsing TransposeConvolution
* Fix TfLite parser and ExecuteNetwork issues where error was not thrown in some cases
* Fix wav2letter not producing correct output for Neon backend
* Fix ReduceLayer InferOutputShape issue so that the correct axis data is read in TfLiteParser
* Fix Reduce workload to allow input tensors of any rank into the validate function
* Updated JsonPrinterTestImpl to use CpuLogitsDLogSoftmaxKernel_#
* Add missing serializer support for m_DimensionsSpecificity
* Removed unnecessary friend function in INetwork and fixed TransformIterator operator= to allow compilation on further compilers
Known issues:
Deprecation Notification:
The following components have been deprecated and will be removed in the next (21.05) release of ArmNN
* armnnQuantizer
Now that the TensorFlow Lite Converter (https://www.tensorflow.org/lite/convert/) has mature post-training quantization capabilities, the need for this component has gone.
See: https://www.tensorflow.org/model_optimization/guide/quantization/post_train… and https://www.tensorflow.org/lite/performance/post_training_quantization for more details.
* armnnTfParser
As TensorFlow Lite is our current recommended deployment environment for Arm NN, and the TensorFlow Lite Converter provides a path for converting most common machine learning
models into TensorFlow Lite format, the need for a TensorFlow parser has gone.
* armnnCaffeParser
Caffe is no longer as widely used as a framework for machine learning as it once was.
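For context on the post-training quantization that supersedes armnnQuantizer: it maps float32 values to small integers via a scale and zero-point. A framework-independent Python sketch of asymmetric affine quantization to 8-bit (illustrative only; this is not the TFLite Converter's implementation):

```python
def quantize_params(rmin: float, rmax: float, qmin: int = 0, qmax: int = 255):
    """Derive scale and zero-point for asymmetric affine quantization."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # range must include 0.0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    return scale, zero_point

def quantize(x: float, scale: float, zero_point: int,
             qmin: int = 0, qmax: int = 255) -> int:
    q = int(round(x / scale)) + zero_point
    return max(qmin, min(qmax, q))               # clamp to quantized range

def dequantize(q: int, scale: float, zero_point: int) -> float:
    return (q - zero_point) * scale
```

Round-tripping any value in the calibrated range through quantize/dequantize introduces an error of at most about one scale step, which is the trade-off post-training quantization makes for 4x smaller weights.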
TfLite Delegate
New Features:
* Enabled ELU Activation
* Enabled HARD_SWISH Activation
* Added GATHER operator support
* Added Logical AND, NOT and OR operator support.
* Added PAD operator support
* Added PADV2 operator support
* Added SPLIT operator support
* Added SPLIT_V operator support
* Added ARG_MAX operator support
* Added ARG_MIN operator support
* Added LOCAL_RESPONSE_NORMALIZATION operator support
* Added L2_NORMALIZATION operator support
* Added BATCH_TO_SPACE_ND operator support
* Added SPACE_TO_BATCH_ND operator support
* Added DEPTH_TO_SPACE operator support
* Added SPACE_TO_DEPTH operator support
* Added SUM operator support
* Added REDUCE_MAX, REDUCE_MIN operator support
* Added FLOOR operator support
* Added OptimizerOptions
* Reduce Float32 to Float16
* Reduce Float32 to BFloat16
* Enable debug data
* Enable memory import
* Added STRIDED_SLICE operator support
* Added LSTM operator support
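On the "Reduce Float32 to Float16" OptimizerOption above: halving the storage per element keeps only a 10-bit mantissa, so values retain roughly three significant decimal digits. A stdlib-only Python sketch of that round-trip effect, using the struct module's IEEE 754 half-precision format (illustrative only, not delegate code):

```python
import struct

def to_fp16(value: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack("e", struct.pack("e", value))[0]

# fp16 keeps roughly 3 decimal digits; the rest is rounded away.
original = 0.1234567
reduced = to_fp16(original)
error = abs(original - reduced)
```

This is why the option is an opt-in trade: most vision networks tolerate the precision loss well, and FP16 halves memory traffic on the GPU backend.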
Other Changes:
* Provided Android build
* Removed Tensorflow requirement
Bug Fixes:
* Fixed fused activation in Fully Connected layer
* Fixed TfLiteDelegate Reshape operator failure when running models with 2D shape tensor.
Known Issues:
Android NNAPI driver
Deprecated features:
New Features:
* If "-request-inputs-and-outputs-dump-dir" is enabled, the network graph will be serialized to a ".armnn" file in the given directory
* Added ability to save and load the ClContext through Android-nn-driver.
* Two optimization parameters were added:
* "q,cached-network-file": If non-empty, the given file will be used to load/save the cached network. If the save-cached-network option is given, the cached network will be saved to the given file; otherwise the cached network will be loaded from the given file.
* "s,save-cached-network": Enables saving the cached network to the file given with the cached-network-file option.
Other Changes:
* Provide LayerSupportHandle to frontend users
* Update setup and Android.bp files to build v8.2a driver
* Add CpuAcc specific configuration option numberOfThreads
* Add GpuAcc MLGO tuning file configuration argument
Build Dependencies
* Git 2.17.1 or later
* SCons 2.4.1 (Ubuntu) and 2.5.1 (Debian)
* CMake 3.5.1 (Ubuntu) and 3.7.2 (Debian)
* Acl branches/arm_compute_21_02
* android-nn-driver branches/android-nn-driver_21_02
* npu backend: boost 1.64
* Tensorflow 2.3.1
* Caffe tag 1.0
* Onnx 1.6.0
* Flatbuffer 1.12.0
* Protobuf 3.12.0
* Eigen3 3.3
* Android 10 & 11
* Mali Driver r26p0_01eac0
* Android NDK r20b
* mapbox/variant 1.2.0
To whom it may concern
The following components have been deprecated and will be removed in the next (21.05) release of ArmNN:
* armnnQuantizer
Now that the TensorFlow Lite Converter (https://www.tensorflow.org/lite/convert/) has mature post-training quantization capabilities, the need for this component has gone. See: https://www.tensorflow.org/model_optimization/guide/quantization/post_train… and https://www.tensorflow.org/lite/performance/post_training_quantization for more details.
* armnnTfParser
As TensorFlow Lite is our current recommended deployment environment for ArmNN, and the TensorFlow Lite Converter provides a path for converting most common machine learning models into TensorFlow Lite format, the need for a TensorFlow parser has gone.
* armnnCaffeParser
Caffe is no longer as widely used a framework for machine learning as it once was.
Hi!
For PyArmNN (currently under development and planned for 20.05), we decided to move all the unit test resources, such as json and npy files and models (onnx, tf, tflite, caffe), to http://snapshots.linaro.org/ so that they are not stored in the git repository but are still publicly available.
A similar issue exists for the binaries in "tests", e.g. TfLiteMobilenetQuantized-Armnn. Yes, most of the models are available publicly, but some are either harder to find or not available at all. Would it be viable to have all the resources required to run the tests on either http://snapshots.linaro.org/ or https://releases.linaro.org/ (and provide a download script, or add it to the README)?
Thanks!
Pavel
Hi all,
I would like to test the community's appetite for deprecating the TensorFlow and Caffe parsers. This would free up some development and test resources to focus on potentially more relevant features. The .armnn, .tflite and .onnx formats would continue to be supported and actively developed as viable alternative routes into Arm NN.
I would be interested to know whether you think this move would have a significant negative effect on any known or existing workflows with Arm NN. Any thoughts or comments are welcome.
Thanks,
Derek
[Resending due to mail bounce]
Hi Pavel,
Thanks for your email, I've modified this response to more clearly indicate what we would like from a design pov and to more clearly indicate that much of this work is open for community contributions.
It should be fine to move PyArmNN into armnn master with a few small modifications. Ideally, I'd rather not have any generated files checked-in to the repository, however any scripts to execute the generation command can be checked-in. In order to remove the hard dependency on Tox (I think we are in agreement on this), it would be best to move the generation commands into a separate stand-alone (bash or other?) script or scripts which the Tox script then calls directly, and make the generation an optional build step in the cmake (which can also then call these same generation scripts). Users and contributors can then regenerate the bindings as needed without requiring Tox. Tox can therefore remain as a purely optional convenience for testing against multiple python versions (which we will use in our internal CI). So to answer one of your questions, at least initially, we would like PyArmNN to be introduced purely as source which can then be built for the target machine. So the work required would be as follows:
1. Refactor a) the swig code generation commands and b) the subsequent python source packaging commands out of Tox into stand-alone scripts which Tox then calls.
2. Add an optional build step in CMake to generate PyArmNN source files and python source package.
Does that sound reasonable?
One of my other main concerns in the short-term, is that the PyArmNN interface is kept up-to-date and working with the changing code base. We don't currently have tests in place to ensure the PyArmNN interface is a) stable and b) kept up-to-date as new interfaces are added to ArmNN. If possible, I would like to see these tests run on every check-in via the Linaro build-robot. We have not planned or scheduled any effort towards this so if you can pick this up, that would be most valuable.
Making PyArmNN available via the package managers is an aspiration we would like to work towards, but there are some things we'd like to achieve first on that path. Mainly, ArmNN (and the corresponding PyArmNN) are currently strongly bound to a particular release version of the API. I would like to make stronger API and ABI guarantees than we currently do for the ArmNN frontend. This would probably require a stable C-like interface, similar in fact to what PyArmNN is doing now. That would enable us to introduce semantic versioning, which would let ArmNN work better as a system library, possibly even distributed as a Debian package (we are working on the Debian packaging at the moment). I'd also like to apply the same paradigm to the backend interface, but this is a much larger and scarier endeavour and I'm not satisfied that the backend API is mature or stable enough for that just yet. We haven't scheduled this, so it is useful work that anyone can pick up. At the very least, I'd like to start an open dialogue to find out a) how important this is for the community, and b) how we can get to that point.
Making PyArmNN available as WHL, as I understand it, requires prebuilt binaries for all the potential targets, which introduces a host of headaches which we would like to avoid at the moment but this might become easier if Arm NN is required as a system library with a stable ABI. I'm keen to get other opinions on this though.
@Alexander can correct me if I'm wrong on any of this, but I believe that with the source package you can use "pip install" already; you just have to point it at the generated source package rather than using the package manager. Given the limitations with ABI stability, I think publishing it on pypi.org would introduce a constant ongoing maintenance cost, which we would rather solve by getting some of the fundamentals (i.e. weaker version dependencies) sorted first.
Also, on the documentation, we plan to make them available via the github pages. I think Alexander and his team have done an awesome job on the PyArmNN docs.
If you have more questions or suggestions, or if any of this doesn't work for you, let us know.
Regards,
Derek
From: Matthew Bentham
Sent: 12 February 2020 13:38
To: Pavel Macenauer; armnn-dev(a)lists.linaro.org; Georgios Pinitas; Derek Lamberti; Alexander Efremov
Subject: Re: pyarmnn integration
+George, Derek, Alexander,
Please can you guys help Pavel? And keep the list on cc for visibility please.
Many thanks,
Matthew
________________________________
From: Armnn-dev <armnn-dev-bounces(a)lists.linaro.org> on behalf of Pavel Macenauer <pavel.macenauer(a)nxp.com>
Sent: 10 February 2020 21:00
To: armnn-dev(a)lists.linaro.org <armnn-dev(a)lists.linaro.org>
Subject: [Armnn-dev] pyarmnn integration
Hi!
There is a branch, experimental/pyarmnn, created by Matthew Bentham, which contains python wrappers for armnn and initially seems to work pretty well: building a whl archive works, the archive can be installed using pip, and I was able to write an example which runs inference on a float/quantized model using all the supported frameworks (tf, tf-lite, caffe and onnx). What is missing is to get the python wrappers integrated, run and check the unit tests, and write a few examples. We have already discussed this with Matthew, but I would be glad to hear more opinions on how to proceed and to kick off a discussion.
1. How to integrate pyarmnn?
There are 2 paths initially:
a) Build pyarmnn together with armnn using a single cmake command
* By default it would be turned off; otherwise it would be built using e.g. -DBUILD_PYARMNN
* The product is either a whl or a src package, so should there be 2 options, e.g. -DBUILD_PYARMNN_SRC and -DBUILD_PYARMNN_WHL, or only a single one which would always build both?
b) Separate pyarmnn from armnn into a different repository (and keep it as a separate project)
* In addition to the options in a), -DARMNN_LIB and -DARMNN_INCLUDE would be required as well, so that it can be "linked" against a configurable armnn build
The difference is mainly in maintainability: a) forces us to maintain pyarmnn and update the swig files to generate wrappers for every release; b), on the other hand, keeps the project separate, allows pyarmnn to be built against a configurable armnn release, and doesn't create a dependency to update the swig files whenever the armnn interface changes slightly.
2. Remove tox? Yes/No. Tox is a python automation library used to generate the wrappers and run the unit tests. It is not really needed: the wrappers can be generated directly using swig, the src/whl packages can be generated using python/setuptools, and tox just creates another dependency. Unit tests can also be run directly using python.
3. Get pyarmnn published on pypi.org? Yes/No. We would then be able to install pyarmnn using "pip install pyarmnn".
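Option a) above could be wired into the build as an optional step. A hypothetical CMake sketch, assuming the swig generation has been refactored into a stand-alone script (the BUILD_PYARMNN flag name and the script path are illustrative, not an agreed design):

```cmake
option(BUILD_PYARMNN "Generate the PyArmNN SWIG bindings and source package" OFF)

if(BUILD_PYARMNN)
    # Call the same stand-alone generation script that Tox would otherwise
    # invoke, so the bindings can be built without a Tox dependency.
    add_custom_target(pyarmnn ALL
        COMMAND ${CMAKE_COMMAND} -E env ARMNN_INCLUDE=${PROJECT_SOURCE_DIR}/include
                ${PROJECT_SOURCE_DIR}/python/pyarmnn/scripts/generate_bindings.sh
        COMMENT "Generating PyArmNN bindings with SWIG")
endif()
```

Keeping the flag OFF by default means the main armnn build is unaffected for users who don't need the python wrappers.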
Any additional ideas, comments, feedback etc. would be of course appreciated.
Thanks!
Pavel M
_______________________________________________
Armnn-dev mailing list
Armnn-dev(a)lists.linaro.org
https://lists.linaro.org/mailman/listinfo/armnn-dev
Hi all,
I'd like to poll the community's interest in having a stable API/ABI with strong semantic versioning guarantees. There are two levels at which this could be applied.
1. Frontend API
2. Backend API
At present, any SW using ArmNN has to be built against a specific release version, and all backends used have to be built for that same specific release version of ArmNN.
Achieving (1) would allow ArmNN libraries to be installed on the system more readily and any SW using ArmNN could target a specific ArmNN API version which could be compatible with multiple release versions of ArmNN.
Achieving (2) would allow backends to be distributed separately and work with a wider array of ArmNN versions.
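For illustration, the compatibility rule that semantic versioning would give both (1) and (2) can be sketched in a few lines of Python (the function names are hypothetical; this is the usual semver convention, not an Arm NN API):

```python
def parse_version(version: str) -> tuple:
    """Split a 'MAJOR.MINOR.PATCH' string into integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def abi_compatible(built_against: str, installed: str) -> bool:
    """True if the 'installed' library can serve a client built against 'built_against'.

    Semantic versioning rule: the major version must match exactly (breaking
    changes bump it), and the installed minor version must be at least the one
    the client was built with (minor releases only add features).
    """
    b_major, b_minor, _ = parse_version(built_against)
    i_major, i_minor, _ = parse_version(installed)
    return i_major == b_major and i_minor >= b_minor
```

Under that rule, software built against one release keeps working with every later release in the same major series, which is exactly what would let Arm NN be installed as a system library.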
Any thoughts or feedback are most welcome.
Thanks,
Derek