I followed the instructions here:
https://developer.arm.com/technologies/machine-learning-on-arm/developer-ma…
Both ArmCL and ArmNN are configured for arm64-v8a and neon.
I ran the UnitTests program which reported:
Running 842 test cases...
*** No errors detected
However, when testing with SimpleSample.cpp, I get this:
Please enter a number:
42
[2019-02-06 17:11:57.624459] [0x0000ffffa612a000] [info] ArmNN v20181100
[2019-02-06 17:11:57.624661] [0x0000ffffa612a000] [warning] ERROR: None of the preferred backends [CpuRef ] are supported. Current platform provides []
Segmentation fault
From looking at the source code, it isn't clear to me what might be wrong.
Any hint would be appreciated.
Nicolas
Hello everybody,
Before we all go into Xmas mode and things start to fizzle out of my
head, here's a quick summary of my observations so far. Any comments
welcome.
To start with, Arm NN does not compile successfully with gcc version 8.2.1.
The first error to be hit is:
/home/nico/armnn/src/armnn/LayerSupport.cpp: In function ‘void armnn::{anonymous}::CopyErrorMessage(char*, const char*, size_t)’:
/home/nico/armnn/src/armnn/LayerSupport.cpp:30:21: error: ‘char* strncpy(char*, const char*, size_t)’ specified bound depends on the length of the source argument [-Werror=stringop-overflow=]
std::strncpy(truncatedString, fullString, copyLength);
~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/nico/armnn/src/armnn/LayerSupport.cpp:29:55: note: length computed here
size_t copyLength = std::min(maxLength, strlen(fullString));
~~~~~~^~~~~~~~~~~~
In function ‘void armnn::{anonymous}::CopyErrorMessage(char*, const char*, size_t)’,
inlined from ‘bool armnn::IsSpaceToBatchNdSupported(const armnn::BackendId&, const armnn::TensorInfo&, const armnn::TensorInfo&, const armnn::SpaceToBatchNdDescriptor&, char*, size_t)’ at /home/nico/armnn/src/armnn/LayerSupport.cpp:342:5:
/home/nico/armnn/src/armnn/LayerSupport.cpp:30:21: error: ‘char* strncpy(char*, const char*, size_t)’ specified bound depends on the length of the source argument [-Werror=stringop-overflow=]
std::strncpy(truncatedString, fullString, copyLength);
~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/nico/armnn/src/armnn/LayerSupport.cpp: In function ‘bool armnn::IsSpaceToBatchNdSupported(const armnn::BackendId&, const armnn::TensorInfo&, const armnn::TensorInfo&, const armnn::SpaceToBatchNdDescriptor&, char*, size_t)’:
/home/nico/armnn/src/armnn/LayerSupport.cpp:29:55: note: length computed here
size_t copyLength = std::min(maxLength, strlen(fullString));
~~~~~~^~~~~~~~~~~~
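A proper fix for this one would be to bound the strncpy() call by the destination capacity rather than by strlen() of the source, which is exactly what -Wstringop-overflow objects to. Here is a stand-alone sketch of the idea; this is a reconstruction, not the actual Arm NN CopyErrorMessage, and it assumes maxLength is the full destination buffer size:

```cpp
#include <cstddef>
#include <cstring>

// Truncating string copy whose strncpy bound depends only on the
// destination capacity, so gcc 8 cannot complain that the bound is
// derived from strlen() of the source.
void CopyErrorMessage(char* truncatedString, const char* fullString,
                      std::size_t maxLength)
{
    if (truncatedString == nullptr || fullString == nullptr || maxLength == 0)
    {
        return;
    }
    // Copy at most maxLength - 1 characters and always null-terminate.
    std::strncpy(truncatedString, fullString, maxLength - 1);
    truncatedString[maxLength - 1] = '\0';
}
```

With a 4-byte buffer and the input "hello", this leaves "hel" in the buffer instead of overflowing or leaving it unterminated.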
The build progresses a bit further with -Wno-stringop-overflow.
However, it then fails with this:
/home/nico/armnn/src/armnn/LayerSupport.cpp: In function ‘bool armnn::IsActivationSupported(const armnn::BackendId&, const armnn::TensorInfo&, const armnn::TensorInfo&, const armnn::ActivationDescriptor&, char*, size_t)’:
/home/nico/armnn/src/armnn/LayerSupport.cpp:60:39: error: catching polymorphic type ‘class armnn::InvalidArgumentException’ by value [-Werror=catch-value=]
} catch (InvalidArgumentException e) { \
^
/home/nico/armnn/src/armnn/LayerSupport.cpp:78:5: note: in expansion of macro ‘FORWARD_LAYER_SUPPORT_FUNC’
FORWARD_LAYER_SUPPORT_FUNC(backend, IsActivationSupported, input, output, descriptor);
^~~~~~~~~~~~~~~~~~~~~~~~~~
/home/nico/armnn/src/armnn/LayerSupport.cpp: In function ‘bool armnn::IsAdditionSupported(const armnn::BackendId&, const armnn::TensorInfo&, const armnn::TensorInfo&, const armnn::TensorInfo&, char*, size_t)’:
/home/nico/armnn/src/armnn/LayerSupport.cpp:60:39: error: catching polymorphic type ‘class armnn::InvalidArgumentException’ by value [-Werror=catch-value=]
} catch (InvalidArgumentException e) { \
^
/home/nico/armnn/src/armnn/LayerSupport.cpp:93:5: note: in expansion of macro ‘FORWARD_LAYER_SUPPORT_FUNC’
FORWARD_LAYER_SUPPORT_FUNC(backend, IsAdditionSupported, input0, input1, output);
^~~~~~~~~~~~~~~~~~~~~~~~~~
[...]
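These catch-value errors at least look mechanical to fix: the FORWARD_LAYER_SUPPORT_FUNC macro should catch the exception by const reference rather than by value, which avoids slicing the polymorphic type and silences -Wcatch-value. A stand-alone illustration of the pattern (InvalidArgumentError here is a made-up exception type standing in for armnn::InvalidArgumentException):

```cpp
#include <stdexcept>
#include <string>

// Hypothetical stand-in for armnn::InvalidArgumentException.
struct InvalidArgumentError : std::runtime_error
{
    using std::runtime_error::runtime_error;
};

std::string CheckSupported(bool throwIt)
{
    try
    {
        if (throwIt)
        {
            throw InvalidArgumentError("unsupported layer");
        }
        return "ok";
    }
    // Catch by const reference: no object slicing, no -Wcatch-value.
    catch (const InvalidArgumentError& e)
    {
        return e.what();
    }
}
```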
My C++-fu is not yet up to snuff to make sense of this, so I gave up and
moved the whole thing to a build environment with gcc version 6.3.0
instead, where the build completed successfully. It would be a good idea
if someone could address the above errors properly.
Now, on to the binary size. I configured out all parsers and used
the smallest ACL config (no Neon, etc.) to keep things simple. I got:
$ ls -l libarmnn.so
-rwxr-xr-x 1 nico nico 2816920 Dec 14 13:53 libarmnn.so
$ size libarmnn.so
text data bss dec hex filename
2080167 69088 2436 2151691 20d50b libarmnn.so
Finding out how those 2080167 bytes of text (which also include
rodata) are distributed should be interesting.
After some scripting, I got the following list of symbols sorted by
their size:
Type Size Symbol
T 20288 armnn::IWorkloadFactory::IsLayerSupported(armnn::BackendId const&, armnn::IConnectableLayer const&, armnn::Optional<armnn::DataType>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&)
T 16288 _init
d 13840 typeinfo for boost::system::(anonymous namespace)::system_error_category
T 11568 armnn::Profiler::Print(std::ostream&) const
T 7784 armnn::RefLstmFloat32Workload::Execute() const
T 6056 armnn::Optimize(armnn::INetwork const&, std::vector<armnn::BackendId, std::allocator<armnn::BackendId> > const&, armnn::IDeviceSpec const&, armnn::OptimizerOptions const&, armnn::Optional<std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&>)
T 5344 armnn::StringifyLayerParameters<armnn::Pooling2dDescriptor>::Serialize(std::function<void (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)>&, armnn::Pooling2dDescriptor const&)
T 5224 armnn::Graph::Print() const
T 5112 boost::thread::physical_concurrency()
T 4624 armnn::Graph::AddCopyLayers()
T 4528 armnn::LoadedNetwork::LoadedNetwork(std::unique_ptr<armnn::OptimizedNetwork, std::default_delete<armnn::OptimizedNetwork> >)
T 4520 armnn::Layer::VerifyLayerConnections(unsigned int, armnn::CheckLocation const&) const
T 4472 armnn::Runtime::UnloadNetwork(int)
T 4128 boost::log::v2s_mt_posix::attribute_name::get_id_from_string(char const*)
t 4096 e843419@002d_000018a1_5824
t 4092 e843419@007c_00003070_c
t 4092 e843419@0041_00002011_1ed0
T 4024 armnn::SubGraphSelector::SelectSubGraphs(armnn::Graph&, std::function<bool (armnn::Layer const&)> const&)
T 3864 armnn::RefBatchNormalizationUint8Workload::Execute() const
T 3776 armnn::RefConvolution2dUint8Workload::Execute() const
[...]
This shows a long list of symbols whose size follows a pretty regular
curve towards zero. In other words, there is no obvious outlier. The
first few symbols could be investigated for their largish size, but that
wouldn't make a significant dent in the total size.
However, there are 1688 symbols with a non-zero size. That corresponds
to an average of 1274 bytes per symbol, which is not unreasonable. It's
the sheer number of them that is overwhelming. Without the ability to
parse a model at compile time, which would allow static linking of
only the necessary ops, there is hardly any way to scale this down
easily.
Quick observation: boost-related symbols alone account for 190416 bytes.
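For anyone wanting to reproduce these numbers, something along the following lines would do it. This is a rough sketch rather than the actual scripting I used, and it assumes input shaped like `nm --print-size -C libarmnn.so` output (address, size, type, demangled name per line):

```cpp
#include <algorithm>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Parse "nm --print-size -C" style lines ("<address> <size> <type> <name>")
// and return the symbols sorted by decreasing size.
struct Symbol
{
    std::string type;
    unsigned long size;
    std::string name;
};

std::vector<Symbol> SymbolsBySize(std::istream& in)
{
    std::vector<Symbol> symbols;
    std::string line;
    while (std::getline(in, line))
    {
        std::istringstream fields(line);
        std::string address, size, type, name;
        if (!(fields >> address >> size >> type) || !std::getline(fields, name))
        {
            continue; // too few fields to be a sized symbol
        }
        // Symbols without a size field put the type where the size should
        // be; skip anything that is not a hex number.
        if (size.find_first_not_of("0123456789abcdefABCDEF") != std::string::npos)
        {
            continue;
        }
        // substr(1) drops the separator space left over by getline.
        symbols.push_back({type, std::stoul(size, nullptr, 16), name.substr(1)});
    }
    std::sort(symbols.begin(), symbols.end(),
              [](const Symbol& a, const Symbol& b) { return a.size > b.size; });
    return symbols;
}
```

The boost-related total then falls out of summing the entries whose name mentions boost.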
That's it for now. Once again, please feel free to comment.
Nicolas
Hi all,
I'd like to kick off discussion on this list with a topic that has come up in the ML working group meetings: dynamic loading of ArmNN binary backends.
What this is
Instead of compiling support for various hardware into libarmnn.so, backends can be compiled separately into their own objects and loaded into ArmNN later, without recompiling.
For example, libarmnn.so might be compiled with just the reference backend, and the NEON and OpenCL backends could be provided as libarmnn-neon.so and libarmnn-opencl.so. At runtime, there would be some way for ArmNN to discover which backend .so files were available, and load them in to the running process.
Why do it
1. Better user experience when downloading Android apps. Compiled apps can link to libarmnn.so and then choose which backends to bundle in variant optimised .apks for known devices. This is better than the current system, where the application author would either need to compile every known backend and link them all into the application, or recompile the application for each desired combination of backend support.
2. Binary backend distribution. Backends might be hard to compile (esoteric headers, vendor hardware support kits that are hard to work with, etc.). Having separate compilation for backends opens up the possibility that users could download ArmNN and just the backends they need without having to compile everything themselves.
3. Out-of-tree builds of backends. Backend developers can build armnn once, then iterate on the build of their backend without recompiling and redeploying the rest of ArmNN.
How
This is where the questions are, and I'd like to get some feel for what people think, and work together towards a good design. Here are some discussion points to start with.
Discovery mechanism:
Application provides the path to each backend to "armnn::LoadPlugin(const char* path)"?
Application provides a path to a directory and armnn tries to load all the .so files in that directory as plugins?
Config file?
All of the above?
Some other ways too?
Relationship to statically-compiled plugins:
Should this mechanism completely replace statically compiled plugins? How would that affect existing OEM deployments?
Is it sensible to make both techniques available?
Android:
Does dlopen() work in Android NDK apps?
How does LD_LIBRARY_PATH work for apps distributed in .apks?
Where should the backend libraries go in .apks, and what about libraries that the backends need (e.g. libFabulousHardwareSupport.so)?
If you have any ideas about any of these questions, please reply and keep the list on cc for all discussion. We'll try to get a bit of discussion on each point, then pull things together into a design proposal.
Many thanks,
Matthew