Dear ACL Development Team,
I am currently running inference with an object detection model on ARM
devices using the Arm Compute Library (ACL). I have successfully
implemented inference for single images and obtain correct detections.
From the ACL documentation and examples I have reviewed, it appears the
library's graph examples only load input from files in formats such as
NPY, JPEG, and PPM. I would like to implement real-time inference by
feeding frames directly from a camera. Could you please let me know
whether there is a recommended approach or any existing functionality in
ACL to achieve this?
Additionally, I have been exploring the ACL source code and am very
interested in working further with it. Any guidance or resources you could
provide would be greatly appreciated.
Thank you for your time and support.
Best regards,
Darshan B Y
Hello,
The 24.11.1 release of Compute Library is out and comes with a
collection of improvements and new features.
Source code and prebuilt binaries are available at:
[1]https://github.com/ARM-software/ComputeLibrary/releases/tag/v24.11.1
Highlights of the release:
* Add stateless GEMM execution via ICPPKernel::run_op
* TensorShape class supports dynamic shapes
* Add skeletons for Dynamic GEMM operator
    * Convert double-rounding to single-rounding quantization behaviour
      in both the CPU and GPU backends
References
1. https://github.com/ARM-software/ComputeLibrary/releases/tag/v24.11.1