lists.linaro.org
Acl-dev
May 2024
acl-dev@lists.linaro.org
1 participant, 1 discussion
Compute Library v24.04 is out!
by Michael Kozlov
Hello,

The v24.04 release of Compute Library is out and comes with a collection of improvements and new features. Source code and prebuilt binaries are available at:

https://github.com/ARM-software/ComputeLibrary/releases/tag/v24.04

Highlights of the release:

* Add Bfloat16 data type support for NEMatMul <https://arm-software.github.io/ComputeLibrary/v24.04/classarm__compute_1_1_…>.
* Add support for SoftMax in SME2 for FP32 and FP16.
* Add support for in-place accumulation to CPU GEMM kernels.
* Add low-precision Int8 * Int8 -> FP32 CPU GEMM which dequantizes after multiplication.
* Add is_dynamic flag to QuantizationInfo <https://arm-software.github.io/ComputeLibrary/v24.04/classarm__compute_1_1_…> to signal to operators that it may change after configuration.
* Performance optimizations:
  * Optimize start-up time of NEConvolutionLayer <https://arm-software.github.io/ComputeLibrary/v24.04/classarm__compute_1_1_…> for some input configurations where GeMM is selected as the convolution algorithm.
  * Optimize NEConvolutionLayer <https://arm-software.github.io/ComputeLibrary/v24.04/classarm__compute_1_1_…> for input tensor size > 1e7 bytes and weight tensor height > 7.
  * Optimize NESoftmaxLayer <https://arm-software.github.io/ComputeLibrary/v24.04/namespacearm__compute.…> for axis != 0 by natively supporting higher axes up to axis 3.
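The "Int8 * Int8 -> FP32 CPU GEMM which dequantizes after multiplication" item can be illustrated with a minimal sketch. This is not Compute Library code: it is a plain-Python model of the quantization scheme, assuming simple per-tensor scales and zero-points (all names and values here are illustrative).

```python
# Conceptual sketch of a low-precision GEMM: multiply Int8 operands,
# accumulate in an integer (Int32-style) accumulator, and dequantize to
# FP32 only AFTER the multiply-accumulate. Scales/zero-points are
# hypothetical per-tensor quantization parameters, not ACL API values.

def int8_gemm_dequantize(lhs, rhs, lhs_scale, rhs_scale,
                         lhs_zero_point=0, rhs_zero_point=0):
    """lhs: M x K list of int8 values, rhs: K x N -> M x N list of floats."""
    M, K, N = len(lhs), len(lhs[0]), len(rhs[0])
    out = [[0.0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            acc = 0  # integer accumulator: no rounding error inside the loop
            for k in range(K):
                acc += (lhs[i][k] - lhs_zero_point) * (rhs[k][j] - rhs_zero_point)
            # dequantize once per output element, after the integer GEMM
            out[i][j] = acc * (lhs_scale * rhs_scale)
    return out

A = [[10, -3], [4, 7]]   # int8-range values
B = [[2, 1], [-1, 5]]
C = int8_gemm_dequantize(A, B, lhs_scale=0.1, rhs_scale=0.5)
```

Keeping the accumulation in integers and applying the combined scale `lhs_scale * rhs_scale` once per output element is what makes "dequantize after multiplication" cheaper than converting each Int8 operand to FP32 up front.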