Dear Gunes and Pablo,
I noticed in the comments of issue #1084 (https://github.com/ARM-software/ComputeLibrary/issues/1084) that the introduction of sparse tensors was discussed a while back. Although the last comment in the thread states, “There is currently no planned work to implement this feature. I’ll discuss it again with the team,” the issue hasn’t been closed, which suggests there is still some interest in it. On Saturday I began thinking about how this feature could be integrated into the codebase, and I’ve drafted a design for these classes, which I’d like to share with you.
I understand that this feature may not be a priority for your team at the moment. However, if it doesn’t conflict with your roadmap, once issue #1169 (https://github.com/ARM-software/ComputeLibrary/issues/1169) is resolved, I’ll be able to start implementing it myself on an experimental basis.
Best regards, Matteo

Hi Matteo,
Thanks for your contribution last week and for sharing this design document with us. We'll have a look at it and get back to you with some feedback.
Sparse tensor support aligns well with the ACL roadmap, and we would be happy to review and take in your patch implementing this feature.
Hope this helps
Thank you Pablo, I look forward to hearing from you.
Matteo
Hi Matteo,
I had a quick look at your document; I liked it and I think it's a good starting point. One thing I see missing in the doc is the testing strategy for sparse tensors: we'll need to think carefully about what new tests need to be implemented for these new tensors.
Some comments about your doc:
1. Any new methods like is_sparse() should be added to ITensorInfo rather than ITensor.
2. We already have a data_layout() method which returns NHWC/NCHW; see https://github.com/ARM-software/ComputeLibrary/blob/main/arm_compute/core/IT... . For sparse tensors I'd use a different name for the new method describing the sparse format, something like sparse_layout()/format().
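Just to illustrate what I mean, a very rough sketch of how the ITensorInfo side could look; none of this is a committed interface, and SparseFormat is only a placeholder name:

    // Rough sketch only, not a committed interface. SparseFormat is a placeholder name.
    enum class SparseFormat
    {
        DENSE, // ordinary dense tensor (the default)
        COO,   // coordinate-list storage
        CSR    // compressed sparse row storage
    };

    // Possible additions to ITensorInfo (shown here in isolation):
    class ITensorInfo
    {
    public:
        // ... existing interface: data_type(), data_layout(), tensor_shape(), ...

        /** True if the tensor is stored in a sparse format. */
        virtual bool is_sparse() const = 0;

        /** The sparse storage format (DENSE for ordinary tensors).
         *  Kept separate from data_layout(), which keeps describing NCHW/NHWC. */
        virtual SparseFormat sparse_format() const = 0;
    };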
Hope this helps
Hi Matteo,
It would be great if we could implement this feature in ACL. We have not planned to implement sparse tensors ourselves because resources have been moved elsewhere, but we are happy to support you in implementing this and to provide guidance.
I reviewed your design draft and it looks good. We will have to implement this in two stages: 1) introduce sparse tensor support, and 2) add new kernels which take advantage of the sparse tensors.
Hope this helps
Hi Pablo,
We will have to implement this in two stages: 1) introduce sparse tensor support, and 2) add new kernels which take advantage of the sparse tensors.
Okay, I was thinking about this:
1. Implement the SparseTensor data structures.
2. Support basic (unary) operators like Transpose.
3. Implement kernel support.
The most time-consuming task is definitely point 3, so I would separate the two stages between points 2 and 3.
Best regards, Matteo
Dear Pablo, Dear Gunes,
This weekend I started implementing the data structures for sparse tensor support; I have attached the patch with my changes. For now, I have implemented the classes needed to create COOTensor instances. At the moment, COOTensor instances can only be constructed from tensors with NCHW layout (NHWC is not yet implemented). I put the conversion logic in COOTensor’s constructor, so, as presented in my design, you can convert a tensor by simply calling tensor->to_sparse(sparse_dim).

Before proceeding with the implementation of the other sparse tensor classes or the inverse function (to_dense), I have a question for you. Although I think it’s very convenient to be able to convert a tensor through a method exposed by the Tensor class itself (to_sparse), this is somewhat at odds with the design of all the other operators in the library, including unary ones such as Transpose. Is it worth leaving it as it is, or would it be better to turn it into a conventional operator (something like ReduceSparsity, or similar), with the canonical validate, configure, and run steps?
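To show the usage I have in mind, here is a minimal sketch based on my draft (COOTensor and to_sparse come from my patch and are not part of ACL yet; the rest is the existing API):

    #include "arm_compute/core/TensorInfo.h"
    #include "arm_compute/core/TensorShape.h"
    #include "arm_compute/core/Types.h"
    #include "arm_compute/runtime/Tensor.h"

    using namespace arm_compute;

    void to_sparse_example()
    {
        // Dense 8x8x3x1 F32 tensor in NCHW layout.
        Tensor dense;
        TensorInfo info(TensorShape(8U, 8U, 3U, 1U), 1, DataType::F32);
        info.set_data_layout(DataLayout::NCHW);
        dense.allocator()->init(info);
        dense.allocator()->allocate();
        // ... fill `dense`, mostly with zeros ...

        // Conversion through the tensor itself: the first `sparse_dim` dimensions
        // become COO indices, the remaining ones stay dense.
        // to_sparse()/COOTensor are from my patch, not existing ACL API.
        const size_t sparse_dim = 2;
        auto coo = dense.to_sparse(sparse_dim);
    }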
Best regards, Matteo
Hi Matteo,
Apologies for the delay in the reply.
Before proceeding with the implementation of the other sparse tensor classes or the inverse function (to_dense), I have a question for you. Although I think it’s very convenient to be able to convert a tensor through a method exposed by the Tensor class itself (to_sparse), this is somewhat at odds with the design of all the other operators in the library, including unary ones such as Transpose. Is it worth leaving it as it is, or would it be better to turn it into a conventional operator (something like ReduceSparsity, or similar), with the canonical validate, configure, and run steps?

That's interesting. If I understand correctly, you propose having a new operator, ReduceSparsity, rather than a method tensor->to_sparse(sparse_dim)? I think this is a good idea; it may be the best option, especially when we then try to add support for sparse tensors in other operators.
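Just so we are on the same page, a rough sketch of what that operator could look like, following the usual pattern of our NEON functions (the name and the signature below are only indicative):

    #include "arm_compute/core/Error.h"
    #include "arm_compute/core/ITensor.h"
    #include "arm_compute/core/ITensorInfo.h"
    #include "arm_compute/runtime/IFunction.h"

    namespace arm_compute
    {
    // Indicative only: a conventional operator that converts a dense tensor into
    // its sparse representation, with the canonical configure/validate/run steps.
    class NEReduceSparsity : public IFunction
    {
    public:
        /** Set the dense input and the sparse output.
         *  @param sparse_dim Number of leading dimensions that become sparse indices. */
        void configure(const ITensor *input, ITensor *output, size_t sparse_dim);

        /** Static check that the given configuration is supported. */
        static Status validate(const ITensorInfo *input, const ITensorInfo *output, size_t sparse_dim);

        /** Run the conversion (from IFunction). */
        void run() override;
    };
    } // namespace arm_compute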
A heads-up: we are about to move our dev branch to GitHub, which means we will start taking PRs there. Once we complete the move it would be good for you to create an initial PR so that we can start discussing this in more detail.
Thanks for contributing to ACL
Pablo
Hi Pablo,
That's interesting. If I understand correctly, you propose having a new operator, ReduceSparsity, rather than a method tensor->to_sparse(sparse_dim)? I think this is a good idea; it may be the best option, especially when we then try to add support for sparse tensors in other operators.
Yes. At the moment I have implemented the to_sparse method in both COOTensor and CSRTensor; it’s quite similar to PyTorch’s and very usable. That said, it seems to me that a new ReduceSparsity operator would be more consistent with the general approach of ACL. This point needs further evaluation, though, because in that case the destination tensor (i.e. the sparse one) cannot be allocated based solely on the information contained in TensorInfo.
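To spell out the problem (just an illustration; the helper below is made up):

    #include <cstddef>

    // The COO/CSR storage sizes depend on the number of non-zero elements, which
    // depends on the tensor *data*, not only on its shape and data type, so it
    // cannot be derived from TensorInfo alone at configure() time.
    std::size_t count_non_zeros_f32(const float *data, std::size_t num_elements)
    {
        std::size_t nnz = 0;
        for (std::size_t i = 0; i < num_elements; ++i)
        {
            if (data[i] != 0.0f)
            {
                ++nnz;
            }
        }
        return nnz;
    }

    // For COO with `sparse_dim` sparse dimensions:
    //   indices buffer: nnz * sparse_dim entries
    //   values buffer : nnz * (product of the remaining dense dimensions) elements
    // Both sizes are only known once the actual data has been inspected.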
A heads-up: we are about to move our dev branch to GitHub, which means we will start taking PRs there. Once we complete the move it would be good for you to create an initial PR so that we can start discussing this in more detail.
Great news! Once the move has actually happened, will you announce it on the mailing list? Currently I have a merge request open on Gerrit: are you going to move all the merge requests to GitHub PRs automatically?
I agree that it would be a good thing to create a PR and discuss the details there.
Best regards, Matteo