Hello folks,
I'm pleased to announce the release of AUTOSEL, a complete rewrite of the stable kernel patch selection tool that Julia Lawall and I presented back in 2018[1]. Unlike the previous version that relied on word statistics and older neural network techniques, AUTOSEL leverages modern large language models and embedding technology to provide significantly more accurate recommendations.
## What is AUTOSEL?
AUTOSEL automatically analyzes Linux kernel commits to determine whether they should be backported to stable kernel trees. It examines commit messages, code changes, and historical backporting patterns to make intelligent recommendations.
This is a complete rewrite of the original tool[1], with several major improvements:
1. Uses large language models (Claude, OpenAI, NVIDIA models) for semantic understanding
2. Implements embeddings-based similar commit retrieval for better context
3. Provides detailed explanations for each recommendation
4. Supports batch processing for efficient analysis of multiple commits
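The embeddings-based retrieval in item 2 boils down to ranking historical commits by vector similarity to the commit under analysis. Here is a minimal sketch of that idea in Rust; the function names and the raw `f32` vectors are illustrative assumptions, not AUTOSEL's actual API (the real tool computes embeddings with Candle).

```rust
// Hypothetical sketch: rank historical commit embeddings by cosine
// similarity to a new commit's embedding, keeping the top-k matches
// to use as context for the LLM prompt.

fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Return indices of the `k` historical commits most similar to `query`.
fn top_k_similar(query: &[f32], history: &[Vec<f32>], k: usize) -> Vec<usize> {
    let mut scored: Vec<(usize, f32)> = history
        .iter()
        .enumerate()
        .map(|(i, emb)| (i, cosine_similarity(query, emb)))
        .collect();
    // Sort descending by similarity; NaN-free embeddings assumed.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(k).map(|(i, _)| i).collect()
}

fn main() {
    let history = vec![
        vec![1.0, 0.0, 0.0], // commit 0
        vec![0.0, 1.0, 0.0], // commit 1
        vec![0.9, 0.1, 0.0], // commit 2: near commit 0's direction
    ];
    let query = vec![1.0, 0.0, 0.0];
    let top = top_k_similar(&query, &history, 2);
    println!("{:?}", top); // commit 0 first, then commit 2
}
```

The retrieved commits (and whether they were backported) then serve as few-shot context for the model, which is what gives the recommendations their grounding in historical backporting patterns.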
## Key Features
- Support for multiple LLM providers (Claude, OpenAI, NVIDIA)
- Self-contained embeddings using Candle
- Optional CUDA acceleration for faster analysis
- Detailed explanations of backporting decisions
- Extensive test coverage and validation
## Getting Started
```
git clone https://git.sr.ht/~sashal/autosel
cd autosel
cargo build --release
```
To analyze a specific commit:

```
./target/release/autosel --kernel-repo ~/linux --models claude --commit <SHA>
```
For more information, see the README.md file in the repository.
[1] https://lwn.net/Articles/764647/
On Mon, 5 May 2025 14:11:20 -0400 Sasha Levin wrote:
> - Detailed explanations of backporting decisions

Are those available publicly or just to the person running the tool? I was scratching my head quite a bit on the latest batch.

> - Extensive test coverage and validation

Would be great to hear more. My very subjective feeling is that the last batch of AUTOSEL is much worse than the previous. Easily 50% of false positives.
On Tue, May 06, 2025 at 07:21:59AM -0700, Jakub Kicinski wrote:
> On Mon, 5 May 2025 14:11:20 -0400 Sasha Levin wrote:
>> - Detailed explanations of backporting decisions
>
> Are those available publicly or just to the person running the tool? I was scratching my head quite a bit on the latest batch.

Yup, it presents it to the person running the tool. In theory you can always go back and re-run whatever commit you'd like with the same query and get a very similar explanation, so I didn't consider storing the results.

>> - Extensive test coverage and validation
>
> Would be great to hear more. My very subjective feeling is that the last batch of AUTOSEL is much worse than the previous. Easily 50% of false positives.

"last batch" as in the big one I've sent out on Monday, or the small one I sent on Tuesday?
On Wed, 7 May 2025 11:06:55 -0400 Sasha Levin wrote:
> On Tue, May 06, 2025 at 07:21:59AM -0700, Jakub Kicinski wrote:
>> On Mon, 5 May 2025 14:11:20 -0400 Sasha Levin wrote:
>>> - Detailed explanations of backporting decisions
>>
>> Are those available publicly or just to the person running the tool? I was scratching my head quite a bit on the latest batch.
>
> Yup, it presents it to the person running the tool. In theory you can always go back and re-run whatever commit you'd like with the same query and get a very similar explanation, so I didn't consider storing the results.

Injecting the explanation under the --- separator in the AUTOSEL email would be ideal, but not sure how hard that is to arrange.

>>> - Extensive test coverage and validation
>>
>> Would be great to hear more. My very subjective feeling is that the last batch of AUTOSEL is much worse than the previous. Easily 50% of false positives.
>
> "last batch" as in the big one I've sent out on Monday, or the small one I sent on Tuesday?

The big one on Monday.