Hi Michal,
On 8/28/25 15:38, Michal Koutný wrote:
On Wed, Aug 27, 2025 at 12:27:08AM +0100, Djalal Harouni <tixxdz@gmail.com> wrote:
It solves the case perfectly: you detect something, you fail the security hook by returning -EPERM, and optionally freeze the cgroup and snapshot the runtime state.
So -EPERM is the right way to cut off such tasks.
Indeed. A process can retry x, y, z paths; at some point we just want to stop the process or container before taking the snapshot.
Oh, I thought the attached example was an obvious one: customers want to restrict bpf() usage per cgroup for a specific container/pod, so when we detect a bpf() call from a cgroup that is not allowed, we fail it and freeze the cgroup.
Take this and build on top of it: detect bash/shell exec or any other newly dropped binaries, then fail and freeze the exec early at the linux_binprm object checks.
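To make the first case concrete, here is a minimal, untested sketch: a sleepable BPF LSM program on the bpf() security hook. The allowed_cgroups map and its policy are illustrative, and the freeze step is only a comment, since the helper to do it from BPF is exactly what this thread is discussing.

/* Illustrative sketch, not the actual patch. */
#include "vmlinux.h"
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 128);
	__type(key, __u64);	/* cgroup id */
	__type(value, __u8);	/* 1 == bpf() allowed */
} allowed_cgroups SEC(".maps");

/* Sleepable BPF LSM program attached to the bpf() security hook. */
SEC("lsm.s/bpf")
int BPF_PROG(restrict_bpf, int cmd, union bpf_attr *attr, unsigned int size)
{
	__u64 cgid = bpf_get_current_cgroup_id();

	if (bpf_map_lookup_elem(&allowed_cgroups, &cgid))
		return 0;

	/* Deny; this is where the offending cgroup would be frozen
	 * and the runtime state snapshotted. */
	return -EPERM;
}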
Or if you want to do some follow-up analysis, the process can be killed and coredump'd; at least seccomp allows this, and it'd be good to have such a possibility with LSMs if there isn't one (I'm not that familiar). Freezing the cgroups sounds like a way to DoS the system (not only because of hanging the faulty process itself but possibly spreading via IPC dependencies to unrelated processes).
Well, misusing things is always possible, but nothing here is new.
Pausing a container, freezing a cgroup, or CRIU have been around for years; no new features here ;-)
Also, why couldn't all these tools execute the cgroup actions themselves through the traditional userspace API?
- Freezing at BPF is obviously better: less racy, since you don't need access to the corresponding cgroupfs and namespace. Not all tools run as a supervisor/container manager.
Less racy or more racy -- I know the race window size may vary, but strictly speaking, there either is a race or there isn't (depending on whether there is proper synchronization). (And when intentionally misbehaving processes are considered, even a tiny window is a potential risk.)
- bpf_send_signal() is in some cases not enough: what if you race with a task clone, for example? However, freezing the cgroup hierarchy, or the one above it, is a catch-all...
Yeah, this might be the part that I don't internalize well. If you're running the hook in a particular task's process context, that task cannot do a clone at the same time. If they are independent tasks, there's no ordering, so there's always a possibility of a race (so why not embrace it, do whatever is possible from userspace by monitoring the audit log or similar, and respond based on that).
The complexity of doing that from userspace, for an eBPF security tool that is not running as a supervisor in other namespaces, is high. Basically the race window is reduced: we trigger in the context of a task in the cgroup that we want to freeze and set the freeze jobctl bit of the other tasks.
The extra offending syscalls by that cgroup's tasks are reduced.
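To illustrate the bpf_send_signal() point above, a hypothetical signal-based response kills only the task that triggered the hook; a sibling that clone()s in parallel escapes it, which is why the cgroup-wide freeze is the catch-all. Untested sketch, hook choice illustrative:

#include "vmlinux.h"
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("lsm.s/bpf")
int BPF_PROG(kill_offender, int cmd, union bpf_attr *attr, unsigned int size)
{
	/* SIGKILL reaches only the current task; a task being cloned by
	 * a sibling in the same cgroup races past this point and can
	 * only be caught by freezing the whole cgroup. */
	bpf_send_signal(9 /* SIGKILL */);
	return -EPERM;
}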
The feature is supposed to be used by sleepable BPF programs; I don't think we need extra checks here?
Good.
It could be that this BPF code runs in a process that is under pod-x/container-y/cgroup-z/, and maybe you want to freeze "cgroup-z" or "container-y" and so on... or, in the case of delegated hierarchies, freezing the parent is a catch-all.
OK, this would be good. Could it also be pod-x/container-y/cgroup-z2?
Yes, basically the cgroup of the task, or any ancestor of it, by fetching its cgroup.
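A sketch of that lookup from the task's context, assuming the pod-x/container-y/cgroup-z layout above (the root is level 0, so the ancestor levels below are illustrative):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("lsm.s/bpf")
int BPF_PROG(resolve_cgroups, int cmd, union bpf_attr *attr, unsigned int size)
{
	__u64 self      = bpf_get_current_cgroup_id();            /* cgroup-z    */
	__u64 container = bpf_get_current_ancestor_cgroup_id(2);  /* container-y */
	__u64 pod       = bpf_get_current_ancestor_cgroup_id(1);  /* pod-x       */

	/* A freeze helper would take one of these ids, or a cgroup
	 * fetched from them, as its target. */
	bpf_printk("self=%llu container=%llu pod=%llu", self, container, pod);
	return 0;
}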
BPF is already the core interface of the cgroup v2 device controller.
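For reference, the v2 device controller has no file interface at all; it is a BPF_PROG_TYPE_CGROUP_DEVICE program attached to the cgroup, e.g. (illustrative policy that only allows /dev/null):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

/* Return 1 to allow the device access, 0 to deny it. */
SEC("cgroup/dev")
int device_policy(struct bpf_cgroup_dev_ctx *ctx)
{
	/* Allow only the char device 1:3 (/dev/null); deny the rest.
	 * The device type sits in the lower 16 bits of access_type. */
	if ((ctx->access_type & 0xffff) == BPF_DEVCG_DEV_CHAR &&
	    ctx->major == 1 && ctx->minor == 3)
		return 1;
	return 0;
}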
Thanks!
I acknowledge that sooner or later some kind of access to cgroup through BPF will be added, I'd prefer if it was done in a generic way (so that it doesn't become cgroup's problem but someone else's e.g. VFS's or kernfs's ;-)). I can even imagine some usefulness of helpers for selected specific cgroup (core) operations (which is the direction brought up in the other discussion), I just don't think it solves the problem as you present it.
HTH,
Michal