Hi Thiago,
On Tue, Nov 04, 2025 at 03:50:37PM -0800, Thiago Macieira wrote:
Strictly speaking, if you don't have getentropy(), the fallback will be compiled in, in case someone runs the application in a messed-up environment with /dev improperly populated. In practice, that never happens, and getentropy() appeared in glibc 2.25, which is now older than the oldest distro we still support.
Great, so I suppose you can entirely remove /dev/[u]random support then.
But I don't want to deal with bug reports for the other operating systems Qt still supports (QNX, VxWorks, INTEGRITY) for which I have no SDK and for which even finding man pages is difficult. I don't want to spend time on them, including that of checking if they always have /dev/urandom. There are people being paid to worry about those. They can deal with them.
Ahhh. It'd be nice to gate this stuff off carefully, and maybe use a real hash function too.
Indeed. Linux is *impressively* fast in transitioning to kernel mode and back. Your numbers above show getrandom() taking about 214 ns, which is about on par with what I'd expect for a system call that does some non-trivial work. Other OSes may be an order of magnitude slower, placing them on the other side of RDRAND (616 ns).
Then I have to ask myself if I care. I've been in the situation before where I just say, "Linux can do it (state of the art), so complain to your OS vendor that yours can't". Especially as it also simplifies my codebase.
Well, if you want performance consistency, use arc4random() on the BSDs, and you'll avoid syscalls. Same for RtlGenRandom on Windows. These will all have performance similar to vDSO getrandom() on Linux, because they live in userspace. Or use the getentropy() syscall on the BSDs and trust that it's still probably faster than RDRAND, and certainly faster than RDSEED.
QRandomGenerator *can* be used as a deterministic generator, but that's neither global() nor system(). Even though global() uses a DPRNG, it's always seeded from system(), so the user can never control the initial seed and thus should never rely on a particular random sequence.
The question remaining is whether we should use the system call for global() or if we should retain the DPRNG. This is not about performance any more, but about the system-wide impact that could happen if someone decided to fill in a couple of GB of random data. From your data, that would only take a couple of seconds to achieve.
Oh yeah, good question. Well, with every major OS now having a mechanism to skip syscalls for random numbers, I guess you could indeed just alias global() to system() and call it a day. Then users really cannot shoot themselves in the foot. That would be simpler too. Seems like the best option.
From everything I could read at the time, the MT was cryptographically secure so long as it had been securely seeded in the first place. I have a vague memory of reading a paper somewhere that the MT state can be predicted after a few entries, but given that it has a 624*32 = 19968 bit internal state I find that hard to believe.
I suppose it's linear in F2.
1) New microcode
2) Fix all source code to either use the 64-bit variant of RDSEED, or check the result for 0 and treat it like RDSEED with CF=0 (fail), or make it check the CPUID bit.
Or 3) recompile the code with the runtime detection enabled.
For what it's worth, Qt always uses the 64-bit variant, so it would have worked just fine.
4) Fix Qt to use getrandom().
Jason