Hi Thiago,
On Tue, Nov 4, 2025 at 7:08 PM Thiago Macieira thiago.macieira@intel.com wrote:
Another strange thing, though, is the way this is actually used. As far as I can tell from reading this messy source, QRandomGenerator::SystemGenerator::generate() tries in order:
- rdseed
- rdrand
- getentropy (getrandom)
- /dev/urandom
- /dev/random
- Something ridiculous using mt19937
#1 and #2 are a runtime decision. If they fail due to lack of entropy or are unavailable, we will use getentropy().
I didn't see that SkipHWRNG thing being set anywhere. That looked like it was internal/testing only. So #1 and #2 will always be tried first. At least I think so, but it's a bit hard to follow.
I think ranking rdrand & rdseed as 1 and 2 above the rest is a senseless decision. I'll elaborate below.
#3 is mutually exclusive with #4 and #5. We enable getentropy() if your glibc has it at compile time, or we use /dev/urandom if it doesn't. In the former case, there's a marker in the ELF header indicating we can't run on a kernel without getrandom().
That's good. You can always call getrandom via the syscall if libc doesn't have it, but probably that doesn't matter for you, and what you're doing is sufficient.
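For reference, the raw-syscall route is short. A minimal sketch, assuming Linux (SYS_getrandom has existed since kernel 3.17); fill_from_kernel is my name for it, not anything in Qt:

```c
/* Sketch: obtaining kernel entropy even when libc lacks a getrandom()
   wrapper, by issuing the syscall directly. Linux-only assumption. */
#define _GNU_SOURCE
#include <errno.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Fill buf with len random bytes straight from the kernel,
   bypassing any libc getrandom()/getentropy() wrapper. */
static int fill_from_kernel(void *buf, size_t len)
{
    unsigned char *p = buf;
    while (len > 0) {
        long n = syscall(SYS_getrandom, p, len, 0);
        if (n < 0) {
            if (errno == EINTR)
                continue;   /* interrupted by a signal; retry */
            return -1;      /* e.g. ENOSYS on pre-3.17 kernels */
        }
        p += n;
        len -= (size_t)n;
    }
    return 0;
}
```

But as you say, if the glibc wrapper is there at compile time, this is moot.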
#6 will never be used on Linux. That monstrosity is actually compiled out of existence on Linux, BSDs, and Windows (in spite of mentioning Linux in the source). It's only there as a final fallback for systems I don't really care about and can't test anyway.
That's good. Though I see this code in the fallback:
// works on Linux -- all modern libc have getauxval
#  ifdef AT_RANDOM
Which makes me think it is happening for Linux in some cases? I don't know; this is hard to follow; you know best.
It'd probably be a good idea to just remove this code entirely and abort. If there's no cryptographic source of random numbers, and the user requests one, you can't just return garbage... Or if you're going to rely on AT_RANDOM, look at the (also awful) fallback code I wrote for systemd. But I dunno, just get rid of it...
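For what it's worth, AT_RANDOM only hands you 16 bytes, fixed once at exec time, so it's a seed at best, never a stream. A minimal sketch of reading it, assuming glibc's getauxval() (get_at_random is my name for the wrapper):

```c
/* Sketch: what AT_RANDOM actually provides. The kernel passes a pointer
   to 16 random bytes in the auxiliary vector at exec time; glibc exposes
   it via getauxval(). One-shot seed material, not a random stream. */
#include <string.h>
#include <sys/auxv.h>   /* getauxval(), glibc >= 2.16 */

/* Copy the kernel-provided AT_RANDOM bytes into out.
   Returns 0 on success, -1 if the aux vector entry is absent. */
static int get_at_random(unsigned char out[16])
{
    const unsigned char *at = (const unsigned char *)getauxval(AT_RANDOM);
    if (!at)
        return -1;   /* no entry: better to abort than fabricate randomness */
    memcpy(out, at, 16);
    return 0;
}
```

Note that the libc itself already consumes some of those bytes for stack canaries, which is another reason not to lean on it as a general-purpose source.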
What do you mean about RDSEED? Should it not be used? Note that QRandomGenerator is often used to seed a PRNG, so it seemed correct to me to use it.
When this was originally written, getrandom() wasn't generally available and the glibc wrapper even less so, meaning the code path usually went through the read() syscall. Using RDRAND seemed like a good idea to avoid the transition into kernel mode.
I still think so, even with getrandom(). Though, with the new vDSO support for userspace generation, that bears reevaluation.
RDRAND and RDSEED are slow! Benchmark filling a buffer or whatever, and you'll find that even with the syscall, getrandom() and /dev/urandom are still faster than RDRAND and RDSEED.
Here are timings on my tiger lake laptop to fill a gigabyte:
getrandom vdso:     1.520942015 seconds
getrandom syscall:  2.323843614 seconds
/dev/urandom:       2.629186218 seconds
rdrand:            79.510470674 seconds
rdseed:           242.396616879 seconds
And here are timings to make 25000000 calls for 4 bytes each -- in case you don't believe me about syscall transitions:
getrandom vdso:     0.371687883 seconds
getrandom syscall:  5.334084969 seconds
/dev/urandom:       5.820504847 seconds
rdrand:            15.399338418 seconds
rdseed:            45.145797233 seconds
Play around yourself. But what's certain is that getrandom() will always be *at least as secure* as rdrand/rdseed, by virtue of combining those with multiple other sources, and you won't find yourself in trouble vis-à-vis buggy CPUs or whatever. And it's faster too. There's just no reason to use RDRAND/RDSEED in user code like this, especially not in a library like Qt.
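If you want to reproduce the shape of those numbers yourself, here's a rough harness; it omits the RDRAND/RDSEED variants, since those need -mrdrnd/-mrdseed and CPU support, and the function names (bench_bulk, bench_percall) are mine:

```c
/* Sketch: bulk getrandom() fill vs. many small getrandom() calls.
   Absolute timings vary by machine; the shape (bulk much faster than
   per-call) is what matters. Linux/glibc >= 2.25 assumption. */
#define _GNU_SOURCE
#include <stddef.h>
#include <stdint.h>
#include <sys/random.h>   /* getrandom() */
#include <sys/types.h>
#include <time.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* One bulk fill of `total` bytes; returns seconds, or -1 on error. */
static double bench_bulk(unsigned char *buf, size_t total)
{
    double t0 = now_sec();
    size_t off = 0;
    while (off < total) {
        ssize_t n = getrandom(buf + off, total - off, 0);
        if (n < 0)
            return -1;
        off += (size_t)n;
    }
    return now_sec() - t0;
}

/* `calls` separate 4-byte requests; returns seconds, or -1 on error. */
static double bench_percall(size_t calls)
{
    double t0 = now_sec();
    uint32_t x;
    for (size_t i = 0; i < calls; i++)
        if (getrandom(&x, sizeof x, 0) != (ssize_t)sizeof x)
            return -1;
    return now_sec() - t0;
}
```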
There's also the issue of being cross-platform. Because my primary system is Linux, I prefer to have as little differentiation from it as I can get away with, so I can test what other users may see. However, I will not hesitate to write code that is fast only on Linux and let other OSes deal with their own shortcomings (q.v. qstorageinfo_linux.cpp, qnetworkinterface_linux.cpp, support for glibc-hwcaps). In this case, I'm not convinced there's benefit for Linux by bypassing the RDRND check and going straight to getentropy()/ getrandom().
The right thing to do is to call each OS's native RNG functions. On Linux, that's getrandom(), which you can access via getentropy(). On the BSDs that's getentropy(). On Windows, there's a variety of ways in, but I assume Qt with all its compatibility concerns is best off using RtlGenRandom. Don't try to be too clever and use CPU features; the kernel already takes care of abstracting that for you via its RNG. And especially don't be too clever by trying to roll your own RNG or importing some mt19937 madness.
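To make that concrete, a rough sketch of the per-OS dispatch I mean. secure_random() is my name for the wrapper; the platform calls (getrandom, getentropy, RtlGenRandom) are the real APIs, though on Windows the header/linking details (ntsecapi.h, advapi32) may need adjusting:

```c
/* Sketch: one entry point, each OS's native RNG underneath. */
#include <stddef.h>

#if defined(_WIN32)
#  include <windows.h>
#  include <ntsecapi.h>        /* RtlGenRandom, aka SystemFunction036 */
static int secure_random(void *buf, size_t len)
{
    return RtlGenRandom(buf, (ULONG)len) ? 0 : -1;
}
#elif defined(__linux__)
#  include <sys/random.h>      /* getrandom(), glibc >= 2.25 */
#  include <sys/types.h>
static int secure_random(void *buf, size_t len)
{
    unsigned char *p = buf;
    while (len > 0) {
        ssize_t n = getrandom(p, len, 0);
        if (n < 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}
#else                          /* the BSDs, macOS 10.12+ */
#  include <unistd.h>          /* getentropy(), max 256 bytes per call */
static int secure_random(void *buf, size_t len)
{
    unsigned char *p = buf;
    while (len > 0) {
        size_t chunk = len > 256 ? 256 : len;
        if (getentropy(p, chunk) != 0)
            return -1;
        p += chunk;
        len -= chunk;
    }
    return 0;
}
#endif
```

The 256-byte chunking on the getentropy() path is there because POSIX-style getentropy() rejects larger requests; the kernel handles everything else.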
The separation is because I was convinced, at the time of developing the code, that advocating that people use a system-wide resource like the RDRND or getrandom() entropy for everything was bad advice. So, instead, we create our own per-process PRNG, securely seed it from that shared resource, and then let people use it for their own weird needs. Like creating random strings.
And it's also expected that if you know you need something more than the baseline, you'll be well-versed enough in the lingo to understand the difference between global() and system().
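That pattern itself -- seed a per-process deterministic generator once from the shared kernel source, then let callers draw from it -- is fine in isolation. A minimal sketch; splitmix64 stands in for whatever engine Qt actually uses, and getrandom() is a Linux assumption:

```c
/* Sketch: per-process PRNG, securely seeded once from the kernel. */
#include <stdint.h>
#include <sys/random.h>   /* getrandom(), Linux/glibc assumption */
#include <sys/types.h>

/* splitmix64: a tiny deterministic PRNG -- NOT cryptographic. */
static uint64_t splitmix64(uint64_t *state)
{
    uint64_t z = (*state += 0x9e3779b97f4a7c15ULL);
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
    return z ^ (z >> 31);
}

/* Seed the per-process PRNG state once from the kernel entropy pool. */
static int seed_process_prng(uint64_t *state)
{
    return getrandom(state, sizeof *state, 0) == (ssize_t)sizeof *state
               ? 0 : -1;
}
```

The seeding is the only part that needs the system source; after that the stream is deterministic and insecure, which is exactly the distinction the documentation should spell out.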
I'd recommend that you fix the documentation, and change the function names for Qt7. You have a cryptographic RNG and a deterministic RNG. These both have different legitimate use cases, and should be separated cleanly as such.
For now, you can explain the global() one as giving insecure, deterministic random numbers, but ones always seeded properly in a way that is unique per process. And system() as giving cryptographically secure random numbers. Don't try to call anything involving std::mersenne_twister_engine "secure"; instead say "uniquely seeded" or something like that, and mention explicitly that it's insecure and deterministic.
Jason