On Tuesday, 4 November 2025 13:56:11 Pacific Standard Time Jason A. Donenfeld wrote:
I didn't see that SkipHWRNG thing being set anywhere. That looked like it was internal/testing only. So #1 and #2 will always be tried first. At least I think so, but it's a bit hard to follow.
It is an internal thing, meant for unit testing only: we force the generator to produce specific values so we can test the functions that use it.
#3 is mutually exclusive with #4 and #5. We enable getentropy() if your glibc has it at compile time, and we use /dev/urandom if it doesn't. In the former case, there's a marker in the ELF header indicating that the binary can't run on a kernel without getrandom().
That's good. You can always call getrandom via the syscall if libc doesn't have it, but probably that doesn't matter for you, and what you're doing is sufficient.
I know, but I preferred to use getentropy() because it's the same API as OpenBSD and others. It's one fewer option to maintain, and one that's shared with other platforms.
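In rough terms, the getentropy() path boils down to something like this (a simplified sketch of the shape, not the actual Qt code; getentropy() delivers at most 256 bytes per call):

#include <unistd.h>
#include <cstdint>
#include <cstdlib>

// Simplified sketch: fill `count` 32-bit words using getentropy(),
// looping because each call can return at most 256 bytes.
static void fill_via_getentropy(uint32_t *buffer, size_t count)
{
    auto *ptr = reinterpret_cast<unsigned char *>(buffer);
    size_t remaining = count * sizeof(uint32_t);
    while (remaining) {
        size_t chunk = remaining < 256 ? remaining : size_t(256);
        if (getentropy(ptr, chunk) != 0)
            abort();            // cannot fail under normal circumstances
        ptr += chunk;
        remaining -= chunk;
    }
}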
#6 will never be used on Linux. That monstrosity is actually compiled out of existence on Linux, BSDs, and Windows (in spite of mentioning Linux in the source). It's only there as a final fallback for systems I don't really care about and can't test anyway.
That's good. Though I see this code in the fallback:
// works on Linux -- all modern libc have getauxval
# ifdef AT_RANDOM

Which makes me think it is happening for Linux in some cases? I don't know; this is hard to follow; you know best.
I added that while developing the fallback. But if you scroll up, you'll see:
#elif QT_CONFIG(getentropy)
static void fallback_update_seed(unsigned) {}
static void fallback_fill(quint32 *, qsizetype) noexcept
{
    // no fallback necessary, getentropy cannot fail under normal circumstances
    Q_UNREACHABLE();
}
Strictly speaking, if you don't have getentropy(), the fallback will be compiled in, in case someone runs the application in a messed-up environment with /dev improperly populated. In practice, that never happens: getentropy() appeared in glibc 2.25, which is now older than the oldest distro we still support.
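For reference, the AT_RANDOM part of that fallback amounts to roughly the following (a sketch only; the real fallback_fill() mixes several sources together):

#include <sys/auxv.h>
#include <cstdint>
#include <cstring>

// Sketch: AT_RANDOM points to 16 bytes of kernel-supplied randomness placed
// in the aux vector at process startup (meant for stack canaries and the like).
static size_t seed_from_at_random(uint32_t *seed, size_t count)
{
#ifdef AT_RANDOM
    if (const void *p = reinterpret_cast<const void *>(getauxval(AT_RANDOM))) {
        size_t n = count * sizeof(uint32_t);
        if (n > 16)
            n = 16;             // only 16 bytes are available
        memcpy(seed, p, n);
        return n;
    }
#endif
    return 0;                   // nothing available, caller must try elsewhere
}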
It'd probably be a good idea to just remove this code entirely and abort. If there's no cryptographic source of random numbers, and the user requests it, you can't just return garbage... Or if you're going to rely on AT_RANDOM, look at the (also awful fallback) code I wrote for systemd. But I dunno, just get rid of it...
For Linux, I agree. Even for the BSDs. And effectively it is (see above).
But I don't want to deal with bug reports for the other operating systems Qt still supports (QNX, VxWorks, INTEGRITY) for which I have no SDK and for which even finding man pages is difficult. I don't want to spend time on them, including that of checking if they always have /dev/urandom. There are people being paid to worry about those. They can deal with them.
RDRAND and RDSEED are slow! Benchmark filling a buffer or whatever, and you'll find that even with the syscall, getrandom() and /dev/urandom are still faster than RDRAND and RDSEED.
Interesting!
I know they're slow. On Intel, I believe they make an uncore request so the SoC generates the random numbers from its entropy cache. But I didn't expect them to be *slower* than the system call.
Here are timings on my tiger lake laptop to fill a gigabyte:
getrandom vdso:    1.520942015 seconds
getrandom syscall: 2.323843614 seconds
/dev/urandom:      2.629186218 seconds
rdrand:           79.510470674 seconds
rdseed:          242.396616879 seconds
And here are timings to make 25000000 calls for 4 bytes each -- in case you don't believe me about syscall transitions:
getrandom vdso:    0.371687883 seconds
getrandom syscall: 5.334084969 seconds
/dev/urandom:      5.820504847 seconds
rdrand:           15.399338418 seconds
rdseed:           45.145797233 seconds
Thanks for providing the 4-byte numbers. We ask for a minimum of 16 bytes precisely to amortise syscall transitions, so our numbers will be better than your 5.3-5.8 seconds.
Play around yourself. But what's certain is that getrandom() will always be *at least as secure* as rdrand/rdseed, by virtue of combining those with multiple other sources, and you won't find yourself in trouble vis-à-vis buggy CPUs or whatever. And it's faster too. There's just no reason to use RDRAND/RDSEED in user code like this, especially not in a library like Qt.
I'm coming around to your point of view.
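Something along these lines is what I'd use to play with it locally (my own rough sketch, not Jason's harness; needs a CPU with RDRAND and GCC or Clang):

#include <sys/random.h>
#include <immintrin.h>
#include <chrono>
#include <cstdint>
#include <cstdio>

// Rough sketch: time filling ~1 GB via getrandom() versus RDRAND.
// Compile with -mrdrnd (or -march=native).
int main()
{
    static uint64_t buffer[1024 * 1024];        // 8 MB per pass
    constexpr int passes = 128;                 // ~1 GB in total

    auto time = [](auto &&fill) {
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < passes; ++i)
            fill();
        std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
        return elapsed.count();
    };

    double viaGetrandom = time([] {
        auto *p = reinterpret_cast<unsigned char *>(buffer);
        size_t remaining = sizeof(buffer);
        while (remaining) {
            ssize_t n = getrandom(p, remaining, 0);
            if (n <= 0)
                break;                          // shouldn't happen on Linux
            p += n;
            remaining -= size_t(n);
        }
    });

    double viaRdrand = time([] {
        for (uint64_t &v : buffer)
            while (!_rdrand64_step(reinterpret_cast<unsigned long long *>(&v)))
                ;                               // retry on the rare failure
    });

    std::printf("getrandom: %.3f s, rdrand: %.3f s\n", viaGetrandom, viaRdrand);
}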
The right thing to do is to call each OS's native RNG functions. On Linux, that's getrandom(), which you can access via getentropy(). On the BSDs that's getentropy(). On Windows, there's a variety of ways in, but I assume Qt with all its compatibility concerns is best off using RtlGenRandom. Don't try to be too clever and use CPU features; the kernel already takes care of abstracting that for you via its RNG. And especially don't be too clever by trying to roll your own RNG or importing some mt19937 madness.
Indeed. Linux is *impressively* fast in transitioning to kernel mode and back. Your numbers above show getrandom() taking about 214 ns per call (5.33 s / 25 000 000 calls), which is about on par with what I'd expect for a system call that does some non-trivial work. Other OSes may be an order of magnitude slower, placing them on the other side of RDRAND (616 ns per call).
Then I have to ask myself if I care. I've been in this situation before, where I just say, "Linux can do it (it's the state of the art), so complain to your OS vendor that yours can't". Especially as it also simplifies my codebase.
I'd recommend that you fix the documentation, and change the function names for Qt7. You have a cryptographic RNG and a deterministic RNG. These both have different legitimate use cases, and should be separated cleanly as such.
QRandomGenerator *can* be used as a deterministic generator, but that's neither global() nor system(). Even though global() uses a DPRNG, it's always seeded from system(), so the user can never control the initial seed and thus should never rely on a particular random sequence.
The question remaining is whether we should use the system call for global() or whether we should retain the DPRNG. This is not about performance any more, but about the system-wide impact if someone decided to fill in a couple of GB of random data. From your numbers, that would only take a couple of seconds.
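To make the distinction concrete, the intended usage looks roughly like this (a simple illustration, not lifted from the docs):

#include <QRandomGenerator>

void example()
{
    // Cryptographically secure, straight from the OS RNG:
    quint32 secureValue = QRandomGenerator::system()->generate();

    // global(): uniquely seeded per process (from system()), but a DPRNG
    // underneath -- fine for shuffling, jitter and other non-security uses:
    quint32 fastValue = QRandomGenerator::global()->bounded(100);

    // Explicitly seeded instance: the only mode where the user controls
    // the sequence and may rely on reproducibility:
    QRandomGenerator seeded(42);
    quint32 reproducible = seeded.generate();

    Q_UNUSED(secureValue);
    Q_UNUSED(fastValue);
    Q_UNUSED(reproducible);
}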
For now, you can explain the global() one as giving insecure deterministic random numbers, but is always seeded properly in a way that will always be unique per process. And system() as giving cryptographically secure random numbers. Don't try to call anything involving std::mersenne_twister_engine "secure"; instead say, "uniquely seeded" or something like that, and mention explicitly that it's insecure and deterministic.
From everything I could read at the time, the MT was cryptographically secure so long as it had been cryptographically-securely seeded in the first place. I have a vague memory of reading a paper somewhere that the MT state can be predicted after a few outputs, but given that it has a 624 * 32 = 19968-bit internal state I find that hard to believe.
Anyway, from the original point of the thread:
Christopher Snowhill wrote:
Anyway. A bug report was sent here:
https://lore.kernel.org/lkml/9a27f2e6-4f62-45a6-a527-c09983b8dce4@cachyos.org/
Qt is built with -march=znver4, which automatically enables -mrdseed. This builds the 64-bit variant of RDSEED, but the software is also performing kernel feature checks on startup. There is no separate feature flag for the 16/32/64-bit variants.
Borislav Petkov wrote:
And the problem here is that, AFAICT, Qt is not providing a proper fallback for !RDSEED. Dunno, maybe getrandom(2) or so. It is only a syscall which has been there since forever. Rather, it would simply throw hands in the air.
There is a fallback, but it was disabled for those builds. It was a choice, even if not a conscious choice.
From my point of view as QtCore maintainer, if you pass -mxxx to the compiler (like -msse3 or -mavx512vbmi2), you're telling the compiler and the library that they're free to generate code using that ISA extension without runtime checking and to expect it to work. If you want runtime detection, then don't pass it, or pass -mno-xxx after the -march (e.g. -march=znver4 -mno-rdseed). RDRAND and RDSEED are no different, nor are AESNI, VAES or SHANI, which the compiler currently never generates on its own. I'm not going to change my opinion on this, even if I remove the code that depended on the particular feature.
I can't change the past. Disabling the instruction now will either generate a SIGABRT on start or a SIGILL when it's used. It's no different than if we detected the VPSHUFBITQMB instruction is broken and decided to turn off AVX512BITALG. I concur with Thomas Gleixner's summary at https://lore.kernel.org/all/878qgnw0vt.ffs@tglx/:
1) New microcode
2) Fix all source code to either use the 64bit variant of RDSEED or check the result for 0 and treat it like RDSEED with CF=0 (fail) or make it check the CPUID bit....
Or 3) recompile the code with the runtime detection enabled.
It's a pity, because Qt always uses the 64-bit variant, so it would have worked just fine.
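For completeness, the runtime-detection route in item 3 amounts to roughly this (a generic GCC/Clang sketch, not Qt's actual feature-detection code):

#include <cpuid.h>
#include <immintrin.h>
#include <cstdint>

// Sketch: check the CPUID bit before issuing RDSEED, fall back otherwise.
static bool cpu_has_rdseed()
{
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return false;
    return ebx & (1u << 18);    // CPUID.(EAX=7,ECX=0):EBX bit 18 = RDSEED
}

static __attribute__((target("rdseed"))) bool try_rdseed64(uint64_t *out)
{
    if (!cpu_has_rdseed())
        return false;           // caller falls back to getrandom()/getentropy()
    unsigned long long value;
    for (int retries = 0; retries < 10; ++retries) {
        if (_rdseed64_step(&value)) {
            *out = value;
            return true;
        }
    }
    return false;               // entropy temporarily exhausted
}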