Under unknown conditions, Zen5 chips running rdseed can produce (val=0, CF=1) over 10% of the time among nominally successful calls. CF=1 indicates success, while val=0 is typically only produced when rdseed fails (CF=0).
This suggests there is a bug which causes rdseed to silently fail.
This was reproduced reliably by launching 2 threads per available core: 1 thread per core hammering on RDSEED, and 1 thread per core collectively eating and hammering on ~90% of memory.
This was observed on more than 1 Zen5 model, so it should be disabled for all of Zen5 until/unless a comprehensive blacklist can be built.
Cc: stable@vger.kernel.org
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 arch/x86/kernel/cpu/amd.c | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 5398db4dedb4..1af30518d3e7 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -1037,6 +1037,10 @@ static void init_amd_zen4(struct cpuinfo_x86 *c)
 
 static void init_amd_zen5(struct cpuinfo_x86 *c)
 {
+	/* Disable RDSEED on AMD Turin because of an error. */
+	clear_cpu_cap(c, X86_FEATURE_RDSEED);
+	msr_clear_bit(MSR_AMD64_CPUID_FN_7, 18);
+	pr_emerg("RDSEED is not reliable on this platform; disabling.\n");
 }
 
 static void init_amd(struct cpuinfo_x86 *c)
On Fri, Oct 17, 2025 at 10:40:10PM -0400, Gregory Price wrote:
Under unknown conditions, Zen5 chips running rdseed can produce (val=0, CF=1) over 10% of the time among nominally successful calls. CF=1 indicates success, while val=0 is typically only produced when rdseed fails (CF=0).
This suggests there is a bug which causes rdseed to silently fail.
This was reproduced reliably by launching 2 threads per available core: 1 thread per core hammering on RDSEED, and 1 thread per core collectively eating and hammering on ~90% of memory.
Which version of RDSEED was used? 32-bit perhaps? Can you repro this with the 64-bit version of RDSEED?
This was observed on more than 1 Zen5 model, so it should be disabled for all of Zen5 until/unless a comprehensive blacklist can be built.
As I said the last time, we're working on it. Be patient pls.
Thx.
Hi Borislav,
On Sat, Oct 18, 2025 at 12:03:14PM +0200, Borislav Petkov wrote:
This was observed on more than 1 Zen5 model, so it should be disabled for all of Zen5 until/unless a comprehensive blacklist can be built.
As I said the last time, we're working on it. Be patient pls.
While your team is checking into this, I'd be most interested to know one way or the other whether this affects RDRAND too. Since RDRAND uses the same source as RDSEED for seeding its DRBG, I could imagine it triggering this bug too (in unlikely circumstances), and then generating random looking output that's actually based on a key that has some runs of zeros in it. We'd have a hard time figuring this out from looking at the output (or even triggering it deliberately), but it seems like something that should be knowable by the team doing root cause analysis of the RDSEED bug.
Thanks, Jason
On Sun, Oct 19, 2025 at 04:46:06PM +0200, Jason A. Donenfeld wrote:
While your team is checking into this, I'd be most interested to know one way or the other whether this affects RDRAND too.
No it doesn't, AFAIK. The only one affected is the 32-bit or 16-bit dest operand version of RDSEED. Again, AFAIK.
On Sun, Oct 19, 2025 at 05:00:27PM +0200, Borislav Petkov wrote:
On Sun, Oct 19, 2025 at 04:46:06PM +0200, Jason A. Donenfeld wrote:
While your team is checking into this, I'd be most interested to know one way or the other whether this affects RDRAND too.
No it doesn't, AFAIK. The only one affected is the 32-bit or 16-bit dest operand version of RDSEED. Again, AFAIK.
Oh good. So on 64-bit kernels, the impact to random.c is zilch.
Jason
On Sun 19 Oct 2025 05:03:25 PM, Jason A. Donenfeld wrote:
On Sun, Oct 19, 2025 at 05:00:27PM +0200, Borislav Petkov wrote:
On Sun, Oct 19, 2025 at 04:46:06PM +0200, Jason A. Donenfeld wrote:
While your team is checking into this, I'd be most interested to know one way or the other whether this affects RDRAND too.
No it doesn't, AFAIK. The only one affected is the 32-bit or 16-bit dest operand version of RDSEED. Again, AFAIK.
Oh good. So on 64-bit kernels, the impact to random.c is zilch.
Jason
Although apparently, the patch does break userspace for any distribution building packages with -march=znver4
- Christopher
On Mon, Nov 03, 2025 at 02:22:44AM -0800, Christopher Snowhill wrote:
Although apparently, the patch does break userspace for any distribution building packages with -march=znver4
Care to elaborate?
On Mon 03 Nov 2025 01:03:19 PM, Borislav Petkov wrote:
On Mon, Nov 03, 2025 at 02:22:44AM -0800, Christopher Snowhill wrote:
Although apparently, the patch does break userspace for any distribution building packages with -march=znver4
Care to elaborate?
Sorry for the HTML before, apparently I'm not supposed to try writing replies from my tablet, because it will interpret the quote indenting as formatting and forcibly send HTML mail.
Anyway. A bug report was sent here:
https://lore.kernel.org/lkml/9a27f2e6-4f62-45a6-a527-c09983b8dce4@cachyos.or...
Qt is built with -march=znver4, which automatically enables -mrdseed. This is building rdseed 64 bit, but then the software is also performing kernel feature checks on startup. There is no separate feature flag for 16/32/64 variants.
-- Regards/Gruss, Boris.
On Mon, Nov 03, 2025 at 03:55:53PM -0800, Christopher Snowhill wrote:
https://lore.kernel.org/lkml/9a27f2e6-4f62-45a6-a527-c09983b8dce4@cachyos.or...
tglx already summed up what the options are:
https://lore.kernel.org/all/878qgnw0vt.ffs@tglx
Qt is built with -march=znver4, which automatically enables -mrdseed. This is building rdseed 64 bit, but then the software is also performing kernel feature checks on startup. There is no separate feature flag for 16/32/64 variants.
No, there aren't.
And the problem here is that, AFAICT, Qt is not providing a proper fallback for !RDSEED. Dunno, maybe getrandom(2) or so. It is only a syscall which has been there since forever. Rather, it would simply throw hands in the air.
Soon there will be client microcode fixes too so all should be well.
On Tue, Nov 4, 2025 at 2:21 PM Borislav Petkov bp@alien8.de wrote:
And the problem here is that, AFAICT, Qt is not providing a proper fallback for !RDSEED. Dunno, maybe getrandom(2) or so. It is only a syscall which has been there since forever. Rather, it would simply throw hands in the air.
Qt seems to be kinda wild.
When you use -mcpu=, you generally can then omit cpuid checks. That's the whole idea. But then Qt checks cpuid anyway and compares it to the -mcpu= feature set and aborts early. This mismatch happens in the case Christopher is complaining about when the kernel has masked that out of the cpuid, due to bugs. But I guess if it just wouldn't check the cpuid, it would have worked anyway, modulo the actual cpu bug.

But regarding rdseed/rand bugs, there's a workaround for earlier AMD rdrand bugs: https://github.com/qt/qtbase/blob/dev/src/corelib/global/qsimd.cpp#L781

But then it skips that for -mcpu= with `(_compilerCpuFeatures & CpuFeatureRDRND)`. Weird.
Another strange thing, though, is the way this is actually used. As far as I can tell from reading this messy source, QRandomGenerator::SystemGenerator::generate() tries in order:
1. rdseed
2. rdrand
3. getentropy (getrandom)
4. /dev/urandom
5. /dev/random
6. Something ridiculous using mt19937
In addition to rdseed really not being appropriate here, in order to have seeds for option (6), no matter what happens with 1,2,3,4,5, it always stores the first 4 bytes of output from previous calls, just in case at some point it needs to use (6). Goodbye forward secrecy? And does this mt19937 stuff leak? And also, wtf?
This is totally strange. It should just be using getrandom() and falling back to /dev/urandom on old kernels where it's unavailable. Full stop. Actually, src/corelib/global/minimum-linux_p.h suggests 4.11 is required ("We require the following features in Qt (unconditional, no fallback)"), so it could replace basically this entire file with getentropy() for unix and rtlgenrandom for windows.
I dunno, maybe I read this code wrong -- https://github.com/qt/qtbase/blob/dev/src/corelib/global/qrandom.cpp -- you can look at it yourself. But this whole thing seems to be muddled and pretty bad.
So I'm slightly unsympathetic.
I'm CC'ing Thiago; he'll maybe have some sort of defense of all this weirdness. But really -- you're doing it wrong. Just use getrandom()!
Jason
The documentation really isn't helping things either.
https://doc.qt.io/qt-6/qrandomgenerator.html
From the intro: "QRandomGenerator::securelySeeded() can be used to create a QRandomGenerator that is securely seeded with QRandomGenerator::system(), meaning that the sequence of numbers it generates cannot be easily predicted. Additionally, QRandomGenerator::global() returns a global instance of QRandomGenerator that Qt will ensure to be securely seeded." And then later, reading about QRandomGenerator::global(), it starts by saying, "Returns a pointer to a shared QRandomGenerator that was seeded using securelySeeded()."
Sounds great, like we should just use QRandomGenerator::global() for everything, right? Wrong. It turns out QRandomGenerator::system() is the one that uses 1,2,3,4,5,(6godforbid) in my email above. QRandomGenerator::global(), on the contrary uses "std::mersenne_twister_engine<quint32,32,624,397,31,0x9908b0df,11,0xffffffff,7,0x9d2c5680,15,0xefc60000,18,1812433253>".
So then you keep reading the documentation and it mentions that ::system() is "to access the system's cryptographically-safe random generator." So okay maybe if you're really up with the lingo, you'll know to use that. But to your average reader, what's the difference between "securely seeded" and "system's cryptographically-safe random number generator"? And even to me, I was left wondering what exactly was securely seeded before I looked at the source. For example, OpenBSD's arc4random securely seeds a chacha20 instance in libc before proceeding. That's a lot different from std::mersenne_twister_engine!
I was looking for uses of ::system() on my laptop so that I could verify the behavior described in my last email dynamically, when I came across this from my favorite music player (author CC'd): https://github.com/strawberrymusicplayer/strawberry/blob/master/src/utilitie...
QString CryptographicRandomString(const int len) {
  const QString UseCharacters(u"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~"_s);
  return GetRandomString(len, UseCharacters);
}

QString GetRandomString(const int len, const QString &UseCharacters) {
  QString randstr;
  for (int i = 0; i < len; ++i) {
    const qint64 index = QRandomGenerator::global()->bounded(0, UseCharacters.length());
Using ::global() for something "cryptographic". I don't blame the author at all! The documentation is confusing as can be.
And this is all on top of the fact that ::system() is pretty mucky, as described in my last email.
Jason
On Tuesday, 4 November 2025 06:28:07 Pacific Standard Time Jason A. Donenfeld wrote:
On Tue, Nov 4, 2025 at 2:21 PM Borislav Petkov bp@alien8.de wrote:
And the problem here is that, AFAICT, Qt is not providing a proper fallback for !RDSEED. Dunno, maybe getrandom(2) or so. It is only a syscall which has been there since forever. Rather, it would simply throw hands in the air.
Qt seems to be kinda wild.
Hello Jason
When you use -mcpu=, you generally can then omit cpuid checks. That's the whole idea. But then Qt checks cpuid anyway and compares it to the -mcpu= feature set and aborts early. This mismatch happens in the case Christopher is complaining about when the kernel has masked that out of the cpuid, due to bugs. But I guess if it just wouldn't check the cpuid, it would have worked anyway, modulo the actual cpu bug.

But regarding rdseed/rand bugs, there's a workaround for earlier AMD rdrand bugs: https://github.com/qt/qtbase/blob/dev/src/corelib/global/qsimd.cpp#L781

But then it skips that for -mcpu= with `(_compilerCpuFeatures & CpuFeatureRDRND)`. Weird.
The general theory is that if you ask the compiler to enable a feature, it's because you know you're going to run on a CPU with that particular feature and therefore we can remove the check.
That includes RDRND: checkRdrndWorks() has:
// But if the RDRND feature was statically enabled by the compiler, we
// assume that the RNG works. That's because the calls to qRandomCpu() will
// be guarded by qCpuHasFeature(RDRND) and that will be a constant true.
if (_compilerCpuFeatures & CpuFeatureRDRND)
    return true;
The code you pointed out above is guarded by this particular piece of code:
if (features & CpuFeatureRDRND && !checkRdrndWorks(features))
    features &= ~(CpuFeatureRDRND | CpuFeatureRDSEED);
As you said, we do have some code to print the CPU feature mismatch on load, so as to avoid crashing with SIGILL. But it won't apply for the broken RDRND case, because the side effect of that code is we assume it isn't broken in the first place. That's because we're optimising for the case where it isn't broken, which I find reasonable.
Another strange thing, though, is the way this is actually used. As far as I can tell from reading this messy source, QRandomGenerator::SystemGenerator::generate() tries in order:
- rdseed
- rdrand
- getentropy (getrandom)
- /dev/urandom
- /dev/random
- Something ridiculous using mt19937
#1 and #2 are a runtime decision. If they fail due to lack of entropy or are unavailable, we will use getentropy().
#3 is mutually exclusive with #4 and #5. We enable getentropy() if your glibc has it at compile time, or we use /dev/urandom if it doesn't. There's a marker in the ELF header then indicating we can't run in a kernel without getrandom().
#6 will never be used on Linux. That monstrosity is actually compiled out of existence on Linux, BSDs, and Windows (in spite of mentioning Linux in the source). It's only there as a final fallback for systems I don't really care about and can't test anyway.
In addition to rdseed really not being appropriate here, in order to have seeds for option (6), no matter what happens with 1,2,3,4,5, it always stores the first 4 bytes of output from previous calls, just in case at some point it needs to use (6). Goodbye forward secrecy? And does this mt19937 stuff leak? And also, wtf?
See above.
What do you mean about RDSEED? Should it not be used? Note that QRandomGenerator is often used to seed a PRNG, so it seemed correct to me to use it.
This is totally strange. It should just be using getrandom() and falling back to /dev/urandom for old kernels unavailable. Full stop. Actually, src/corelib/global/minimum-linux_p.h suggests 4.11 is required ("We require the following features in Qt (unconditional, no fallback)"), so it could replace basically this entire file with getentropy() for unix and rtlgenrandom for windows.
When this was originally written, getrandom() wasn't generally available and the glibc wrapper even less so, meaning the code path usually went through the read() syscall. Using RDRAND seemed like a good idea to avoid the transition into kernel mode.
I still think so, even with getrandom(). Though, with the new vDSO support for userspace generation, that bears reevaluation.
There's also the issue of being cross-platform. Because my primary system is Linux, I prefer to have as little differentiation from it as I can get away with, so I can test what other users may see. However, I will not hesitate to write code that is fast only on Linux and let other OSes deal with their own shortcomings (q.v. qstorageinfo_linux.cpp, qnetworkinterface_linux.cpp, support for glibc-hwcaps). In this case, I'm not convinced there's benefit for Linux by bypassing the RDRND check and going straight to getentropy()/ getrandom().
Sounds great, like we should just use QRandomGenerator::global() for everything, right? Wrong. It turns out QRandomGenerator::system() is the one that uses 1,2,3,4,5,(6godforbid) in my email above. QRandomGenerator::global(), on the contrary uses "std::mersenne_twister_engine<quint32,32,624,397,31,0x9908b0df,11,0xffffffff ,7,0x9d2c5680,15,0xefc60000,18,1812433253>".
The separation is because I was convinced, at the time of developing the code, that advocating that people use a system-wide resource like the RDRND or getrandom() entropy for everything was bad advice. So, instead, we create our own per-process PRNG, securely seed it from that shared resource, and then let people use it for their own weird needs. Like creating random strings.
And it's also expected that if you know you need something more than baseline, you'll be well-versed in the lingo to understand the difference between global() and system().
Hi Thiago,
On Tue, Nov 4, 2025 at 7:08 PM Thiago Macieira thiago.macieira@intel.com wrote:
Another strange thing, though, is the way this is actually used. As far as I can tell from reading this messy source, QRandomGenerator::SystemGenerator::generate() tries in order:
- rdseed
- rdrand
- getentropy (getrandom)
- /dev/urandom
- /dev/random
- Something ridiculous using mt19937
#1 and #2 are a runtime decision. If they fail due to lack of entropy or are unavailable, we will use getentropy().
I didn't see that SkipHWRNG thing being set anywhere. That looked like it was internal/testing only. So #1 and #2 will always be tried first. At least I think so, but it's a bit hard to follow.
I think ranking rdrand & rdseed as 1 and 2 above the rest is a senseless decision. I'll elaborate below.
#3 is mutually exclusive with #4 and #5. We enable getentropy() if your glibc has it at compile time, or we use /dev/urandom if it doesn't. There's a marker in the ELF header then indicating we can't run in a kernel without getrandom().
That's good. You can always call getrandom via the syscall if libc doesn't have it, but probably that doesn't matter for you, and what you're doing is sufficient.
#6 will never be used on Linux. That monstrosity is actually compiled out of existence on Linux, BSDs, and Windows (in spite of mentioning Linux in the source). It's only there as a final fallback for systems I don't really care about and can't test anyway.
That's good. Though I see this code in the fallback:
// works on Linux -- all modern libc have getauxval
#  ifdef AT_RANDOM
Which makes me think it is happening for Linux in some cases? I don't know; this is hard to follow; you know best.
It'd probably be a good idea to just remove this code entirely and abort. If there's no cryptographic source of random numbers, and the user requests it, you can't just return garbage... Or if you're going to rely on AT_RANDOM, look at the (also awful fallback) code I wrote for systemd. But I dunno, just get rid of it...
What do you mean about RDSEED? Should it not be used? Note that QRandomGenerator is often used to seed a PRNG, so it seemed correct to me to use it.
When this was originally written, getrandom() wasn't generally available and the glibc wrapper even less so, meaning the code path usually went through the read() syscall. Using RDRAND seemed like a good idea to avoid the transition into kernel mode.
I still think so, even with getrandom(). Though, with the new vDSO support for userspace generation, that bears reevaluation.
RDRAND and RDSEED are slow! Benchmark filling a buffer or whatever, and you'll find that even with the syscall, getrandom() and /dev/urandom are still faster than RDRAND and RDSEED.
Here are timings on my tiger lake laptop to fill a gigabyte:
getrandom vdso:    1.520942015 seconds
getrandom syscall: 2.323843614 seconds
/dev/urandom:      2.629186218 seconds
rdrand:            79.510470674 seconds
rdseed:            242.396616879 seconds
And here are timings to make 25000000 calls for 4 bytes each -- in case you don't believe me about syscall transitions:
getrandom vdso:    0.371687883 seconds
getrandom syscall: 5.334084969 seconds
/dev/urandom:      5.820504847 seconds
rdrand:            15.399338418 seconds
rdseed:            45.145797233 seconds
Play around yourself. But what's certain is that getrandom() will always be *at least as secure* as rdrand/rdseed, by virtue of combining those with multiple other sources, and you won't find yourself in trouble viz buggy CPUs or whatever. And it's faster too. There's just no reason to use RDRAND/RDSEED in user code like this, especially not in a library like Qt.
There's also the issue of being cross-platform. Because my primary system is Linux, I prefer to have as little differentiation from it as I can get away with, so I can test what other users may see. However, I will not hesitate to write code that is fast only on Linux and let other OSes deal with their own shortcomings (q.v. qstorageinfo_linux.cpp, qnetworkinterface_linux.cpp, support for glibc-hwcaps). In this case, I'm not convinced there's benefit for Linux by bypassing the RDRND check and going straight to getentropy()/ getrandom().
The right thing to do is to call each OS's native RNG functions. On Linux, that's getrandom(), which you can access via getentropy(). On the BSDs that's getentropy(). On Windows, there's a variety of ways in, but I assume Qt with all its compatibility concerns is best off using RtlGenRandom. Don't try to be too clever and use CPU features; the kernel already takes care of abstracting that for you via its RNG. And especially don't be too clever by trying to roll your own RNG or importing some mt19937 madness.
The separation is because I was convinced, at the time of developing the code, that advocating that people use a system-wide resource like the RDRND or getrandom() entropy for everything was bad advice. So, instead, we create our own per-process PRNG, securely seed it from that shared resource, and then let people use it for their own weird needs. Like creating random strings.
And it's also expected that if you know you need something more than baseline, you'll be well-versed in the lingo to understand the difference between global() and system().
I'd recommend that you fix the documentation, and change the function names for Qt7. You have a cryptographic RNG and a deterministic RNG. These both have different legitimate use cases, and should be separated cleanly as such.
For now, you can explain the global() one as giving insecure deterministic random numbers, but is always seeded properly in a way that will always be unique per process. And system() as giving cryptographically secure random numbers. Don't try to call anything involving std::mersenne_twister_engine "secure"; instead say, "uniquely seeded" or something like that, and mention explicitly that it's insecure and deterministic.
Jason
On Tuesday, 4 November 2025 13:56:11 Pacific Standard Time Jason A. Donenfeld wrote:
I didn't see that SkipHWRNG thing being set anywhere. That looked like it was internal/testing only. So #1 and #2 will always be tried first. At least I think so, but it's a bit hard to follow.
It is an internal thing. It's meant for the unit testing only, where we force the generator to generate specific values, so we can test the functions that use it.
#3 is mutually exclusive with #4 and #5. We enable getentropy() if your glibc has it at compile time, or we use /dev/urandom if it doesn't. There's a marker in the ELF header then indicating we can't run in a kernel without getrandom().
That's good. You can always call getrandom via the syscall if libc doesn't have it, but probably that doesn't matter for you, and what you're doing is sufficient.
I know, but I preferred to use getentropy() because it's the same API as OpenBSD and others. It's one fewer option to maintain and shared with other platforms.
#6 will never be used on Linux. That monstrosity is actually compiled out of existence on Linux, BSDs, and Windows (in spite of mentioning Linux in the source). It's only there as a final fallback for systems I don't really care about and can't test anyway.
That's good. Though I see this code in the fallback:
// works on Linux -- all modern libc have getauxval
#  ifdef AT_RANDOM

Which makes me think it is happening for Linux in some cases? I don't know; this is hard to follow; you know best.
I added that while developing the fallback. But if you scroll up, you'll see:
#elif QT_CONFIG(getentropy)
static void fallback_update_seed(unsigned) {}
static void fallback_fill(quint32 *, qsizetype) noexcept
{
    // no fallback necessary, getentropy cannot fail under normal circumstances
    Q_UNREACHABLE();
}
Strictly speaking, if you don't have getentropy(), the fallback will be compiled in, in case someone runs the application in a messed up environment with /dev improperly populated. In practice, that never happens and getentropy() appeared in glibc 2.25, which is now older than the oldest distro we still support.
It'd probably be a good idea to just remove this code entirely and abort. If there's no cryptographic source of random numbers, and the user requests it, you can't just return garbage... Or if you're going to rely on AT_RANDOM, look at the (also awful fallback) code I wrote for systemd. But I dunno, just get rid of it...
For Linux, I agree. Even for the BSDs. And effectively it is (see above).
But I don't want to deal with bug reports for the other operating systems Qt still supports (QNX, VxWorks, INTEGRITY) for which I have no SDK and for which even finding man pages is difficult. I don't want to spend time on them, including that of checking if they always have /dev/urandom. There are people being paid to worry about those. They can deal with them.
RDRAND and RDSEED are slow! Benchmark filling a buffer or whatever, and you'll find that even with the syscall, getrandom() and /dev/urandom are still faster than RDRAND and RDSEED.
Interesting!
I know they're slow. On Intel, I believe they make an uncore request so the SoC generates the random numbers from its entropy cache. But I didn't expect them to be *slower* than the system call.
Here are timings on my tiger lake laptop to fill a gigabyte:
getrandom vdso:    1.520942015 seconds
getrandom syscall: 2.323843614 seconds
/dev/urandom:      2.629186218 seconds
rdrand:            79.510470674 seconds
rdseed:            242.396616879 seconds
And here are timings to make 25000000 calls for 4 bytes each -- in case you don't believe me about syscall transitions:
getrandom vdso:    0.371687883 seconds
getrandom syscall: 5.334084969 seconds
/dev/urandom:      5.820504847 seconds
rdrand:            15.399338418 seconds
rdseed:            45.145797233 seconds
Thanks for providing the 4-byte numbers. We ask for a minimum of 16 to amortise syscall transitions, so the numbers will be better than your 5.3-5.8 seconds.
Play around yourself. But what's certain is that getrandom() will always be *at least as secure* as rdrand/rdseed, by virtue of combining those with multiple other sources, and you won't find yourself in trouble viz buggy CPUs or whatever. And it's faster too. There's just no reason to use RDRAND/RDSEED in user code like this, especially not in a library like Qt.
I'm coming around to your point of view.
The right thing to do is to call each OS's native RNG functions. On Linux, that's getrandom(), which you can access via getentropy(). On the BSDs that's getentropy(). On Windows, there's a variety of ways in, but I assume Qt with all its compatibility concerns is best off using RtlGenRandom. Don't try to be too clever and use CPU features; the kernel already takes care of abstracting that for you via its RNG. And especially don't be too clever by trying to roll your own RNG or importing some mt19937 madness.
Indeed. Linux is *impressively* fast in transitioning to kernel mode and back. Your numbers above show getrandom() taking about 214 ns, which is about on par with what I'd expect for a system call that does some non-trivial work. Other OSes may be an order of magnitude slower, placing them on the other side of RDRAND (616 ns).
Then I have to ask myself if I care. I've been before in the situation where I just say, "Linux can do it (state of the art), so complain to your OS vendor that yours can't". Especially as it also simplifies my codebase.
I'd recommend that you fix the documentation, and change the function names for Qt7. You have a cryptographic RNG and a deterministic RNG. These both have different legitimate use cases, and should be separated cleanly as such.
QRandomGenerator *can* be used as a deterministic generator, but that's neither global() nor system(). Even though global() uses a DPRNG, it's always seeded from system(), so the user can never control the initial seed and thus should never rely on a particular random sequence.
The question remaining is whether we should use the system call for global() or if we should retain the DPRNG. This is not about performance any more, but about the system-wide impact that could happen if someone decided to fill in a couple of GB of random data. From your data, that would only take a couple of seconds to achieve.
For now, you can explain the global() one as giving insecure deterministic random numbers, but is always seeded properly in a way that will always be unique per process. And system() as giving cryptographically secure random numbers. Don't try to call anything involving std::mersenne_twister_engine "secure"; instead say, "uniquely seeded" or something like that, and mention explicitly that it's insecure and deterministic.
From everything I could read at the time, the MT was cryptographically secure so long as it had been cryptographically-securely seeded in the first place. I have a vague memory of reading a paper somewhere that the MT state can be predicted after a few entries, but given that it has a 624*32 = 19968 bit internal state I find that hard to believe.
Anyway, from the original point of the thread:
Christopher Snowhill wrote:
Anyway. A bug report was sent here:
https://lore.kernel.org/lkml/9a27f2e6-4f62-45a6-a527-c09983b8dce4@cachyos.org/
Qt is built with -march=znver4, which automatically enables -mrdseed. This is building rdseed 64 bit, but then the software is also performing kernel feature checks on startup. There is no separate feature flag for 16/32/64 variants.
Borislav Petkov wrote:
And the problem here is that, AFAICT, Qt is not providing a proper fallback for !RDSEED. Dunno, maybe getrandom(2) or so. It is only a syscall which has been there since forever. Rather, it would simply throw hands in the air.
There is a fallback, but it was disabled for those builds. It was a choice, even if not a conscious choice.
From my point of view as QtCore maintainer, if you pass -mxxx to the compiler (like -msse3, -mavx512vbmi2, etc.), you're telling the compiler and the library that they're free to generate code using that ISA extension without runtime checking and expect it to work. If you want runtime detection, then don't pass it or pass -mno-xxx after the -march. RDRAND and RDSEED are no different, nor AESNI, VAES or SHANI, which the compiler does not currently ever generate. I'm not going to change my opinion on this, even if I remove the code that depended on the particular feature.
I can't change the past. Disabling the instruction now will either generate a SIGABRT on start or a SIGILL when it's used. It's no different than if we detected the VPSHUFBITQMB instruction is broken and decided to turn off AVX512BITALG. I concur with Thomas Gleixner's summary at https://lore.kernel.org/all/878qgnw0vt.ffs@tglx/:
1) New microcode

2) Fix all source code to either use the 64bit variant of RDSEED or check the result for 0 and treat it like RDSEED with CF=0 (fail) or make it check the CPUID bit....
Or 3) recompile the code with the runtime detection enabled.
It's a pity, because Qt always uses the 64-bit variant, so it would have worked just fine.
Hi Thiago,
On Tue, Nov 04, 2025 at 03:50:37PM -0800, Thiago Macieira wrote:
Strictly speaking, if you don't have getentropy(), the fallback will be compiled in, in case someone runs the application in a messed up environment with /dev improperly populated. In practice, that never happens and getentropy() appeared in glibc 2.25, which is now older than the oldest distro we still support.
Great, so I suppose you can entirely remove /dev/[u]random support then.
But I don't want to deal with bug reports for the other operating systems Qt still supports (QNX, VxWorks, INTEGRITY) for which I have no SDK and for which even finding man pages is difficult. I don't want to spend time on them, including that of checking if they always have /dev/urandom. There are people being paid to worry about those. They can deal with them.
Ahhh. It'd be nice to gate this stuff off carefully, and maybe use a real hash function too.
Indeed. Linux is *impressively* fast in transitioning to kernel mode and back. Your numbers above are showing getrandom() taking about 214 ns, which is about on par what I'd expect for a system call that does some non-trivial work. Other OSes may be an order of magnitude slower, placing them on the other side of RDRAND (616 ns).
Then I have to ask myself if I care. I've been in this situation before, where I just say, "Linux can do it (state of the art), so complain to your OS vendor that yours can't". Especially as it also simplifies my codebase.
Well, if you want performance consistency, use arc4random() on the BSDs, and you'll avoid syscalls. Same for RtlGenRandom on Windows. These will all have similar performance as vDSO getrandom() on Linux, because they live in userspace. Or use the getentropy() syscall on the BSDs and trust that it's still probably faster than RDRAND, and certainly faster than RDSEED.
QRandomGenerator *can* be used as a deterministic generator, but that's neither global() nor system(). Even though global() uses a DPRNG, it's always seeded from system(), so the user can never control the initial seed and thus should never rely on a particular random sequence.
The question remaining is whether we should use the system call for global() or if we should retain the DPRNG. This is not about performance any more, but about the system-wide impact that could happen if someone decided to fill in a couple of GB of random data. From your data, that would only take a couple of seconds to achieve.
Oh yea, good question. Well, with every major OS now having a mechanism to skip syscalls for random numbers, I guess you could indeed just alias global() to system() and call it a day. Then users really cannot shoot themselves in the foot. That would be simpler too. Seems like the best option.
From everything I could read at the time, the MT was cryptographically secure so long as it had been cryptographically securely seeded in the first place. I have a vague memory of reading a paper somewhere that the MT state can be predicted after a few entries, but given that it has a 624*32 = 19968 bit internal state I find that hard to believe.
I suppose it's linear over F2: the tempering is invertible, so 624 consecutive 32-bit outputs are enough to recover the entire state.
1) New microcode
2) Fix all source code to either use the 64-bit variant of RDSEED, or check the result for 0 and treat it like RDSEED with CF=0 (fail), or make it check the CPUID bit.
Or 3) recompile the code with the runtime detection enabled.
It's a pity: Qt always uses the 64-bit variant, so it would have worked just fine.
4) Fix Qt to use getrandom().
Jason
On Tuesday, 4 November 2025 17:58:36 Pacific Standard Time Jason A. Donenfeld wrote:
Hi Thiago,
On Tue, Nov 04, 2025 at 03:50:37PM -0800, Thiago Macieira wrote:
Strictly speaking, if you don't have getentropy(), the fallback will be compiled in, in case someone runs the application in a messed-up environment with /dev improperly populated. In practice, that never happens, and getentropy() appeared in glibc 2.25, which is now older than the oldest distro we still support.
Great, so I suppose you can entirely remove /dev/[u]random support then.
It's already compiled out if your glibc has 2.25. Likewise, it gets compiled out for the BSDs (including macOS). That has been the case since the inception.
The /dev/[u]random code needs to remain for QNX and other OSes. https://www.qnx.com/developers/docs/8.0/search.html?searchQuery=getentropy
Indeed. Linux is *impressively* fast in transitioning to kernel mode and back. Your numbers above are showing getrandom() taking about 214 ns, which is about on par what I'd expect for a system call that does some non-trivial work. Other OSes may be an order of magnitude slower, placing them on the other side of RDRAND (616 ns).
Then I have to ask myself if I care. I've been in this situation before, where I just say, "Linux can do it (state of the art), so complain to your OS vendor that yours can't". Especially as it also simplifies my codebase.
Well, if you want performance consistency, use arc4random() on the BSDs, and you'll avoid syscalls. Same for RtlGenRandom on Windows. These will all have similar performance as vDSO getrandom() on Linux, because they live in userspace. Or use the getentropy() syscall on the BSDs and trust that it's still probably faster than RDRAND, and certainly faster than RDSEED.
Thanks for the info.
QRandomGenerator *can* be used as a deterministic generator, but that's neither global() nor system(). Even though global() uses a DPRNG, it's always seeded from system(), so the user can never control the initial seed and thus should never rely on a particular random sequence.
The question remaining is whether we should use the system call for global() or if we should retain the DPRNG. This is not about performance any more, but about the system-wide impact that could happen if someone decided to fill in a couple of GB of random data. From your data, that would only take a couple of seconds to achieve.
Oh yea, good question. Well, with every major OS now having a mechanism to skip syscalls for random numbers, I guess you could indeed just alias global() to system() and call it a day. Then users really cannot shoot themselves in the foot. That would be simpler too. Seems like the best option.
Indeed.
But consider people who haven't upgraded Linux (yes, we get people asking to keep everything intact in their system, but upgrade Qt only, then complain when our dependency minimums change). How much of an impact would they have?
1) New microcode
2) Fix all source code to either use the 64-bit variant of RDSEED, or check the result for 0 and treat it like RDSEED with CF=0 (fail), or make it check the CPUID bit.
Or 3) recompile the code with the runtime detection enabled.
It's a pity: Qt always uses the 64-bit variant, so it would have worked just fine.
- Fix Qt to use getrandom().
That's recompiling Qt anyway. I can't change existing deployments and I can't affect much of past, stable releases.
On Wed, Nov 05, 2025 at 08:41:01AM -0800, Thiago Macieira wrote:
Oh yea, good question. Well, with every major OS now having a mechanism to skip syscalls for random numbers, I guess you could indeed just alias global() to system() and call it a day. Then users really cannot shoot themselves in the foot. That would be simpler too. Seems like the best option.
Indeed.
But consider people who haven't upgraded Linux (yes, we get people asking to keep everything intact in their system, but upgrade Qt only, then complain when our dependency minimums change). How much of an impact would they have?
I suppose you could benchmark it and see if it matters. The syscall is obviously slower than the megafast vDSO code, so it will probably also be a bit slower than the MT code. But I suspect for most use cases maybe it doesn't matter that much? It's worth a try and seeing if anybody complains.
Jason
On Friday, 7 November 2025 11:13:28 Pacific Standard Time Jason A. Donenfeld wrote:
But consider people who haven't upgraded Linux (yes, we get people asking to keep everything intact in their system, but upgrade Qt only, then complain when our dependency minimums change). How much of an impact would they have?
I suppose you could benchmark it and see if it matters. The syscall is obviously slower than the megafast vDSO code, so it will probably also be a bit slower than the MT code. But I suspect for most use cases maybe it doesn't matter that much? It's worth a try and seeing if anybody complains.
I'm not asking about the performance of generating new random numbers in this process.
I am asking about the system-wide impact that draining the entropy source would have. Is that a bad thing?
I suspect the answer is "no" because it's the same as /dev/urandom anyway.
Hi Thiago,
On Fri, Nov 07, 2025 at 11:55:35AM -0800, Thiago Macieira wrote:
I'm not asking about the performance of generating new random numbers in this process.
I am asking about the system-wide impact that draining the entropy source would have. Is that a bad thing?
I suspect the answer is "no" because it's the same as /dev/urandom anyway.
Oh. "Entropy source draining" is not a real thing. There used to be bizarre behavior related to /dev/random (not urandom), but this has been gone for ages. And even the non-getrandom Linux fallback code uses /dev/urandom before /dev/random. So not even on old kernels is this an issue. You can keep generating random numbers forever without worrying about running out of juice or irritating other processes.
Jason
On Friday, 7 November 2025 15:07:13 Pacific Standard Time Jason A. Donenfeld wrote:
Oh. "Entropy source draining" is not a real thing. There used to be bizarre behavior related to /dev/random (not urandom), but this has been gone for ages. And even the non-getrandom Linux fallback code uses /dev/urandom before /dev/random. So not even on old kernels is this an issue. You can keep generating random numbers forever without worrying about running out of juice or irritating other processes.
Thank you. This probably seals the deal. I'll prepare a patch removing the direct use of the hardware instructions in the coming days.
On Fri, Nov 07, 2025 at 03:11:51PM -0800, Thiago Macieira wrote:
On Friday, 7 November 2025 15:07:13 Pacific Standard Time Jason A. Donenfeld wrote:
Oh. "Entropy source draining" is not a real thing. There used to be bizarre behavior related to /dev/random (not urandom), but this has been gone for ages. And even the non-getrandom Linux fallback code uses /dev/urandom before /dev/random. So not even on old kernels is this an issue. You can keep generating random numbers forever without worrying about running out of juice or irritating other processes.
Thank you. This probably seals the deal. I'll prepare a patch removing the direct use of the hardware instructions in the coming days.
Cool. I've got an account on the Qt Gerrit (I think, anyway; it's been some years), in case it's useful to CC me.
Jason