Linus Torvalds Slams Unseeded Randomness in Linux Kernel
- Editorial Team

- Feb 24
- 4 min read

Linux creator Linus Torvalds has ignited a robust discussion within the kernel community by sharply criticizing the use of unseeded random number generation in parts of the Linux kernel, calling it both insecure and “just wrong.” His comments, made on the Linux kernel mailing list (LKML), underscore an ongoing debate over the quality and security of entropy sources — particularly where cryptographic operations or security-sensitive features rely on randomness.
Random number generation is a cornerstone of modern computing: everything from encryption keys and security tokens to address space layout randomization (ASLR) depends on good entropy. But unlike well-seeded pseudorandom number generators (PRNGs) used in user-level applications, kernel code sometimes resorts to simpler or poorly seeded methods when true randomness isn’t readily available early in the boot process. Torvalds’ critique highlights how such practices can undermine security guarantees. (kernel.org)
What Torvalds Objected To
At the heart of the dispute is what happens when kernel code draws on randomness before the system has gathered sufficient entropy — the unpredictable data harvested from events such as interrupt timing, device I/O jitter, and CPU cycle variation. In particular, Torvalds took aim at code paths that call get_random_int() or similar helpers without proper seeding, which can yield predictable outputs on first boot or in entropy-starved environments such as freshly cloned virtual machines.
In blunt terms, Torvalds wrote that using unseeded random values “is a bad idea” and “just wrong,” stressing that developers should either avoid using randomness in contexts where security matters or guarantee that the RNG has been properly initialized with high-quality entropy. His message reflected broader frustration with ad hoc uses of randomness that can introduce vulnerabilities.
The criticism also touched on “randomized hashing,” a mechanism introduced to harden hash table implementations against denial-of-service attacks that exploit predictable hashing. While randomized hashing depends on unpredictable seeds, misuse or early use before real entropy has been collected can weaken these protections.
Why Seeded Randomness Matters
To understand the stakes, it helps to consider how entropy and pseudorandom generators work. True randomness in software comes from unpredictable environmental noise — such as timing between keystrokes, disk access jitter, or CPU cycle variations. The Linux kernel accumulates this entropy over time into an entropy pool managed by the kernel’s random subsystem.
Once enough entropy has been collected, it is used to seed pseudorandom generators that can efficiently produce cryptographically secure outputs. Without proper seeding, however, the outputs may be predictable. In security contexts — for example, generating cryptographic nonces, keys, or randomized addresses — predictable randomness can be equivalent to no randomness at all, exposing systems to targeted attacks.
This is particularly relevant in virtualized environments and embedded systems, where traditional entropy sources can be scarce during early boot. If a virtual machine starts with an uninitialized or low-entropy pool, early calls to random generators can produce repeatable sequences, a scenario attackers can exploit.
Torvalds’ remark is essentially a call for developers and maintainers to respect the distinction between safe and unsafe uses of random number generators. In his view, merely calling a random function without ensuring that the entropy pool has been properly seeded is sloppy at best and dangerous at worst.
Community Reaction and Kernel Development
The discussion on the LKML has drawn responses from multiple kernel developers, some agreeing that better safeguards are needed, others pointing out the practical difficulties of ensuring high-quality entropy at all times. Some argue for delaying certain operations until the RNG is sufficiently seeded, while others suggest improving documentation and APIs so that improper uses of randomness are harder to write.
A recurring theme is that there are legitimate use cases for lightweight random functions where security is not paramount. For example, simple scheduling heuristics or non-security hashing might not require cryptographically secure random values. Distinguishing these from security-critical paths is part of the challenge facing kernel developers.
Some contributors have suggested expanding the kernel’s randomness API to clearly differentiate between safe and unsafe functions or enforcing compile-time checks to warn when insecure randomness calls slip into sensitive code. Others note that modern CPUs with hardware random number generators — such as Intel’s RDRAND or similar features from ARM SoCs — offer an additional source of entropy that can help avoid low-entropy boot traps, but these are not universally available or trusted in all environments.
Security Implications Beyond the Kernel
The broader security community has long cautioned against weak random number generation. Past vulnerabilities — both in operating systems and cryptographic libraries — have stemmed from predictable random seeds leading to compromised keys or replayable tokens. In high-security settings such as TLS key generation, session identifiers, or cryptographic handshake nonces, poor randomness has been catastrophic.
Although the Linux kernel’s RNG implementations have evolved over decades and provide sophisticated entropy management, Torvalds’ rebuke underscores that even mature systems require vigilance. Kernel code touches so many critical subsystems — from networking and process isolation to cryptographic operations — that any weak randomness can have outsized impact.
Looking Ahead: Cleaner APIs and Better Defaults
The conversation spawned by Torvalds’ remark is likely to produce both code changes and cultural shifts in how kernel developers approach randomness. Cleaner APIs that make secure randomness easier to use without deep subsystem knowledge could reduce accidental misuse. Similarly, enhanced documentation and auditing tools can help maintainers spot risky calls.
At the same time, upstream maintainers will carefully balance the need for security with the practical realities of early boot behavior and performance constraints. Ensuring that essential functionality is not blocked just because high-quality entropy isn’t yet available will require thoughtful API design and perhaps fallback mechanisms that handle low-entropy conditions safely.
In the end, the kernel community’s response to this critique will shape how Linux handles randomness — one of its most fundamental assumptions about security and unpredictability. Torvalds’ intervention, direct as ever, may spur improvements that benefit countless systems, from servers and desktops to embedded devices and virtualized cloud instances.