ChaCha12, a construction that's been around in one form or another since the 2000s, runs at well under 1 cycle per byte on good hardware and is still plenty fast on "bad" hardware: https://bench.cr.yp.to/results-stream.html (if I understand correctly, that's just using SIMD, no special acceleration).
But it doesn't really matter what things were like when the code was first written; it's about how it could be fixed in the present.
Video games do not generate large streams of data; they generate individual values on demand. Your link says that for 8-byte outputs, ChaCha12 on modern hardware needs 24-45 cycles per byte. That's 192-360 cycles to generate enough bits for one pseudorandom double-precision floating-point number. Xoshiro256+ [1], a relatively high-quality generator for this purpose, can do it with 11 single-cycle ALU ops (see the sketch below). So unoptimized xoshiro256+ should be ~17x faster than optimized ChaCha12 on the best hardware. This is a classic latency-vs-throughput issue.
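For reference, here's a minimal Go sketch of xoshiro256+, following the public-domain reference implementation, to show how short the per-output work is (the Float64 conversion is the usual 53-bit trick, not part of the generator proper):

```go
package main

import "math/bits"

// xoshiro256plus is a sketch of the xoshiro256+ generator: 32 bytes of
// state and a handful of single-cycle ALU ops per output.
type xoshiro256plus struct{ s [4]uint64 } // state must be seeded non-zero

func (x *xoshiro256plus) Uint64() uint64 {
	result := x.s[0] + x.s[3] // the output is ready after a single add

	t := x.s[1] << 17
	x.s[2] ^= x.s[0]
	x.s[3] ^= x.s[1]
	x.s[1] ^= x.s[2]
	x.s[0] ^= x.s[3]
	x.s[2] ^= t
	x.s[3] = bits.RotateLeft64(x.s[3], 45)
	return result
}

// Float64 converts the top 53 bits to a double in [0, 1).
func (x *xoshiro256plus) Float64() float64 {
	return float64(x.Uint64()>>11) * 0x1p-53
}
```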
Now, maybe you could optimize the use of a CSPRNG here by filling large buffer(s) and sampling values from them; a sketch of the simplest variant follows this list. Some warm-up time could go a long way. However, I fear that you would run into one or more of the following problems:
- stop-the-world pause to refill the buffer (e.g. single buffer, no threading)
- synchronization delays from mutex locks (e.g. ring buffer refilled from a background thread)
- high memory usage (e.g. rotating pool of buffers, atomically swapped)
Needless to say, none of these solutions is anywhere near as simple to implement as a non-cryptographic PRNG.
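To make the tradeoff concrete, here's a minimal sketch of the simplest (single-buffer, stop-the-world) variant, with crypto/rand standing in for whatever CSPRNG core you'd actually use:

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
)

// bufferedRand amortizes one expensive fill call over many cheap reads.
type bufferedRand struct {
	buf []byte // refilled in one expensive call
	off int    // read cursor
}

func newBufferedRand(size int) *bufferedRand {
	return &bufferedRand{buf: make([]byte, size), off: size}
}

func (b *bufferedRand) Uint64() uint64 {
	if b.off+8 > len(b.buf) {
		// Stop-the-world refill: every caller waits here.
		if _, err := rand.Read(b.buf); err != nil {
			panic(err)
		}
		b.off = 0
	}
	v := binary.LittleEndian.Uint64(b.buf[b.off:])
	b.off += 8
	return v
}
```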
Now let's consider determinism. Video games generally use many differently seeded instances of the same PRNG algorithm to provide random numbers in different parts of the simulation. Since each part may demand random numbers at a different rate, it's hard to replace several independent PRNGs with a single PRNG without compromising determinism. In the 4096 bytes necessary to run one instance of ChaCha12 at its maximum efficiency, you can fit 128 instances of xoshiro256+ or 512 instances of splitmix64 [2].
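Splitmix64 is small enough to quote in full; a sketch following the public-domain reference, commonly used to expand one master seed into per-subsystem seeds (seedXoshiro is a hypothetical helper, not from anything above):

```go
// splitmix64 holds 8 bytes of state and produces one uint64 per step.
type splitmix64 struct{ x uint64 }

func (s *splitmix64) Uint64() uint64 {
	s.x += 0x9e3779b97f4a7c15
	z := s.x
	z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9
	z = (z ^ (z >> 27)) * 0x94d049bb133111eb
	return z ^ (z >> 31)
}

// seedXoshiro deterministically derives a xoshiro256+ state from one seed.
func seedXoshiro(master uint64) (s [4]uint64) {
	sm := splitmix64{x: master}
	for i := range s {
		s[i] = sm.Uint64()
	}
	return s
}
```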
Generating random numbers into a refillable buffer is the standard for high-performance RNG, whether you're doing CSPRNG or not (it's how V8 uses xorshift, for example). "stop the world" is not a problem when the time to refill the buffer is still measured in nanoseconds.
There's nothing stopping you from having multiple CSPRNG instances, with deterministic seeds, if that's a design requirement.
I ran a microbenchmark with Go 1.22, which ships with both ChaCha8 (a little faster than ChaCha12) and PCG (a little slower than xoshiro256+). On my M1 Mac, I'm seeing 2.3 ns/uint64 for PCG and 4.5 ns/uint64 for ChaCha8. Assuming 3.2 GHz clock speed, that puts them at about 1 and 2 cpb respectively. With a floating-point conversion in the mix, they ran at 4.0 and 4.5 (!) ns/float64 respectively. That's far better than I would have thought.
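The gist of the benchmark, using math/rand/v2 (a simplified sketch, not necessarily the exact program):

```go
package bench

import (
	"math/rand/v2"
	"testing"
)

// Package-level sinks keep the compiler from eliminating the calls.
var (
	sinkU uint64
	sinkF float64
)

func BenchmarkPCGUint64(b *testing.B) {
	r := rand.NewPCG(1, 2)
	for i := 0; i < b.N; i++ {
		sinkU = r.Uint64()
	}
}

func BenchmarkChaCha8Uint64(b *testing.B) {
	r := rand.NewChaCha8([32]byte{})
	for i := 0; i < b.N; i++ {
		sinkU = r.Uint64()
	}
}

func BenchmarkPCGFloat64(b *testing.B) {
	r := rand.New(rand.NewPCG(1, 2))
	for i := 0; i < b.N; i++ {
		sinkF = r.Float64()
	}
}

func BenchmarkChaCha8Float64(b *testing.B) {
	r := rand.New(rand.NewChaCha8([32]byte{}))
	for i := 0; i < b.N; i++ {
		sinkF = r.Float64()
	}
}
```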
So yes, a good, slightly buffered (the internal state seems to be ~300 bytes) implementation of a (quasi-?)CSPRNG is pretty damn close to a decent-quality non-cryptographic PRNG and likely fast enough for most video games on most hardware. Though very few people write games in Go.
That's not quite a correct benchmark, if I'm understanding what you benchmarked. Xoshiro and PCG need a chain of serial ops to generate a number but not a lot of parallelism, which lets the core do other work while they run. ChaCha needs a lot of parallelism, and you are essentially saturating the core running it.
The built-in benchmarking tool in Go runs with parallelism set to 2x the number of cores. That should make evident any code that scales poorly. It is a tunable parameter, though; I can try 4x cores, etc.
I also ran it on x86 (AMD Ryzen 5600X) and got similar results; everything ran faster on a 4.6 GHz chip, but ChaCha8 stayed around 2 cpb while PCG improved a little to about 0.7 cpb.
I do have a ~10-year-old Celeron NUC lying around that I could use to test older/lower-end hardware. I can also publish the (admittedly very simple) program I'm using.
It's also worth noting that cpb is still a measure of average throughput, not variance or latency. While Go's buffered ChaCha8 implementation amortizes to 2 cpb, when the 32-element buffer is exhausted, generating the next number is much more expensive than the previous 31. I wouldn't say the performance is good enough for every use case.
OK, I was able to figure out a few more things. First, the benchmark runner does not necessarily exploit the parallelism of GOMAXPROCS out of the box. Second, GOMAXPROCS seems to default to 1x logical cores; I last ran it on a machine where logical cores = 2x physical cores. I changed the benchmark code to use RunParallel and varied the parallelism of each run.
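The adjusted benchmark looks roughly like this (a sketch; `go test -bench . -cpu 1,2,4,8` reruns it with GOMAXPROCS set to each value in turn):

```go
package bench

import (
	"math/rand/v2"
	"sync/atomic"
	"testing"
)

var sink atomic.Uint64 // keeps the result observable without a data race

func BenchmarkChaCha8Parallel(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		// RunParallel starts GOMAXPROCS goroutines by default;
		// give each its own generator instance.
		r := rand.NewChaCha8([32]byte{})
		var s uint64
		for pb.Next() {
			s += r.Uint64()
		}
		sink.Add(s)
	})
}
```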
Testing on Celeron J3455 @ 1.5 GHz (4 physical and logical cores) gave me PCG at 1.2 cpb and ChaCha8 at 2.6 cpb with cpu=1, but PCG stayed relatively constant across cpu=1,2,4,8 (worst was 1.8 cpb) while ChaCha8 slowed to 6.5 cpb at cpu=2 and 7.5 cpb at cpu=4 and cpu=8.
Back on my M1 Mac (8 logical and physical? cores), both ChaCha8 and PCG generally got better with more cores. ChaCha8 got down to 0.76 cpb at cpu=4 (then regressed a bit at cpu=8) while PCG got down to 0.26 cpb at cpu=8.
I don't think any of these results rule ChaCha8 out completely, though again I'm looking from the perspective of video games, which generally monopolize a machine while running.
The important question to benchmark for small buffer sizes isn't actually cycles per byte; it's micro-ops per byte. You are sort of getting there by adding threads, but if you want to measure micro-ops per byte by way of cycles per byte, your benchmarking code should run a number of parallel instances of the generator on each thread (and stick to 1 thread per physical core, too). Each generator will have a "roof" where more generators per thread don't cost any more in terms of cycles/byte. I am assuming that for PCG, that roof is around 4-ish on an old CPU, and may be as high as 6 or 8 on your MacBook, while the roof for ChaCha will be close to 1.
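Something like this sketch is what I mean: K independent PCG states interleaved in one loop so the core can overlap their serial dependency chains, with K swept until cycles/byte stops improving:

```go
package bench

import (
	"math/rand/v2"
	"testing"
)

var sinkU uint64

// BenchmarkPCGInterleaved does K generations per b.N iteration, so divide
// the reported ns/op by K. Separate accumulators keep the K dependency
// chains independent of each other.
func BenchmarkPCGInterleaved(b *testing.B) {
	const K = 4 // sweep 1, 2, 4, 8, ... to find the "roof"
	var rs [K]*rand.PCG
	for i := range rs {
		rs[i] = rand.NewPCG(uint64(i)+1, uint64(i)+2)
	}
	var acc [K]uint64
	for i := 0; i < b.N; i++ {
		for j := 0; j < K; j++ {
			acc[j] += rs[j].Uint64()
		}
	}
	var s uint64
	for _, a := range acc {
		s ^= a
	}
	sinkU = s
}
```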
ChaCha exploits instruction-level parallelism to get its speed. PCG doesn't: it has a chain of instructions that must be executed sequentially. That means that while the PCG generator executes, it leaves gaps in the instruction stream that can be filled with other instructions from the game; ChaCha fills those gaps itself. So in a real game, ChaCha causes a slowdown more significant than what your benchmark suggests.
I'm going to do this and write a blog post about it (although I don't have a MacBook), so we may be able to compare results.
There are CSPRNG constructions based on AES that would probably be appropriate, but they are still a huge performance hit. PCG, Xorshift, Xoshiro, and the like are all about 50x faster than the fastest CSPRNGs in sustained throughput, and they also have smaller state and much less spin-up time.
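For illustration, a minimal sketch of an AES-CTR keystream generator built from Go's standard library (real designs such as AES-CTR DRBG or Randen add reseeding and other hardening, and this is only fast where AES-NI or equivalent hardware support exists):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"encoding/binary"
)

// aesCTRRand draws 64-bit values from an AES-CTR keystream.
type aesCTRRand struct {
	stream cipher.Stream
}

func newAESCTRRand(key [16]byte) *aesCTRRand {
	block, err := aes.NewCipher(key[:])
	if err != nil {
		panic(err) // unreachable: the key length is fixed at 16 bytes
	}
	var iv [16]byte // the initial counter block; use a nonce in real code
	return &aesCTRRand{stream: cipher.NewCTR(block, iv[:])}
}

func (r *aesCTRRand) Uint64() uint64 {
	var out [8]byte
	// XORing zeros with the keystream yields the raw keystream bytes.
	r.stream.XORKeyStream(out[:], out[:])
	return binary.LittleEndian.Uint64(out[:])
}
```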