The author acknowledges early on that `Random.secure` is there to provide randomness where it needs to be secure, but then spends the entire article fretting over how the other source of randomness isn't. But it's not supposed to be!
Applications have plenty of reasons for cheap, stable, or otherwise controlled forms of randomness and shouldn't have to pay the penalty for -- or face the reproducibility limitations of -- secure randomness when that's not what they need. It's both useful and traditional for standard libraries to either include both forms, or to leave the secure form for other libraries to contribute.
If an application uses the wrong supply of randomness for its needs, that's an application error. And if developers writing security-sensitive code don't think to anticipate this distinction and avoid the error, then that sounds like a project management error in assigning them such a sensitive task.
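To make the distinction concrete, here's a minimal sketch using Dart's actual `dart:math` API: the seedable `Random` is deterministic and reproducible, while `Random.secure()` draws from an OS-backed entropy source and accepts no seed.

```dart
import 'dart:math';

void main() {
  // Seeded Random: the same seed yields the same sequence,
  // which is exactly what tests, simulations, and procedural
  // generation want.
  final a = Random(42);
  final b = Random(42);
  print(a.nextInt(100) == b.nextInt(100)); // true

  // Random.secure(): cryptographically secure, unseedable,
  // and therefore not reproducible -- use it for tokens,
  // nonces, and anything security-sensitive.
  final s = Random.secure();
  print(s.nextInt(100)); // unpredictable
}
```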
Well, the article still has a point in that the insecure PRNG is neutered for seemingly no reason, contrary to developer expectations.
But I think we should, as language developers and users, be well beyond the point of pushing out and accepting insecure defaults with a little documentation disclaimer - especially if even the Dart developers themselves can't catch misuse of their own insecure APIs in widespread tools.
At the end of the day, we don't live in a computing world where secure randomness is that resource-intensive anymore, and user-facing applications should basically always be relying on it. Insecure randomness should really be the opt-in, when developers can actually justify that the performance penalty is a problem. Otherwise, I don't need library authors prematurely optimising my programs at the cost of security.
(There is a similar discussion to be had about the pervasiveness of floating point in programming languages, when most applications would be better served by a more expensive, but more precise, numerical representation - since they rarely do that much calculation.)
It’s not random, that’s the point. If you neuter it so much that it’s only useful for a specific use case then make sure it’s named something relevant and specific related to that use case!