
You should use your own PRNG in that case.
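Owning the generator is only a few lines. Here is a minimal sketch (in Python for illustration; the thread is about .NET, and the constants below are standard xorshift64 choices, not anything from System.Random) of a PRNG whose sequence can never change underneath you:

```python
class Xorshift64:
    """A tiny PRNG we own outright: its output for a given seed
    can never change out from under us in a library update."""

    def __init__(self, seed):
        # Avoid the all-zero state, which xorshift can never escape.
        self.state = (seed & 0xFFFFFFFFFFFFFFFF) or 0x9E3779B97F4A7C15

    def next_u64(self):
        x = self.state
        x ^= (x << 13) & 0xFFFFFFFFFFFFFFFF
        x ^= x >> 7
        x ^= (x << 17) & 0xFFFFFFFFFFFFFFFF
        self.state = x
        return x

# Same seed, same series -- guaranteed by us, not by a vendor.
a, b = Xorshift64(42), Xorshift64(42)
assert [a.next_u64() for _ in range(5)] == [b.next_u64() for _ in range(5)]
```

Anything with this shape gives you a stable seed contract regardless of what the platform's default generator does.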

I understand not wanting to change the implementation now, but users should never have assumed it would be stable in the first place.



> users should never have assumed it would be stable in the first place.

It's not an assumption. It's directly in the documentation.

"If the same seed is used for separate Random objects, they will generate the same series of random numbers."[0]

[0] - https://docs.microsoft.com/en-us/dotnet/api/system.random


Read just a little further:

“However, note that Random objects in processes running under different versions of the .NET Framework may return different series of random numbers even if they're instantiated with identical seed values.”


I mean, yeah. You probably should. But it's entirely reasonable of a game developer to say "I'm not an expert in random numbers, but Microsoft has lots of smart engineers, I'm sure they did their research and provided a good implementation".

The actual answer is that you shouldn't just provide a default "Random" class, you should provide a more general class with a pluggable algorithm.
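One hypothetical shape for such a pluggable-algorithm API (sketched in Python; this is not the actual .NET design, and the LCG constants are the illustrative Numerical Recipes ones): the public class fixes the interface, while the algorithm is an explicit strategy the caller names, so improving a default can never silently change anyone's seeded sequence.

```python
from abc import ABC, abstractmethod

class PrngAlgorithm(ABC):
    """The algorithm is an explicit, swappable strategy."""
    @abstractmethod
    def next_u32(self): ...

class Lcg32(PrngAlgorithm):
    # Illustrative 32-bit LCG; any algorithm exposing the same
    # interface could be plugged in instead.
    def __init__(self, seed):
        self.state = seed & 0xFFFFFFFF

    def next_u32(self):
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state

class PluggableRandom:
    """Callers choose the algorithm by name, so a 'fixed' or
    improved default can't break existing seeded sequences."""
    def __init__(self, algorithm):
        self._alg = algorithm

    def next_double(self):
        # Map a 32-bit draw into [0, 1).
        return self._alg.next_u32() / 2**32

rng = PluggableRandom(Lcg32(seed=1234))
```

With this split, "we shipped a better algorithm" is an additive change (a new strategy class), not a breaking one.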


No, that’s not reasonable at all. You’d be assuming not only that the implementation is exactly what you want, but also that it will be identical on all platforms and will never change.

In practice for .NET it sounds like that’s actually correct -- the bad implementation will never be fixed. That seems like a bad thing.


What's the point of providing a seed() function if the algorithm can change from under your feet, for any given implementation? In your scenario the only way to have seed() is through a custom implementation, because any implementation may have bugs or inconsistencies that may be fixed at any time. Only your own implementation will stay stable and sane.

And this is true for all backward-compatibility concerns: you'll have a bug, or a poor syntax decision, or a crappy API that's required to stay because of downstream concerns. If you keep breaking people's programs to improve the language, people will either eventually stop updating, or stop using the language altogether, because it becomes a massive PITA to get any new features; do it enough and people will say "fuck it, you can't be trusted to stay stable, I'll write it myself." And eventually a library will come along that promises stability, and you'll be back in the same boat.

Stability is a feature. And judging from how languages treat stability today, and from how one of Microsoft's major reasons for success was its almost obscene adherence to backwards compatibility, it is an important feature.

The cost, of course, is that these problems persist and eventually build up until someone forks, or a major version increments.

But there's a reason that perfect is the enemy of good. Breaking programs arbitrarily to fix bugs/issues slaughters downstream productivity.


I think that is sometimes right and sometimes wrong. It’s not consistent enough to elevate to a principle.

Macs were incredible for backwards-compatibility back in the 80s and 90s, as good as PCs if not better. Games from 1985 would run happily in System 7 and MacOS 8. It didn’t help them win against the PC.

Since the return of Steve Jobs, Apple have become increasingly aggressive about killing off old “obsolete” hardware and software features. As a Mac or iOS developer it can be incredibly frustrating, constantly having to jump through new hoops just to be permitted to stay on the platform. But that doesn’t seem to have hurt Apple’s business success in the slightest.

To answer your initial question--

> What's the point of providing a seed() function if the algorithm can change from under your feet, for any given implementation?

I was imagining that the algorithm would be stable across runs but permitted to change across major library updates, say.

But I forgot there are two parts to it. One is seed(), the other is the no-args constructor that uses the system clock but no additional randomness. Can we at least agree that that one should be fixed? It’s hard to see how any users even could have a hard dependency on that specific implementation. Like, code that absolutely requires independent Random objects created in the same millisecond to have the same seed? Do you see a big risk in breaking clients like that, for the benefit of improving randomness for everybody else?
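That clock-seeding failure mode is easy to demonstrate. A sketch (in Python, with a fixed fake millisecond clock standing in for the system tick count; the LCG is illustrative, not .NET's actual algorithm): two generators constructed in the same millisecond share a seed, so their "independent" streams are identical.

```python
def tick_count_ms():
    # Stand-in for a millisecond system clock. Fixed here so the
    # collision is reproducible; in real code, two objects
    # constructed within the same millisecond see the same value.
    return 1_700_000_000_000

class ClockSeededRandom:
    """Illustrative generator seeded only from the clock,
    mimicking a no-args Random() constructor."""

    def __init__(self):
        self.state = tick_count_ms() & 0xFFFFFFFF

    def next(self):
        self.state = (1103515245 * self.state + 12345) & 0x7FFFFFFF
        return self.state

a, b = ClockSeededRandom(), ClockSeededRandom()
# Same millisecond -> same seed -> identical "random" streams.
assert [a.next() for _ in range(3)] == [b.next() for _ in range(3)]
```

Mixing in any extra per-instance entropy at construction time would break this correlation without affecting anyone who seeds explicitly.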


> Breaking programs arbitrarily to fix bugs/issues slaughters downstream productivity.

In this particular case, though, there is a chance that the bug is what is actually breaking the programs: As mentioned in the GitHub comments, it is possible to produce not-too-contrived simulations which fail completely under System.Random, and for which a fix would make the program less broken.

As long as Microsoft fails to document the brokenness on MSDN, there will be users assuming that the PRNG does what it's supposed to do, and who are at risk of drawing incorrect statistical conclusions. What they do state in the documentation is the following [0]:

> The implementation of the random number generator in the Random class isn't guaranteed to remain the same across major versions of the .NET Framework. As a result, you shouldn't assume that the same seed will result in the same pseudo-random sequence in different versions of the .NET Framework.

[0]: https://docs.microsoft.com/en-us/dotnet/api/system.random?re...


> I'm sure they did their research and provided a good implementation

Apparently their implementation has several shortcomings.



