
Had this experience on several occasions in C# on 32-bit. The whole idea of 80-bit operations is flawed in an environment where you can’t control the generated native code, register allocation and so on (i.e. in most high-level languages). We became so used to these bugs that we immediately recognized them whenever some calculation differed only on certain machines.

As C# is JIT-compiled, you could never be sure what code would actually run on the end user’s machine, or where the truncation from 80 to 64 bits would occur.

In the end, the best cure is to ensure you never use x87 at all, which happened automatically once we dropped 32-bit support.

Determinism is too important to give up for 16 extra bits.
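
To make the effect concrete, here is a minimal C sketch of the same problem (C rather than C#, since the behavior comes from x87 code generation rather than the language; whether the comparison actually fails depends on the compiler, flags and register allocation, which is exactly the nondeterminism described above):

    #include <stdio.h>

    /* volatile keeps the compiler from folding the product at compile
       time, so the floating-point code actually runs at run time. */
    volatile double a = 0.1, b = 0.3;

    int main(void)
    {
        double stored = a * b;   /* rounded to 64 bits once spilled to memory */

        /* With x87 code generation (e.g. gcc -m32 -mfpmath=387) the product on
           the right-hand side may be kept at 80-bit precision in a register, so
           the comparison can be false; with SSE2 (the x86-64 default) the
           product is computed in 64 bits and the comparison is always true. */
        printf("%s\n", stored == a * b ? "equal" : "different");
        return 0;
    }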




I feel like I once read that, traditionally, numerical/scientific code had to run on so many weird high-performance computers (Crays and the like) that you had to be robust to all sorts of different floating-point behavior anyway.

Nowadays maybe that sort of diversity is less of an issue? Expecting determinism in the sense you mean just seems weird to me.


Having absolute determinism is probably still difficult, but using SSE on x64 on Windows, where all users have compatible compilers (i.e. determinism without diversity), is at least “good enough” nowadays. I haven’t seen any issues with that scenario so far, even though it’s certainly possible for problems to arise.


I think it’s in Goldberg ’91 (“What Every Computer Scientist Should Know About Floating-Point Arithmetic”).


The rounding to 64 bits (it’s a rounding, not a truncation) never occurs if you use a language type that is 80 bits wide.
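
For example, in C on x86 Linux, long double maps to the 80-bit x87 extended format (this is compiler- and platform-dependent; MSVC makes long double a 64-bit double), so keeping everything in that type means a spill to memory doesn’t change the value:

    #include <stdio.h>
    #include <float.h>

    volatile long double a = 0.1L, b = 0.3L;

    int main(void)
    {
        long double stored = a * b;   /* stored in memory as the full 80-bit value */

        /* 64 significand bits where long double is the x87 extended format
           (GCC/Clang on x86); 53 where long double is just a 64-bit double. */
        printf("significand bits: %d\n", LDBL_MANT_DIG);

        /* The variable's type matches the register format, so storing and
           reloading does not round the value and the comparison holds. */
        printf("%s\n", stored == a * b ? "equal" : "different");
        return 0;
    }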



