
I have a feeling that Kahan lived in a time when matrix inversion and eigenvalue computation were considered the new hotness, just like neural networks today.

It is very easy to build small invertible matrices that cannot be inverted with 32-bit floats, or even 64-bit floats, hence Kahan's insistence on very high-precision floating point.

Edit: small invertible matrices
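
For a concrete sketch of this claim (an editorial illustration, not from the comment): in numpy, the matrix below is invertible in exact arithmetic, but float32 rounds 1 + 1e-8 to exactly 1, so the stored matrix becomes singular and inversion fails outright.

    import numpy as np

    # Invertible in exact arithmetic: det = 1e-8, small but nonzero.
    A64 = np.array([[1.0, 1.0],
                    [1.0, 1.0 + 1e-8]], dtype=np.float64)
    A32 = A64.astype(np.float32)   # 1 + 1e-8 rounds to exactly 1.0 here

    print(np.linalg.det(A64))      # ~1e-8: float64 still resolves it
    try:
        np.linalg.inv(A32)         # the float32 copy is exactly singular
    except np.linalg.LinAlgError as err:
        print("float32 inversion failed:", err)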




Kahan is a numerical analyst, and probably had extensive experience worrying about known numerical-analysis pitfalls such as the tablemaker's dilemma: you don't necessarily know a priori how much precision you need in the inputs or intermediate computations to get the desired output precision.
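
To make the dilemma concrete, here is a rough Python sketch (my illustration, using mpmath; exp_rounded is a made-up helper) of Ziv's strategy, the standard workaround: keep raising the working precision until the result, rounded to the target precision, stops changing.

    from mpmath import mp, mpf, exp, nstr

    def exp_rounded(x_str, digits=4):
        # We cannot know in advance how much working precision a
        # correctly rounded exp(x) needs, so start with a guess and
        # keep doubling until the rounded answer stabilizes.
        prec = digits + 10
        while True:
            mp.dps = prec
            lo = nstr(exp(mpf(x_str)), digits)
            mp.dps = 2 * prec
            hi = nstr(exp(mpf(x_str)), digits)
            if lo == hi:      # survived a precision bump: accept it
                return lo
            prec *= 2         # near a rounding boundary: retry wider

    print(exp_rounded("1.0"))  # '2.718'

Agreement across one precision bump is a heuristic rather than a proof of correct rounding, but it captures the point: the precision required depends on the input in a way you cannot cheaply bound in advance.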

As it turns out, though, most people don't need more than 3 or 4 decimal digits of precision, so while a float may easily accumulate enough error to corrupt that last needed digit, a double tends to be more than roomy enough for almost everybody.
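
A quick numpy sketch (mine) of that accumulation effect: a sequential float32 running sum of 0.1, repeated a million times, is already wrong in its leading digits, while float64 has digits to spare.

    import numpy as np

    x = np.full(1_000_000, 0.1, dtype=np.float32)
    total32 = np.add.accumulate(x)[-1]    # naive sequential float32 sum
    total64 = x.astype(np.float64).sum()  # same values summed in double
    print(total32)   # noticeably off from 100000 in the leading digits
    print(total64)   # ~100000.0, accurate well past 3-4 digits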


> lived

The man lives still, not to be numbered among the dead:

https://en.m.wikipedia.org/wiki/William_Kahan


Then he must be apoplectic about the new bfloat16 format - only a 7-bit mantissa - what the heck

It's totally possible that scientists in the future will laugh at our extravagantly wide 16-bit neural networks. Computation at the biological synapse level is considered to have an accuracy of only 1 to 3 bits - opinions differ.
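
To put a number on how coarse a 7-bit mantissa is, here is a hypothetical sketch (mine; it approximates bfloat16 by truncating a float32's low 16 bits, whereas real hardware rounds to nearest):

    import struct

    def to_bfloat16(x: float) -> float:
        # bfloat16 keeps float32's sign and 8 exponent bits but only
        # the top 7 fraction bits, i.e. the high 16 bits of a float32.
        bits = struct.unpack("<I", struct.pack("<f", x))[0]
        return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

    print(to_bfloat16(1.003))  # 1.0 -- steps near 1.0 are 2**-7 wide
    print(to_bfloat16(257.0))  # 256.0 -- integers start merging past 256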


I assume you mean small invertible matrices that cannot be inverted with …


> small matrices that cannot be inverted with 32-bit floats, or even 64-bit floats

Presumably these all have eigenvalues both near zero and far from zero (i.e. the condition number is large)?


https://en.wikipedia.org/wiki/Hilbert_matrix

"The Hilbert matrices are canonical examples of ill-conditioned matrices, being notoriously difficult to use in numerical computation."


Yes. Ill-conditioned matrices are hard to “invert” (more precisely, hard to solve Ax=b for, given A and b) almost by definition. The condition number is basically a measure of sensitivity: if I change b a little, the relative change in x can be as large as the relative change in b multiplied by the condition number. If the matrix is very ill-conditioned (say a condition number above 1e10 for a double-precision algorithm), it is singular for all practical purposes, and the problem is really ill posed.
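
That proportionality is easy to see numerically. A sketch (mine, reusing the Hilbert matrix from the sibling comment): perturb b by a relative ~1e-10 and the relative change in x is amplified by a sizeable fraction of cond(A).

    import numpy as np
    from scipy.linalg import hilbert

    A = hilbert(10)                         # cond(A) ~ 1e13
    x_true = np.ones(10)
    b = A @ x_true

    db = 1e-10 * np.random.default_rng(0).standard_normal(10)
    x1 = np.linalg.solve(A, b)
    x2 = np.linalg.solve(A, b + db)

    rel_b = np.linalg.norm(db) / np.linalg.norm(b)
    rel_x = np.linalg.norm(x2 - x1) / np.linalg.norm(x1)
    print(rel_b, rel_x, rel_x / rel_b)      # amplification is a large
                                            # fraction of cond(A)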



