earlbellinger's comments | Hacker News

This is very exciting. I worked on these kinds of technologies a bit in graduate school (before switching gears completely and getting a PhD in astrophysics). In the coming decades, we will be able to detect who is on the path to disease before they ever begin showing any symptoms at all. Granted, curing these diseases post-detection is a whole different matter.


For problem (b), this is the standard case for inverse problems, which are almost always ill-posed / ill-conditioned. There is now a vast literature on this subject, including many algorithms for solving them robustly.

"...incorrectly posed problems were considered ‘taboo’ for generations of mathematicians, until comparatively recently it became clear that there are a number of quite meaningful problems, the so-called ‘inverse problems,’ which are nearly always unstable with respect to fluctuations of input data." — H. Allison, Inverse Unstable Problems and Some of Their Applications (1979)


Hmm... I'm not sure I fully agree. Fundamentally, an ill-posed problem maps a large input space to a very small output space, so inverting it is impossible in the presence of any noise (much like trying to reconstruct a signal sampled below the Nyquist-Shannon rate), unless you know something more about the noise (e.g., concentration phenomena in differential privacy) or about the underlying signal (e.g., sparsity in compressed sensing), or you don't mind noise in the recovered parameters so long as the output signal is well-represented by them (which covers most of ML).
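
To make that concrete, here's a toy sketch (my own numbers, nothing from the article): a forward map that averages a long signal down to a handful of measurements throws information away, so no inversion scheme can recover the input without extra structure.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 64, 8                      # large input space -> small output space

    # Forward operator: block-average each run of 8 samples into one measurement.
    A = np.kron(np.eye(n_out), np.ones((1, n_in // n_out)) / (n_in // n_out))

    x_true = rng.standard_normal(n_in)       # some underlying signal
    y = A @ x_true                           # the only data we get to see

    # The pseudoinverse gives *a* signal consistent with the data, but it is just
    # one of infinitely many, so it bears little resemblance to x_true.
    x_rec = np.linalg.pinv(A) @ y
    print("fits the data?", np.allclose(A @ x_rec, y))    # True
    print("recovered x?  ", np.allclose(x_rec, x_true))   # False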

A ton of cases that are "usually" ill-posed, as in compressed sensing, physical design, etc., have some additional structure which can be encoded as regularizers or priors. For example, in compressed sensing we know the input signal is sparse, so even though we have far fewer samples than the Nyquist rate would require, that extra knowledge, usually encoded as an l1-regularizer or some other sparsifying regularizer, makes the resulting optimization problem well-posed.
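
As a rough sketch of that (toy sizes and sklearn's Lasso solver are my own choices here, not anything from the article): with far fewer measurements than unknowns, the minimum-norm least-squares solution smears energy everywhere, while an l1 penalty encoding the known sparsity approximately recovers the true signal.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, m, k = 200, 60, 5                           # unknowns, measurements, nonzeros

    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
    y = A @ x_true                                 # m << n noiseless samples

    # Plain least squares (minimum-norm solution): consistent with y, far from x_true.
    x_ls = np.linalg.pinv(A) @ y

    # l1-regularized least squares: the sparsity prior picks out the right support
    # (up to some shrinkage of the nonzero coefficients).
    x_l1 = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000).fit(A, y).coef_

    print("least-squares error:", np.linalg.norm(x_ls - x_true))
    print("l1 error           :", np.linalg.norm(x_l1 - x_true))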

That being said, I'm not sure how this immediately plays into (b). My claim is that you're pretty much hosed unless you increase the general precision/range of the algorithm, which is independent of the noise in the input and instead depends on the actual implementation: the atoms the algorithm uses (sums, multiplications, etc., of two floating-point numbers, for example). In other words, you have to increase the precision of those atoms (e.g., as in the article, switching to arbitrary-precision fractions when sums of large numbers are needed).
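
A toy illustration of that precision point (not the article's actual code): the same cancellation-prone sum computed with float atoms and with exact rational atoms.

    from fractions import Fraction

    # Float arithmetic: the small term is absorbed by the large one, so the
    # mathematically nonzero result comes out as exactly 0.0.
    print((1e16 + 1.0) - 1e16)                                  # 0.0

    # Exact rational arithmetic with the same values keeps every bit.
    print((Fraction(10)**16 + Fraction(1)) - Fraction(10)**16)  # 1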

---

NOTE: I guess more generally, I'm defining ill-conditioning/numerical instability as a problem with the algorithm in the sense that, for a fixed input precision, perfect arithmetic (the "correct" result) will differ heavily from the result with truncated arithmetic (with floats, or whatever it may be).
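
A minimal demo of that definition, using Muller's classic recurrence (my choice of example, not something from the article): identical inputs and an identical algorithm, but the truncated-arithmetic atoms give a wildly different answer from the exact ones.

    from fractions import Fraction

    def iterate(x0, x1, steps):
        # x_{n+1} = 108 - (815 - 1500/x_{n-1}) / x_n
        a, b = x0, x1
        for _ in range(steps):
            a, b = b, 108 - (815 - 1500 / a) / b
        return b

    # Exact arithmetic converges to the true limit, 5; float64 rounding errors
    # get amplified until the iteration lands on the spurious fixed point 100.
    print(float(iterate(Fraction(4), Fraction(17, 4), 30)))   # ~5.0
    print(iterate(4.0, 4.25, 30))                             # ~100.0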

