
> People are able to interpret speech under very high levels of noise due to having strong priors / being able to guess well / having shared state with the speaker.

In the information-theoretic sense, this just means there is enough redundancy in the signal that you can error-correct the noise. If you're in a noisy room talking to a coworker, your shared context and priors act as redundant bits of information. If she says something but you can't tell whether it was "nachos", "tacos", or "broncos", the plate of nachos next to you adds redundancy to the signal.
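A minimal sketch of that idea as Bayesian decoding (the candidate words and all probability values are made up for illustration): the noisy audio is equally consistent with three words, but the prior from shared context breaks the tie.

```python
# Hypothetical disambiguation of a garbled word using a context prior.
candidates = ["nachos", "tacos", "broncos"]

# Likelihood of the noisy audio under each candidate -- assume the noise
# makes them acoustically indistinguishable.
likelihood = {"nachos": 1 / 3, "tacos": 1 / 3, "broncos": 1 / 3}

# Prior from shared context: there's a plate of nachos next to you.
prior = {"nachos": 0.90, "tacos": 0.08, "broncos": 0.02}

# Posterior via Bayes' rule (normalize the product of likelihood and prior).
unnorm = {w: likelihood[w] * prior[w] for w in candidates}
z = sum(unnorm.values())
posterior = {w: unnorm[w] / z for w in candidates}

best = max(posterior, key=posterior.get)
print(best)  # the prior, not the audio, decides
```

The "redundant bits" here are exactly the bits the prior contributes: the audio alone carries no information to distinguish the three words, yet the posterior still concentrates on one of them.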

So in that context, what you’re saying is that we should take advantage of any redundancy we can find in an incoming signal.




The problem is that there is no reliable way to estimate the redundancy of a real signal of a specific form, or, more importantly, to discern a likely alias from a likely true signal when the signal's statistics are unknown. The statistical approach might work for simple, clean signals but makes critical mistakes on complex ones. Specifically, you have to estimate the true phase and true magnitude in a given subband...
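A quick illustration of why the samples alone can't settle alias vs. true signal (frequencies and sample rate chosen arbitrarily): an 11 Hz sine sampled at 8 Hz lands on exactly the same sample values as a 3 Hz sine, since 11 = 8 + 3.

```python
import numpy as np

fs = 8.0                 # sample rate in Hz
n = np.arange(16)        # sample indices
t = n / fs               # sample times

x_true = np.sin(2 * np.pi * 3 * t)    # 3 Hz, below Nyquist (4 Hz)
x_alias = np.sin(2 * np.pi * 11 * t)  # 11 Hz, aliases onto 3 Hz

# The two sequences are numerically identical at these sample points,
# so no statistic computed from the samples can tell them apart.
print(np.allclose(x_true, x_alias))  # True
```

Any disambiguation has to come from outside the samples: an anti-aliasing filter before sampling, or a prior over which frequencies are plausible.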


Yes, agreed. My point was simply that priors (as a form of redundancy) do not undercut the sampling theorem.


Or, in the time domain, if you talked a lot about nachos recently.



