Hacker News

Really interesting, but this definitely has little to do with 'fastest possible data transmission'. It's just a nice way to find the optimal code rate for the current channel; current systems adapt the code rate and resend all the data. I wonder whether something similar exists, or could be developed, for source encoding, i.e. progressively enhanced video streaming: if you want higher quality, you just combine the original stream with additional data.



> It's just a nice way to find the optimal code rate for the current channel; current systems adapt the code rate and resend all the data.

Actually, this is likely an improvement on rateless/fountain codes. A key feature of these codes is the ability to reconstruct an entire message block once you collect "enough" encoded symbols, regardless of order. They obviate the need for retransmission and are crucial for things like reliable wireless multicast: it would be a nightmare to keep track of each of your receivers and retransmit lost packets to each one, whereas with rateless codes you theoretically don't even need to know your receivers are there. Just keep spewing out data and eventually they will receive enough to make up for the missing blocks.
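For what it's worth, the "just keep spewing out data" property is easy to demo with a toy random linear fountain code over GF(2). This is a hypothetical sketch, not what the paper does; real LT/Raptor codes use carefully tuned degree distributions so decoding is far cheaper than the Gaussian elimination used here:

```python
import random

def encode_symbol(blocks, rng):
    """One fountain symbol: XOR of a random nonempty subset of source blocks."""
    mask = 0
    while mask == 0:
        mask = rng.getrandbits(len(blocks))
    value = 0
    for i, b in enumerate(blocks):
        if (mask >> i) & 1:
            value ^= b
    return (mask, value)

def decode(symbols, k):
    """Gaussian elimination over GF(2); returns the k source blocks,
    or None if the symbols collected so far aren't independent enough yet."""
    pivots = [None] * k
    for m, v in symbols:
        for col in range(k):
            if (m >> col) & 1:
                if pivots[col] is None:
                    pivots[col] = (m, v)
                    break
                pm, pv = pivots[col]
                m ^= pm
                v ^= pv
    if any(p is None for p in pivots):
        return None
    # Back-substitution: reduce each pivot row to a single source block.
    blocks = [0] * k
    for col in reversed(range(k)):
        m, v = pivots[col]
        for c in range(col + 1, k):
            if (m >> c) & 1:
                v ^= blocks[c]
        blocks[col] = v
    return blocks

# Receiver just collects symbols (in any order, with any losses) until
# decoding succeeds -- no feedback to the sender needed.
rng = random.Random(42)
source = [0x41, 0x42, 0x43, 0x44]
received = []
while True:
    received.append(encode_symbol(source, rng))
    decoded = decode(received, len(source))
    if decoded is not None:
        break
```

The point of the paper, as I read it, would be reducing how many symbols `received` must hold before `decode` succeeds, given a realistic noise model.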

A casual glance at the paper suggests that they've found a model of these codes that takes standard models of noisy wireless channels into account; using this model lets you optimize your code to carry less error-correcting overhead (meaning you need to collect fewer packets to successfully reconstruct the original message).

More info, and perhaps a more coherent explanation: http://en.wikipedia.org/wiki/Fountain_code


Doesn't JPEG 2000 do that for images (i.e. you get a progressively better image the more data you read)? How much harder would that be to extend to video?
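Roughly, yes. Here's a toy sketch of the layered idea (hypothetical and heavily simplified; JPEG 2000 does this per wavelet coefficient with proper entropy coding, this just shows bitplane layering of raw 8-bit samples):

```python
def split_layers(samples, bits=8, layers=4):
    """Split 8-bit samples into `layers` groups of bitplanes, MSB first."""
    step = bits // layers
    return [[(s >> (bits - step * (i + 1))) & ((1 << step) - 1) for s in samples]
            for i in range(layers)]

def reconstruct(received, bits=8, layers=4):
    """Combine however many layers were received; missing low bits stay 0."""
    step = bits // layers
    approx = [0] * len(received[0])
    for i, layer in enumerate(received):
        shift = bits - step * (i + 1)
        for j, v in enumerate(layer):
            approx[j] |= v << shift
    return approx

samples = [200, 13, 255, 96]
layers = split_layers(samples)
base = reconstruct(layers[:1])   # coarse preview from the first layer only
full = reconstruct(layers)       # all layers received: exact reconstruction
```

Each extra layer you download refines the previous approximation instead of replacing it, which is exactly the "combine the original with additional data" idea from the parent comment.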




