Hacker News
Researchers at MIT Develop The Fastest Possible Data Transmission Method (bostinno.com)
44 points by polarslice on Feb 12, 2012 | 10 comments




Thank you for this; it's quite difficult to follow the paper trail to the exact algorithm they're talking about. I had seen a talk about an extension to Reed-Solomon that promised more efficiency and was hoping this would cover that; instead it looks like a wireless-optimized fountain code.


Really interesting, but this definitely has little to do with 'fastest possible data transmission'. It's just a nice way to find the optimal code rate for the current channel. Current systems just adapt the code rate and resend all the data. I wonder if something similar exists, or could be developed, for source coding, i.e. progressively enhanced video streaming, so that if you want higher quality you would just combine the original stream with additional data.


> It's just a nice way to find the optimal code rate for the current channel. Current systems just adapt the code rate and resend all the data.

Actually, this is likely an improvement on rateless/fountain codes. A key feature of these codes is the ability to reconstruct an entire message block once you collect "enough" subcodes, regardless of order. They actually obviate the need for retransmission and are crucial for things like reliable wireless multicast: it would be a nightmare to keep track of each of your receivers and retransmit lost packets to each one, whereas with rateless codes you theoretically don't even need to know your receivers are there. Just keep spewing out data and eventually they will receive enough to make up for the missing blocks.

A casual glance at this paper suggests that they've found a model of these codes that takes into account standard models of noisy wireless channels; using this model lets you optimize your code to have less error-correcting overhead (meaning you need to collect fewer packets to successfully reconstruct the original message).

More info, and perhaps a more coherent explanation: http://en.wikipedia.org/wiki/Fountain_code
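To make the rateless idea concrete, here's a toy sketch in Python (my own illustration, not the paper's construction, and much cruder than a real fountain code): each packet is the XOR of a random subset of the source blocks, and the receiver can reconstruct the whole message once it holds any k linearly independent packets, no matter which ones were lost or in what order they arrived.

```python
import random

def encode_packet(blocks, rng):
    # XOR together a random nonempty subset of the source blocks.
    k = len(blocks)
    subset = [i for i in range(k) if rng.random() < 0.5] or [rng.randrange(k)]
    payload = 0
    for i in subset:
        payload ^= blocks[i]
    mask = sum(1 << i for i in subset)  # records which blocks were combined
    return mask, payload

def decode(packets, k):
    # Gaussian elimination over GF(2): any k independent packets suffice.
    basis = {}  # pivot bit -> (mask, payload)
    for mask, payload in packets:
        while mask:
            pivot = mask & -mask  # lowest set bit
            if pivot not in basis:
                basis[pivot] = (mask, payload)
                break
            bmask, bpay = basis[pivot]
            mask ^= bmask
            payload ^= bpay
    if len(basis) < k:
        return None  # not enough independent packets yet; keep listening
    # Back-substitute (highest pivot first) to isolate each source block.
    for pivot in sorted(basis, reverse=True):
        mask, payload = basis[pivot]
        while mask != pivot:
            extra = (mask ^ pivot) & -(mask ^ pivot)
            emask, epay = basis[extra]
            mask ^= emask
            payload ^= epay
        basis[pivot] = (mask, payload)
    return [basis[1 << i][1] for i in range(k)]

rng = random.Random(42)
original = [0xDE, 0xAD, 0xBE, 0xEF]
received = []
decoded = None
while decoded is None:
    pkt = encode_packet(original, rng)
    if rng.random() < 0.7:  # simulate ~30% packet loss; ordering never matters
        received.append(pkt)
    decoded = decode(received, len(original))
assert decoded == original
```

The sender never learns which packets got through; it just keeps emitting fresh combinations until every receiver has enough. Real fountain codes (LT, Raptor) pick the subset sizes from a tuned degree distribution so decoding is much cheaper than full Gaussian elimination.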


Doesn't JPEG 2000 do that for images (i.e. you get a progressively better image as more data is read)? How much harder would that be to extend to video?


The bottom part of this Quora answer gives an analogy for the technique in the paper in terms of sending physical money, which might help you better understand things:

http://www.quora.com/What-is-the-safest-way-to-send-someone-...


I half expected to be reading about quantum entanglement. Rats!


Does this have any bearing on AI or machine learning? According to Marcus Hutter, the best AI is the one that can compress the most (hence the Hutter Prize for compression). If compression is the same problem as data transmission, then this might be an optimal AI algorithm as well.


This isn't about compressing data, it's rather the opposite: expand the data so that even if a certain portion of it is affected by interference, the original data can still be reconstructed without errors.

To take a really simple code, let's say you have this data:

10110

Then you append a checksum (actually a parity bit here) to the data: 1+0+1+1+0 = 1 (mod 2)

101101

Now let's say there's interference and a bit flips:

100101

The receiver calculates the checksum and sees that the sum is 1+0+0+1+0 = 0, which doesn't match the transmitted parity bit of 1. Thus the receiver knows that an error happened and can request a retransmission. More advanced codes like those discussed here allow arbitrary levels of error correction, instead of only the single-error detection the parity bit gives you; but higher levels of error correction come with overhead, and apparently the innovation here is a technique for reducing this cost on wireless networks.
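The parity example above fits in a few lines of Python (just an illustration of single-error detection, nothing more):

```python
def add_parity(bits):
    # Append the sum of the data bits mod 2 as a parity bit.
    return bits + [sum(bits) % 2]

def parity_ok(received):
    # Recompute the checksum and compare with the transmitted parity bit.
    return sum(received[:-1]) % 2 == received[-1]

sent = add_parity([1, 0, 1, 1, 0])  # 10110 -> 101101
assert parity_ok(sent)

corrupted = list(sent)
corrupted[2] ^= 1                   # interference: 101101 -> 100101
assert not parity_ok(corrupted)     # checksum is 0, expected 1: error detected
```

Note that a lone parity bit can't tell you *which* bit flipped, and two flips cancel out and go unnoticed; that's exactly the gap stronger codes fill.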


> This isn't about compressing data, it's rather the opposite: expand the data so that even if a certain portion of it is affected by interference, the original data can still be reconstructed without errors.

The two problems (data compression, and noisy channel coding) are tied together quite neatly by information theory though: http://www.inference.phy.cam.ac.uk/mackay/itila/book.html



