
Boltzmann. But it doesn't really matter, it's the same thing. Yes, I know that a sequence of, say, 1000 identical bits looks like it has just 10 bits of entropy after simple RLE compression. But you must not forget the entropy that was also generated in the computation itself and subsequently dissipated into the universe.
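
To see where that figure comes from (a rough Python sketch, nothing more; the exact compressed size depends on the compressor):

  # 1000 identical bits collapse to "bit value + run length", and encoding
  # the count 1000 takes about 10 bits, which is where the number comes from.
  import math, zlib

  bits = b"\x00" * 125             # 1000 identical zero bits, packed into 125 bytes
  print(math.log2(1000))           # ~9.97, i.e. roughly 10 bits for the run length
  print(len(zlib.compress(bits)))  # a general-purpose compressor also gets it down to a dozen or so bytes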



It's not the same thing. If I define a function that always returns 1, then the Shannon entropy is extremely low even if the Boltzmann entropy of running it on a CPU is high. That the two measures can differ shows they cannot be the same thing. Related in concept, different in definition. In fact, you can even use the same formulas to calculate them - what differs is what you're calculating them on.
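
To make the distinction concrete (a minimal sketch using the standard H = -sum(p * log2(p)) over the output distribution; shannon_entropy is just an illustrative helper):

  import math

  def shannon_entropy(probs):
      # Shannon entropy of a discrete output distribution, in bits.
      return -sum(p * math.log2(p) for p in probs if p > 0)

  print(shannon_entropy([1.0]))       # constant function: one outcome, 0.0 bits
  print(shannon_entropy([0.5, 0.5]))  # fair coin: 1.0 bit

Whatever heat the CPU dissipates while evaluating the constant function, its output distribution still carries zero Shannon entropy.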


> If I define a function that always returns 1…

then its Kolmogorov complexity is also extremely low.

Look, if you have a good enough hash function, its output should be near the Shannon limit, hardly compressible, and ideally contain as much entropy as it has bits. But whether you feed in just a single bit or the entire knowledge of humanity, in the end you're going to get a fixed number of bits, with entropy near that, and if you throw any form of lossless compression at it, it will hardly compress.
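
Roughly what that looks like in practice (a quick sketch with SHA-256 and zlib; exact sizes vary a little, and zlib's header overhead typically leaves the "compressed" digest slightly larger than 32 bytes):

  import hashlib, zlib

  tiny = hashlib.sha256(b"\x01").digest()                          # tiny input
  huge = hashlib.sha256(b"the entire knowledge of humanity " * 100000).digest()

  print(len(tiny), len(huge))                                      # 32 32: fixed-size output either way
  print(len(zlib.compress(tiny)), len(zlib.compress(huge)))        # lossless compression doesn't shrink it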

But quantum mechanics tells us that information cannot be destroyed. So when you feed it more bits than it emits, it's mostly the entropy of the information you fed in that you get out of the hash. But if you feed it just a single bit, the additional entropy comes from the computational process.

I know, this is getting really philosophical, but here's something to ponder: how would you implement a hash function on a reversible computing architecture?


Most hashes are really good, but the point was: why replace the perfectly unique information in the cycle counter + timestamp combo with "most likely nearly unique" in the first place? After all, if the former isn't unique then neither are the hashes anyway.

Hashes are EXTREMELY compressible, although the known algorithms are extraordinarily slow. E.g. I can compress any SHA256 output to a matter of kilobytes, maybe less, by using the SHA256 algorithm itself as the compression scheme and iterating through seeds until I get a match. With true entropy you can't guarantee that for all inputs, regardless of how long you take.
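
That "compressor" amounts to a brute-force preimage search over seeds, with SHA256 itself as the decompressor. A toy sketch of the idea, deliberately truncated to 16 bits so the search finishes in a blink (against a full 256-bit digest the same loop would run essentially forever):

  import hashlib

  SEED_BYTES = 4
  TRUNC = 2  # pretend the hash output is only 16 bits

  def compress(target):
      # The seed is the "compressed" form; finding it is the expensive part.
      for seed in range(2 ** (8 * SEED_BYTES)):
          if hashlib.sha256(seed.to_bytes(SEED_BYTES, "big")).digest()[:TRUNC] == target:
              return seed

  def decompress(seed):
      return hashlib.sha256(seed.to_bytes(SEED_BYTES, "big")).digest()[:TRUNC]

  target = hashlib.sha256(b"hello").digest()[:TRUNC]
  assert decompress(compress(target)) == target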

Different types of "information" are at play here, along with the different types of entropy. If I have a 16-bit hash function and feed it a 64-bit value, 48 bits of computational information are lost (at minimum). What happens to the physical information used to represent the computation after you get the result is separate from what happens to the computational information.
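
The pigeonhole arithmetic behind that figure, as a quick check:

  # A 16-bit output distinguishes at most 2**16 cases, so on average
  # 2**48 distinct 64-bit inputs collide on each output value.
  print(2 ** 64 // 2 ** 16 == 2 ** 48)  # True
  print(64 - 16)                        # 48 bits of input information that can't be recovered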



