
I've had my eye on Adrian Thompson's genetically evolved FPGA circuits for a while now - they do amazing things very economically and exploit analog circuit properties. So I've always wondered: what if we made unreliable but extremely tiny, atomic-scale programmable gates, where we know the unreliability stems from quantum fluctuations, and then evolved circuits over millions of generations (A.T. ran thousands) to see if they manage to start exploiting the quantum effects?

PS: I don't have the expertise to make a strong argument here, but it seems like an intriguing idea. Anything fundamentally against it?
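For concreteness, here's a toy sketch (mine, not A.T.'s actual setup) of what that evolutionary loop could look like: a "circuit" is a bitstring wiring up NAND gates whose outputs randomly flip, standing in for quantum-noisy atomic gates, and fitness is measured with the noise left in, the way A.T. evaluated on real hardware. Everything here (the XOR task, gate count, noise rate) is an illustrative assumption:

    import random

    NOISE = 0.02        # per-gate output flip probability (stand-in for quantum noise)
    N_GATES = 16
    BITS_PER_GATE = 8   # two 4-bit wire selectors per gate
    GENOME_LEN = N_GATES * BITS_PER_GATE

    def noisy_nand(a, b):
        out = 1 - (a & b)
        return out ^ (random.random() < NOISE)  # unreliable gate

    def run_circuit(genome, x, y):
        wires = [x, y]  # wires 0 and 1 are the inputs
        for g in range(N_GATES):
            bits = genome[g * BITS_PER_GATE:(g + 1) * BITS_PER_GATE]
            i = int("".join(map(str, bits[:4])), 2) % len(wires)
            j = int("".join(map(str, bits[4:])), 2) % len(wires)
            wires.append(noisy_nand(wires[i], wires[j]))
        return wires[-1]  # last gate's output is the circuit output

    def fitness(genome, trials=25):
        cases = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # target: XOR
        hits = 0
        for _ in range(trials):
            for x, y, want in cases:
                hits += run_circuit(genome, x, y) == want
        return hits / (trials * len(cases))

    def mutate(genome, rate=0.01):
        return [b ^ (random.random() < rate) for b in genome]

    def evolve(pop_size=50, generations=300):
        pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)  # fitness is noisy, as on real hardware
            elite = pop[:pop_size // 5]
            pop = elite + [mutate(random.choice(elite))
                           for _ in range(pop_size - len(elite))]
        return pop[0]

    best = evolve()
    print("best fitness:", fitness(best, trials=200))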



'Quantum effects' are already exploited in analog components. TFETs work by modulating quantum tunneling. Zener reverse breakdown in a Zener diode is also a quantum effect. The Esaki (tunnel) diode uses quantum effects as well.

Unfortunately, the fact that a single component relies on quantum effects has nothing to do with quantum speedup in computation.

Quantum computation exploits entanglement on a larger scale than usual. The whole computational state must be one entangled quantum state; separate, local quantum effects yield just a classical computer. The quantum circuit must be carefully arranged so that the interference pattern yields the result you want.
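To make that distinction concrete, here's a small numpy sketch (my illustration, not anything from the thread): for a pure two-qubit state, entanglement shows up as a mixed reduced state on either qubit, so a product state has reduced purity exactly 1 while an entangled one has purity below 1.

    import numpy as np

    def purity_of_reduced(state):
        # state: length-4 complex vector with qubit ordering |q0 q1>
        m = state.reshape(2, 2)    # rows index q0, columns index q1
        rho_a = m @ m.conj().T     # partial trace over q1
        return np.real(np.trace(rho_a @ rho_a))

    plus = np.array([1, 1]) / np.sqrt(2)
    product = np.kron(plus, plus)               # |+>|+>, no entanglement
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

    print(purity_of_reduced(product))  # 1.0 -> separable, classically factorizable
    print(purity_of_reduced(bell))     # 0.5 -> maximally entangled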


Well, trivially that's a yes. However, the "specification level breach" property of A.T.'s work is what intrigued me - i.e. the circuits are specified to do a digital task but work in an analog manner, constructing antennas and receivers, to the extent that the same digital circuit wouldn't work when written to another FPGA.

Also, entanglement doesn't need to be perfect. You can have 1% entanglement too and have that propagate over time and operations. The question is whether an evolved circuit can figure out pathways to use that little bit of entanglement in ways our understanding doesn't quite admit... in much the same way that a digital FPGA designer wouldn't think of using NOT gates as antennas.

An unreliable circuit achieved this way would also be interesting, I think.
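To put a rough number on "1% entanglement" (my reading of it as a small amplitude on the entangled branch, which is an assumption): for a pure two-qubit state a|00> + b|01> + c|10> + d|11>, Wootters' concurrence reduces to C = 2|ad - bc|, so a small entangled component gives a small but nonzero C.

    import numpy as np

    def concurrence_pure(state):
        # Wootters concurrence for a pure two-qubit state [a, b, c, d]:
        # 0 = separable, 1 = maximally entangled (Bell state)
        a, b, c, d = state
        return 2 * abs(a * d - b * c)

    eps = 0.01  # tiny amplitude on the |11> branch
    psi = np.array([np.sqrt(1 - eps**2), 0, 0, eps])
    print(concurrence_pure(psi))   # ~0.02: small but nonzero
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    print(concurrence_pure(bell))  # 1.0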


Entanglement is very fragile. You can't keep even a "little bit" of entanglement at any normal temperature.
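A quick illustration of that fragility (my sketch; independent phase-flip noise is a crude stand-in for thermal decoherence, not a serious physical model): hit each qubit of a Bell pair with dephasing of strength p and the concurrence collapses as (1-2p)^2, reaching zero at full dephasing.

    import numpy as np

    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)

    def concurrence(rho):
        # Wootters' formula for an arbitrary two-qubit density matrix
        rho_tilde = YY @ rho.conj() @ YY
        lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
        return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

    def dephase(rho, p):
        # independent phase-flip (Z) noise with probability p on each qubit
        Z = np.diag([1, -1])
        for op in (np.kron(Z, np.eye(2)), np.kron(np.eye(2), Z)):
            rho = (1 - p) * rho + p * (op @ rho @ op)
        return rho

    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    rho = np.outer(bell, bell.conj())
    for p in (0.0, 0.1, 0.25, 0.5):
        print(p, round(concurrence(dephase(rho, p)), 3))
    # 0.0 -> 1.0, 0.1 -> 0.64, 0.25 -> 0.25, 0.5 -> 0.0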


The main problem is that errors accumulate and your circuits are extremely "anti-robust".

If you put an antenna or something next to it, for example, there was a good chance it would stop working properly.


Most likely yes. But, as I noted in another comment, that would still be interesting since we can test for entanglement on a larger scale. I'm kind of expecting error correction to "evolve" in the iterations. Robustness can come later.


I've worked a decent bit with stochastic search (not just EAs/GAs but also Metropolis-Hastings and extensions of MH), and the search process tends to favor probabilistic, inaccurate individual units but gangs many of them together for reliability.
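A back-of-envelope on the "gang unreliable units together" effect (illustrative numbers, not from any of the systems mentioned): take a unit that's right 70% of the time and majority-vote over n independent copies; reliability climbs quickly with n.

    from math import comb

    def majority_ok(p_unit, n):
        # probability that a strict majority of n independent units is correct
        return sum(comb(n, k) * p_unit**k * (1 - p_unit)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (1, 5, 15, 45):
        print(n, round(majority_ok(0.7, n), 3))
    # 1 -> 0.7, 5 -> ~0.837, 15 -> ~0.95, 45 -> ~0.998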

This is wholly different from how we view computer programs today. It may work, but you'd better have a good application in mind; otherwise you'll get laughed or shouted out of the room.



