Reversible Computing (2016) [video] (youtube.com)
40 points by jonbaer on Dec 26, 2017 | 20 comments



Billiard Ball cellular automata, proposed and studied by Edward Fredkin and Tommaso Toffoli, are one interesting type of reversible computer. The Ising spin model of ferromagnetism is another reversible cellular automaton technique.

https://en.wikipedia.org/wiki/Billiard-ball_computer

https://en.wikipedia.org/wiki/Reversible_cellular_automaton

https://en.wikipedia.org/wiki/Ising_model

If billiard balls aren't creepy enough for you, live soldier crabs of the species Mictyris guinotae can be used in place of the billiard balls.

https://www.newscientist.com/blogs/onepercent/2012/04/resear...

https://www.wired.com/2012/04/soldier-crabs/

http://www.complex-systems.com/abstracts/v20_i02_a02.html

Robust Soldier Crab Ball Gate

Yukio-Pegio Gunji, Yuta Nishiyama. Department of Earth and Planetary Sciences, Kobe University, Kobe 657-8501, Japan.

Andrew Adamatzky. Unconventional Computing Centre. University of the West of England, Bristol, United Kingdom.

Abstract

Soldier crabs Mictyris guinotae exhibit pronounced swarming behavior. Swarms of the crabs are tolerant of perturbations. In computer models and laboratory experiments we demonstrate that swarms of soldier crabs can implement logical gates when placed in a geometrically constrained environment.


I should point out that much of quantum computing is built on the foundation of reversible computing, so if you wish to get into quantum computing at any point, it is helpful to become familiar with reversible computing.


The video discusses reversible computing but doesn't answer any questions about how to accomplish it. The title is misleading.


You output noise in addition to the intended output. The 'noise' is the information discarded by the logical operation, and it must provide enough detail to recreate the input conditions from the output. So if you have an AND gate and the result is 0, you need the noise to contain enough information to indicate which of the inputs, if any, were 1.
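Here's a minimal sketch of that idea in Python (my own illustration, not from the video), using a Toffoli-style controlled-controlled-NOT as a reversible AND; the two pass-through inputs play the role of the "noise":

    def toffoli(a, b, c):
        # Reversible AND: (a, b, c) -> (a, b, c XOR (a AND b)). Self-inverse.
        return a, b, c ^ (a & b)

    a, b = 1, 0
    g1, g2, result = toffoli(a, b, 0)   # result == a AND b == 0
    # g1, g2 are the "noise": together with result they pin down the inputs.
    assert toffoli(g1, g2, result) == (a, b, 0)   # running the gate again undoes it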



Thanks for the link. After reading up on reversible computing for an hour after seeing this story, I'm still trying to figure out, in an intuitive way, why physical reversibility will not lead to energy loss. Or to put it another way, why does bit erasure end up in energy expenditure? I understand from a theoretical perspective why entropy increases, but I haven't been able to make the connection with the physical world.


Richard Feynman published a lesser-known set of lectures on the subject of computation, and he deals with this. The computational system he depicts is one in which 'computation' occurs as an intentionally set-up quantum system falls to its ground state. It might take an arbitrarily long time, but the computation will occur with zero energy loss. To get the result more quickly you can input energy and spur the process along. He also deals with the theoretical, and some of the practical, considerations surrounding reversible computing (which he stresses is critically important for zero-energy computing). I've always just considered it a matter of: if X bits of information enter a system and you are only preserving Y of them, the others have to go somewhere. Information is as physical as anything else, even though it can take many forms. We usually end up dumping it as heat, but we could have it be vibration or RF or maybe some weird geometry morphing (on the quantum scale, I mean) all the same.


> why does bit erasure end up in energy expenditure

Following this explanation from the link:

"Theoretically, room‑temperature computer memory operating at the Landauer limit could be changed at a rate of one billion bits per second with energy being converted to heat in the memory media at the rate of only 2.85 trillionths of a watt (that is, at a rate of only 2.85 pJ/s). Modern computers use millions of times as much energy."

I understand that to flip a 1 to a 0 it is necessary to dissipate that energy into heat.

Edit: But also I'm not sure how reversibility avoids that.


There's some interesting explanation about the thermodynamics of reversible cellular automata in these papers:

INVERTIBLE CELLULAR AUTOMATA: A REVIEW. Tommaso Toffoli and Norman Margolus. MIT Laboratory for Computer Science.

Because of this "information-losslessness" (ach!), ICA automatically obey the second principle of thermodynamics and, more generally, display a full-featured statistical mechanics analogous to that of Hamiltonian systems. As additional structure is introduced (for instance, particle conservation), macroscopic mechanical features such as elasticity, inertia, etc. naturally emerge out of statistics itself. In sum, once we make sure that it is conserved, information has an irresistible tendency to take on a strikingly tangible aspect (cf. [73]) to materialize itself.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.27....

When -- and how -- can a cellular automaton be rewritten as a lattice gas? Tommaso Toffoli, Silvio Capobianco, Patrizia Mentrasti.

https://www.sciencedirect.com/science/article/pii/S030439750...

Reversible computing and cellular automata - A survey. Kenichi Morita.

https://www.sciencedirect.com/science/article/pii/S030439750...

On Invertible Cellular Automata. Karel Culik II.

http://www.complex-systems.com/pdf/01-6-1.pdf

One of the trippier, far-reaching (to the end of the universe) applications of reversible computation is Tipler's "Omega Point," which he wrote about in "The Physics of Immortality".

https://www.wired.com/2002/12/holytech/

https://en.wikipedia.org/wiki/Omega_Point

https://en.wikipedia.org/wiki/Frank_J._Tipler#The_Omega_Poin...


Tipler's Omega Point prediction doesn't seem like it would be compatible with the expanding universe, would it? Eventually everything will disappear over the speed-of-light horizon, and then it can't be integrated into one mind.


It also wishfully assumes that the one mind can't think of better things to do with its infinite amount of cloud computing power than to simulate one particular stone age mythology.

Then again, maybe it's something like the 1996 LucasArts game Afterlife, where you simulate every different religion's version of heaven and hell at once.

https://en.wikipedia.org/wiki/Afterlife_(video_game)

The primary goal of the game is to provide divine and infernal services for the inhabitants of the afterlife. This afterlife caters to one particular planet, known simply as the Planet. The creatures living on the Planet are called EMBOs, or Ethically Mature Biological Organisms. When an EMBO dies, its soul travels to the afterlife where it attempts to find an appropriate "fate structure". Fate structures are places where souls are rewarded or punished, as appropriate, for the virtues or sins that they practiced while they were alive.


You might want to take a physics class on thermodynamics, because reversibility avoids energy/entropy dissipation in physical processes as well.

BTW, bit-flipping is a reversible operation!
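A trivial way to see it (my own sketch): NOT is a bijection on {0, 1}, so it loses nothing, while AND collapses three distinct input pairs onto the same output:

    from itertools import product

    print({a: 1 - a for a in (0, 1)})                  # NOT: {0: 1, 1: 0}, invertible
    print({(a, b): a & b for a, b in product((0, 1), repeat=2)})
    # AND: {(0,0): 0, (0,1): 0, (1,0): 0, (1,1): 1} -- the output 0 has three preimages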


After reading the wikipedia page on Landauer's principle, I'm left with some vague questions.

If irreversible computations have an effect on entropy in the environment, does transmitting a bit so it is not lost have an interesting effect too, even if it is done with a conventional computer and not a reversible one?

The amount of data transmitted over the internet has reached over a zettabyte per year - does thermodynamics tell us anything interesting about the consequences? Yes, it produces heat, but beyond that...


To truly benefit from this theoretical advantage, reversible computing requires reversibility at all levels, including the electronics and the program itself. Simply retransmitting a bit requires fan-out, which is itself forbidden in reversible circuits, so it doesn't buy you anything.


Why does making a computer program "reversible" lower the amount of energy needed to run it?


When you do an irreversible computation, the “destroyed” information has to go somewhere, producing an increase in entropy, which has a cost typically paid in energy. Expel less information, expend less energy.

We usually expel our waste bits as heat, so my understanding is that reversible computers could also be run without so much need for cooling.


I think I'm still missing how it saves energy. Does it require the non-discarded results be kept in memory and reused? Like caching the result of function calls? Generating the result of the operation in the first place generates heat as I understand it.


Yes, it requires space to store the state required to run the program in reverse, which effectively heats up the space surrounding the computer.
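To make the space trade-off concrete, here's a minimal sketch (my own illustration, not from the video) of Bennett's compute/copy/uncompute trick: compute into a scratch bit, copy the answer out reversibly, then run the computation backwards so the scratch returns to zero and can be reused instead of being erased as heat.

    def toffoli(a, b, c):
        # (a, b, c) -> (a, b, c XOR (a AND b)); reversible and self-inverse
        return a, b, c ^ (a & b)

    def cnot(a, b):
        # (a, b) -> (a, b XOR a); a reversible copy when b starts at 0
        return a, b ^ a

    def and_with_uncompute(a, b):
        a, b, scratch = toffoli(a, b, 0)        # 1. compute: scratch picks up a AND b
        scratch, result = cnot(scratch, 0)      # 2. copy the answer into a fresh zero bit
        a, b, scratch = toffoli(a, b, scratch)  # 3. uncompute: run step 1 in reverse
        assert scratch == 0                     # scratch is clean again, ready for reuse
        return a, b, result                     # inputs preserved, answer kept, no garbage

    print(and_with_uncompute(1, 1))   # (1, 1, 1)
    print(and_with_uncompute(1, 0))   # (1, 0, 0)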

Billiard Ball Cellular Automata conserve mass: particles are never created or destroyed, they just bounce off each other and off static "bumper" cells. Computing by "smoke and mirrors"! Or you could think of them as electrons, but BBCA don't have "wires" like other logical CA such as JVN29, just empty space, gas, and bumpers.

So you can't have a "not" gate, because it would have to make a billiard ball out of nowhere when the input is zero (no billiard balls in => a billiard ball out). And you can't destroy billiard balls either, so there has to be somewhere for them to go once they're no longer needed. Like sending them flying out into the environment -- effectively generating heat!

When you change the direction of time, they come flying back in from deep space (or bouncing back through a chaotic atmosphere, depending on the temperature of the "environment"), and run the computation in reverse.

The Fredkin Gate and the Toffoli Gate are universal reversible logic gates: you can construct any logical expression with them. The Fredkin gate also conserves the number of 1s, so it never has to create or destroy billiard balls.

https://en.wikipedia.org/wiki/Fredkin_gate

>The Fredkin gate is a circuit or device with three inputs and three outputs that transmits the first bit unchanged and swaps the last two bits if, and only if, the first bit is 1.

https://en.wikipedia.org/wiki/Toffoli_gate

>[The Toffoli gate] is also known as the "controlled-controlled-not" gate, which describes its action. It has 3-bit inputs and outputs; if the first two bits are set, it inverts the third bit, otherwise all bits stay the same.

>Universality and Toffoli gate: Any reversible gate that consumes its inputs and allows all input computations must have no more input bits than output bits, by the pigeonhole principle. For one input bit, there are two possible reversible gates. One of them is NOT. The other is the identity gate, which maps its input to the output unchanged. For two input bits, the only non-trivial gate is the controlled NOT gate, which XORs the first bit to the second bit and leaves the first bit unchanged.

https://en.wikipedia.org/wiki/Pigeonhole_principle
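A small Python sketch (my own, just to make the definitions above concrete) checking that both gates are bijections on 3-bit inputs, that each is its own inverse, and that the Fredkin gate conserves the number of 1s (i.e. billiard balls):

    from itertools import product

    def fredkin(c, x, y):
        # Controlled swap: pass c through; swap x and y iff c == 1.
        return (c, y, x) if c else (c, x, y)

    def toffoli(a, b, c):
        # Controlled-controlled-NOT: flip c iff a == b == 1.
        return a, b, c ^ (a & b)

    triples = list(product((0, 1), repeat=3))

    for gate in (fredkin, toffoli):
        assert len({gate(*t) for t in triples}) == 8        # bijective => reversible
        assert all(gate(*gate(*t)) == t for t in triples)   # each is its own inverse

    # Only the Fredkin gate is "conservative": it never creates or destroys 1s.
    assert all(sum(fredkin(*t)) == sum(t) for t in triples)

    # AND from a Toffoli gate with its target bit preset to 0 (inputs pass through):
    a, b = 1, 0
    print(toffoli(a, b, 0)[2])    # 0 == a AND b
    # NOT from a Toffoli gate with both control bits tied to 1:
    x = 0
    print(toffoli(1, 1, x)[2])    # 1 == NOT x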

Here's a cool silent video by John Walker (Fourmilab, Autodesk) explaining Billiard Ball Cellular Automata in CelLab, and a link to the documentation!

https://www.youtube.com/watch?v=esgn0Dz8SOA

>It is possible to perform all logical operations by collisions of perfect billiard balls with fixed objects and one another. A CelLab ( http://www.fourmilab.ch/cellab/ ) alternating lattice rule ( http://www.fourmilab.ch/cellab/manual/rules.html#Bbm ) models a billiard ball universe where collisions are perfect (albeit following different rules than balls on a table), and the system is reversible.

There are reversible and non-reversible variants of Billiard Ball Cellular Automata, and you can compose reversible CA to get other reversible CA. You can make a reversible gas in "random" Brownian motion by changing the directions of the particles "randomly" based on a reversible pseudo-random-number-generating cellular automaton run in parallel (a toy sketch of a partitioned reversible update of this kind follows the links at the end of this comment). For example, you could use "time tunnel" seeded with randomness as a pseudo-random number generator: https://www.youtube.com/watch?v=FMwHG_8-pOU

But once you compose it with a non-reversible rule, you can't go back! To simulate "diffusion-limited aggregation" based on the Brownian motion gas, you have to break reversibility by defining a "frozen" particle that freezes other particles that touch it. That breaks reversibility because there are several ways to get into the same frozen state, and there is no way to reconstruct which of those states preceded the frozen state, or how long it has been frozen. But it looks cool, like ice crystals nucleating around a speck of dust on a window pane!

https://en.wikipedia.org/wiki/Diffusion-limited_aggregation

https://github.com/SimHacker/CAM6/blob/master/javascript/CAM...

https://www.youtube.com/watch?v=LuvIBOTReFE

JVN29: https://en.wikipedia.org/wiki/Von_Neumann_cellular_automaton
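As a rough sketch of the alternating-lattice (partitioning) idea mentioned in the Bbm quote above (my own toy example in Python with NumPy, not CelLab's actual rule): an update that applies a permutation to every 2x2 block of an alternating partition is reversible by construction, so applying the inverse updates in reverse order recovers the exact initial state.

    import numpy as np

    def step(grid, phase, inverse=False):
        # One Margolus-style partitioned update: rotate every 2x2 block 90 degrees.
        # 'phase' selects the even or odd block offset (it alternates each step).
        # Rotating the other way undoes it exactly, so the update is a bijection.
        g = np.roll(grid, (-phase, -phase), axis=(0, 1))   # align blocks with the grid
        h, w = g.shape
        blocks = g.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
        blocks = np.rot90(blocks, k=1 if inverse else -1, axes=(2, 3))
        g = blocks.transpose(0, 2, 1, 3).reshape(h, w)
        return np.roll(g, (phase, phase), axis=(0, 1))

    rng = np.random.default_rng(0)
    grid = rng.integers(0, 2, size=(32, 32))

    state, steps = grid.copy(), 50
    for t in range(steps):                        # run forward
        state = step(state, phase=t % 2)
    assert state.sum() == grid.sum()              # block rotations conserve "mass"
    for t in reversed(range(steps)):              # undo each update, newest first
        state = step(state, phase=t % 2, inverse=True)
    assert np.array_equal(state, grid)            # back to the exact initial state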


This doesn't really seem to address my question directly?


Norman Margolus described some of his ideas about simulating physics, programming lattice gases, parameterizing reversible rules, and playing and learning with interactive CA:

>In your todo list, my favorites are the features that make the play more interactive, such as tools for constructing initial states, and anything that makes it easier for users to define their own CA rules. For interesting rules to play with, most of my favorites are in the paper Crystalline Computation, though I do have some additional favorites since that paper was written.

Crystalline Computation: http://people.csail.mit.edu/nhm/cc.pdf

>The rules that Tom and I are most attached to are the ones that are closest to physics, and those are mostly reversible lattice gases (or their partitioning equivalents). I would also say that lattice gases provide the best model both for simulation efficiency using machines with RAM, and the simplest and most flexible mental model for constructing interesting local dynamics. This may be a bit of a distraction for you, but my best understanding of how to efficiently simulate lattice gases is discussed in the paper, An Embedded DRAM Architecture for Large-Scale Spatial-Lattice Computations. I believe the technique discussed there for embedding spatial lattices into blocks of DRAM is also the best for efficient simulation on PC's.

An Embedded DRAM Architecture for Large-Scale Spatial-Lattice Computations: http://people.csail.mit.edu/nhm/isca.pdf

>My perspective is that CA's (and particularly lattice gases) are a modern and accessible version of classical field theories. Physicists have long used them as toy models to understand real physical phenomena, but their importance goes much deeper than that. It doesn't seem to be widely appreciated that most of the quantum revolution consisted of exposing the finite state nature of Nature! We looked closely at what we thought was a continuous world and we saw ... pixels! There is also some inscrutable screwiness that came along with that, but if you focus on just the finite-state part of the new physics, reversible lattice gases should be considered fundamental theoretical models. To quote from the end of my "Crystalline Computation" paper,

>"Although people have often studied CA’s as abstract mathematical systems completely divorced from nature, ultimately it is their connections to physics that make them so interesting. We can use them to try to understand our world better, to try to do computations better—or we can simply delight in the creation of our own toy universes. As we sit in front of our computer screens, watching to see what happens next, we never really know what new tricks our CA’s may come up with. It is really an exploration of new worlds—live television from other universes. Working with CA’s, anyone can experience the joy of building simple models and the thrill of discovering something new about the dynamics of information. We can all be theoretical physicists."

>So I think it's important here that people can explore new rules, not just play with initial states. New rules create new universes that no one has ever seen before. As a way to make this more accessible for casual users, we could provide some parameterized reversible rules, where many values of the parameters give interesting universes, and let them explore. That's a bit more advanced than just playing with simulations from the CAM-6 book, but people will want to do both. The actual CAM-6 machine was very much a universe-exploration device, with a specialized language that let you write rules easily, and pressing single keys during a simulation let you start and stop the dynamics and run backwards and zoom in on the middle, and shift the entire space (arrow keys) and change the middle pixel or insert a small block of randomness in the middle, and "stain" the rendering in different ways to bring out different features. The actual hardware was lookup-table based, so all rules ran at the same speed. Each experiment defined some extra keys of its own (Capital letters were reserved for the experimenter, lower case and non-letters for the system. Adding "alias X <comment>" after a function definition attached it to the X key. Numbers pressed before pressing X would be available to the function as a parameter. Pressing "m" gave a menu of the keys the experimenter defined, and their <comment>s). Tommaso thought of it as being like having various keys and stops on an organ that let you make music. A GUI-centric interface might be better today, though single keys still have the advantage that you don't have to take your eyes off the simulation. Have you ever played with the hardware CAM-6? If you can find a vintage 286 or 386 machine, you could plug one in (I still have some somewhere) and play a bit, to get a better feeling for the spirit of it. I also still have a working CAM-8, but I think 3D lattice-gas simulations are out of scope for the moment. If, however, you can implement 2D lookup-table lattice-gas computation efficiently in JavaScript, that might be a good way to emulate the CAM-6 experiments, and make general lattice gases easy and efficient to play with. We defined and implemented a bunch of the CAM-6 experiments as very simple lattice gas experiments on CAM-8.

>As you can see, my personal inclination would be to emphasize the reversible stuff in the CAM-6 book, and the Crystalline Computation paper. And tell people they are being theoretical physicists!



