I have a little git project called 'cetis' where I throw cute/dangerous ideas for a next-gen language, and the idea of eval/apply as the "Maxwell's equations of software" is in there because of a transcribed talk by Gerald Jay Sussman. He says something to this effect: "Look at what these equations say. They say that a changing magnetic field causes a changing electric field, and a changing electric field causes a changing magnetic field. Look at what we have -- eval calls apply, apply calls eval."
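To make the circularity concrete, here is a minimal sketch of that mutual recursion in Haskell (the names Expr, Value, eval', and apply' are mine, not from the talk):

    -- A toy evaluator showing the eval/apply loop Sussman describes.
    import qualified Data.Map as M

    data Expr  = Var String | Lam String Expr | App Expr Expr | Lit Int
    data Value = IntV Int | Closure String Expr Env
    type Env   = M.Map String Value

    -- eval dispatches on syntax; on an application it calls apply...
    eval' :: Env -> Expr -> Value
    eval' _   (Lit n)   = IntV n
    eval' env (Var x)   = env M.! x
    eval' env (Lam x b) = Closure x b env
    eval' env (App f a) = apply' (eval' env f) (eval' env a)

    -- ...and apply, to run a closure body, calls straight back into eval.
    apply' :: Value -> Value -> Value
    apply' (Closure x body env) arg = eval' (M.insert x arg env) body
    apply' _ _ = error "not a function"

    main :: IO ()
    main = case eval' M.empty (App (Lam "x" (Var "x")) (Lit 42)) of
      IntV n -> print n   -- prints 42
      _      -> error "unexpected result"

Each half is boring on its own; the language only comes alive in the cycle between them.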
So, cool, but why not push this metaphor further? That's the dangerous idea.
The dangerous idea is that Maxwell's equations don't use voltages. We love voltages; they're useful everywhere. In Einstein's special relativity, once you invent voltages (and their magnetic counterpart, the vector potential), all four Maxwell equations stop being separate: they all take the same form. So you tack the charge density ρ onto the current j to get the 4-current Jⁿ = (ρ c, j), and you tack the voltage V onto the vector potential a to get the 4-potential Aⁿ = (V/c, a). Then the entire set of Maxwell's equations becomes a single wave equation for your voltages, emanating from your currents:
∂ᵢ ∂ⁱ Aⁿ = μ₀ Jⁿ
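As a sanity check (a worked step I'm adding, which assumes the Lorenz gauge and the (+,-,-,-) metric convention, so ∂ᵢ∂ⁱ = (1/c²)∂ₜ² − ∇²), the n = 0 component unpacks to the familiar wave equation for the voltage, sourced by the charge density:

    \[
      \partial_i \partial^i A^0 = \mu_0 J^0
      \quad\Longrightarrow\quad
      \left( \frac{1}{c^2}\,\frac{\partial^2}{\partial t^2} - \nabla^2 \right) V
        = \mu_0 c^2 \rho = \frac{\rho}{\varepsilon_0}
    \]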
There are four of them, one for every n, but they all have the same form: if you understand one, you understand them all. A couple of conventions are hiding in there -- repeated indices like i are summed over, and the potentials are taken in the Lorenz gauge -- and if you really wanted to know the kinematics you would want to know the fields, and those are instead:
Fᵤᵥ = ∂ᵤ Aᵥ − ∂ᵥ Aᵤ
The space-to-time components of the field matrix are the electric field; the space-to-other-space components are the magnetic field. The minus sign guarantees that the space-to-same-space and time-to-time components are all zero.
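Written out (in one common sign convention, with indices 0..3 meaning t, x, y, z; other textbooks flip some signs), the field matrix is the following, and you can read those claims straight off the layout:

    F_{\mu\nu} =
    \begin{pmatrix}
       0     &  E_x/c &  E_y/c &  E_z/c \\
      -E_x/c &  0     & -B_z   &  B_y   \\
      -E_y/c &  B_z   &  0     & -B_x   \\
      -E_z/c & -B_y   &  B_x   &  0
    \end{pmatrix}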
Now, before this becomes a rant, here is the idea: is there a way in which apply can look like eval, and eval can look like apply? If Einstein is any lesson, we might have to invent a means of programming where you look directly at the invariants which you want to create.
The goal is that you should just specify an inhomogeneity -- a little piece of data and some information about how it should behave. The computer then constructs the invariants through some sort of "computational wave", and from this it can derive the actual field: the instructions needed to actually run the process on a computer.
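I don't know what that machinery would look like in earnest, but here is a deliberately tiny sketch of its shape, again in Haskell: the "inhomogeneity" is a handful of input/output pairs, and a brute-force search over a toy expression grammar stands in for the computational wave. Everything here (Ex, genExprs, synth) is illustrative, not a real system:

    -- Specify the invariant (a spec), derive the instructions (an Ex).
    data Ex = X | Const Int | Add Ex Ex | Mul Ex Ex deriving Show

    run :: Ex -> Int -> Int
    run X         v = v
    run (Const n) _ = n
    run (Add a b) v = run a v + run b v
    run (Mul a b) v = run a v * run b v

    -- All expressions up to a given depth.
    genExprs :: Int -> [Ex]
    genExprs 0 = X : map Const [0..3]
    genExprs d = let sub = genExprs (d - 1)
                 in sub ++ [op a b | op <- [Add, Mul], a <- sub, b <- sub]

    -- The "inhomogeneity": input/output pairs the program must satisfy.
    synth :: [(Int, Int)] -> Maybe Ex
    synth spec = case filter ok (genExprs 2) of
                   (e:_) -> Just e
                   []    -> Nothing
      where ok e = all (\(i, o) -> run e i == o) spec

    main :: IO ()
    main = print (synth [(0, 1), (1, 3), (2, 5)])
    -- prints the first expression it finds computing 2*x + 1

The candidate pool is already in the thousands at depth two, so a real version would need something far smarter than enumeration.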
There are some tantalizing suggestions and possibilities. The META II compiler (recently generalized in JS as the OMeta project, which is definitely on my source-code reading list) works by realizing that a bunch of different stages in compilation are all just the same idea of pattern matching.
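For a taste of that point (my own sketch, not OMeta's actual API): one combinator shape -- match a prefix of the input, return a value plus the rest, or fail -- serves equally well for lexing, parsing, and tree rewriting:

    -- A "pattern" over any stream of symbols s, yielding an a.
    newtype P s a = P { runP :: [s] -> Maybe (a, [s]) }

    tok :: Eq s => s -> P s s
    tok t = P $ \inp -> case inp of
      (x:xs) | x == t -> Just (x, xs)
      _               -> Nothing

    andThen :: P s a -> (a -> P s b) -> P s b
    andThen p f = P $ \inp -> runP p inp >>= \(a, rest) -> runP (f a) rest

    orElse :: P s a -> P s a -> P s a
    orElse p q = P $ \inp -> maybe (runP q inp) Just (runP p inp)

    -- Over characters these match source text; over tokens or AST
    -- nodes, the very same combinators express the later stages.
    digit :: P Char Int
    digit = foldr1 orElse
      [ tok c `andThen` (\_ -> yield (fromEnum c - fromEnum '0'))
      | c <- ['0'..'9'] ]
      where yield a = P $ \inp -> Just (a, inp)

    main :: IO ()
    main = print (runP digit "7+2")   -- Just (7,"+2")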
Perhaps one day we'll do what we do with photomultiplier tubes: we just visualize the voltages that we'd need to focus and accelerate electrons into the metal plates, and not even pay attention to the lower-level picture of how these fields are being generated to do the right thing.
This is brilliant and well explained -- I saw where you were headed just before you got there, and had a little clap of delight to see my suspicions were correct.
However, don't you think the reason for Maxwell's success is that the objects it describes are well behaved and can be completely described by a set of partial differential equations? Whereas the constituent atoms (hehe) of software are themselves complex objects -- more than just point particles whose state can be completely described by, say, a wave function, and whose behavior is effectively solvable by simple deduction from physical postulates and the axioms of mathematics. I believe I saw a story about Feynman deriving a bunch of partial differential equations to predict the behavior of a program, but I do not think such an approach is scalable.
But the core of your idea, I believe, is that by specifying invariants in some theory which encapsulates the possible evolution of all possible programs, you can write programs far more simply? Essentially the next level of a synthesis of programming with unit tests, data, and automatic programming. The main impediment to this (I think) is that any such general evolution function would be incomputable. And whereas with Maxwell's equations the space is limited, the boundaries defined and embodied and easily deductively traversed, for a program the search space would be massive and difficult to search effectively. In effect, I believe you have just specified an AI-complete problem.
Notice also that even a classical theory like general relativity yields few analytic solutions, and numerical simulations are expensive. Perhaps this hints at why a unified theory is so difficult to come by? While the constituents are still simple particles, the size of a theory/program which encompasses in totality all possible behaviors in nature might require a chain of reasoning so deep it lies beyond the ken of unaugmented humans. I believe a unified theory of efficiently searchable programs would be even more difficult -- such a theory would yield a unified theory of nature as a special case.
I remember seeing a program that would output Haskell functions given just their type signatures. If you think of type signatures as invariants, this is exactly what drostie described. Alas, my Google-fu was unable to find a reference to it.
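To give the flavor by hand (these are my own examples, not that program's output): for sufficiently polymorphic signatures the type pins the term down, so searching for a proof of the type is searching for the program:

    -- For these parametric types, any total implementation is forced
    -- (up to equivalence) by the signature alone.
    apply2 :: a -> (a -> b) -> b
    apply2 x f = f x

    compose :: (b -> c) -> (a -> b) -> a -> c
    compose f g x = f (g x)

    main :: IO ()
    main = print (compose (+1) (*2) 10)   -- 21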
I have considered this road; see this LtU discussion for why this approach, though wonderful, is still too limited for general programming: http://lambda-the-ultimate.org/node/1178
Essentially, as is found in many automatic program derivation attempts, recursion explodes complexity.
Nice idea, but I don't believe that eval and apply are the two sides of the same coin. In the article, apply is just a part of eval, which was originally called evalquote.