Err, his example is misleading - that's the problem with generalizing and ignoring nuances. The problem here is purely syntactic. Any functional language worth its salt supports lexical scoping, which means symbols in different scopes can shadow each other. So, I can easily say something like this:
if a > 0
  then let a = a + 1
       in # do something with a
  else # do something with a
Of course the inner a within the let clause is a "different" a, because outside of the let clause the old a remains. This doesn't have to be slow either, because if the compiler detects that the outside a isn't used anymore, it can simply increment a memory location or a register.
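For concreteness, that pseudocode comes out roughly like this in actual Haskell (my sketch, not anyone's real code; "use" is just a placeholder for whatever you do with a). One wrinkle: Haskell's let is recursive, so literally writing let a = a + 1 would loop, and you shadow with a fresh name instead:

    f :: Int -> Int
    f a =
      if a > 0
        then let a' = a + 1 in use a'   -- the "new" a; the outer one is untouched
        else use a
      where
        use = (* 10)                    -- stand-in for "do something with a"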
With a little bit of syntactic support from the language, you could easily write this in a natural, comfortable form, which will then automatically get compiled to the code above. In Haskell, it's pretty easy to do that with monads:
when (a > 0) $ do
    a <- a + 1
(I haven't touched Haskell in a while so there might be some minor syntactic issue here, but the overall point still stands). The point here is that the number of potential branches can be determined at compile time, which makes the problem syntactic.
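If you want a version that actually compiles, the usual way to get that "rebind a variable" feel is the State monad. This is my rendering of the idea, not the original code; "bump" is a made-up name:

    import Control.Monad (when)
    import Control.Monad.State (State, get, put, execState)

    -- Increment the counter only when it's positive; the "mutation" is
    -- really just threading a new value through the State monad.
    bump :: State Int ()
    bump = do
      a <- get
      when (a > 0) $
        put (a + 1)

    -- execState bump 3  ==>  4
    -- execState bump 0  ==>  0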
A bigger problem is when you're dealing with iteration and you can't know at compile time how long the loop is. Then you can't unroll it, which means compiler implementors can't use the technique above. But they can use a different technique - recursion, which can later be compiled back to iteration via tail call optimization, removing any awkwardness. Again, with do notation and a suitable monad, it's trivial in Haskell.
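As a sketch (a made-up doubleUntil, not anything from the article): a while-style loop whose trip count is only known at run time is just a tail call, and the compiler turns it back into a jump:

    -- Keep doubling until we pass a limit that's only known at run time.
    doubleUntil :: Int -> Int -> Int
    doubleUntil limit x
      | x >= limit = x
      | otherwise  = doubleUntil limit (x * 2)   -- tail call: compiles to a loop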
Of course the question is, if you go through all the trouble to build up the beautiful, elegant mechanics that make it possible to write imperative code that gets compiled to purely functional code, which in turn gets compiled back into imperative machine code for efficiency - what's the point? IMO the point is mathematical beauty - something you can't easily put a price tag on. There are some empirical benefits as well, but I'm not sure if they outweigh the troubles. Anyway, there is a great argument here, but the author only scratches the surface and doesn't dig nearly deep enough to get to the meat of the problem.
That example actually appears fairly regularly in Erlang code.
A1 = ...
A2 = ...
And it leaves a bad taste in one's mouth.
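To make the pattern concrete (my own toy example, with a hypothetical "update" step standing in for whatever each assignment really does), here's the numbered-variable style next to the composed alternative:

    -- Hypothetical step function, standing in for each A1/A2/A3 step.
    update :: Int -> Int -> Int
    update n x = x + n

    -- The numbered-variable style the commenters describe, transliterated:
    threaded :: Int -> Int
    threaded a =
      let a1 = update 1 a
          a2 = update 2 a1
          a3 = update 3 a2
      in a3

    -- The same computation as a pipeline, with no numbered intermediates:
    composed :: Int -> Int
    composed = update 3 . update 2 . update 1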
I see putting stuff in a variable as a way to 'take a breather' in the middle of code:
A = some(hairy(function(that() + does() * lots()))),
B = back(into(A) + the(A) * fray()).
I guess that's not the best example, because A is used twice and you could just pop the 'A' calculation out into its own function. But sometimes it's nice to break stuff into discrete steps that are easy to read later, rather than one huge line that does everything and then returns it.
The "A1 = ..." thing is a convention from Prolog, from which Erlang inherited its single-assignment variables (and many other relatively unusual features). People who write code with several levels of X1 = ..., X2 = ... vars are probably writing Erlang (or Prolog) with a thick imperative accent. It's like writing Python for loops using integer indexes - unless there's a specific reason to do so, it's a sign that the author is probably new to Python.
While putting intermediate steps of a calculation into variables can help clean up the code, if there's any sort of conceptual significance to that value, it's worth choosing a better name than X1. "ATC" (with "Avg. Triangle Count" as an end-of-line comment), for example, would actually mean something.
I think the cleaner way to describe that in functional terms is "giving a symbolic name to sub-expressions". There's nothing "stopping" or "starting" about the communication, you're just splitting out some part and labelling it. Obviously "A1" doesn't convey much, but something like "successFraction" does. I do this in imperative languages too.
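In Haskell that's just a where clause (a made-up "summary" function, but the shape is the point):

    -- 'successFraction' names the sub-expression instead of X1 / A1.
    summary :: Int -> Int -> String
    summary succeeded total = "success rate: " ++ show successFraction
      where
        successFraction :: Double
        successFraction = fromIntegral succeeded / fromIntegral total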
the confusion between real limitations and pure syntax makes more sense when you realise he's using erlang. erlang's syntax is not exactly wonderful. but the reason you would use erlang isn't because it's a great functional language, but because it is a great distributed/reliable language.
the fact that the author is dropping erlang because of (largely) syntax issues, apparently without understanding what they'd be losing that other languages - imperative or functional - simply don't provide, suggests their complaints shouldn't be given that much weight...
I don't think he's dropping Erlang, just deciding that allowing small amounts of imperative / impure code to appear when it greatly simplifies things is a worthwhile trade-off. (His last post, about using the Erlang process dictionary, is along the same lines.)
Also, his archives are worth a read, particularly the Purely Functional Retrogames series. Excellent stuff.
What I didn't get about the SSA example was why CPS wasn't considered. It's literally the functional language equivalent.
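Toy sketch of that correspondence (my example, not the article's): each SSA assignment becomes a function that hands its result to the rest of the program.

    -- SSA:   a1 = x + 1
    --        a2 = a1 * 2
    --        return a2
    step1 :: Int -> (Int -> r) -> r
    step1 x k = k (x + 1)

    step2 :: Int -> (Int -> r) -> r
    step2 a1 k = k (a1 * 2)

    compute :: Int -> Int
    compute x = step1 x (\a1 -> step2 a1 (\a2 -> a2))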
As for running a static variable across to unrelated functions, I don't know any language where that's considered a good idea. Well maybe Fortran, but that really doesn't help the argument.