One way to look at a trampoline is that you are implementing a small virtual machine (or, in the terminology of SICP and, I believe, the original Scheme paper, a "register machine") with two instructions: call and return. The virtual machine provides the programmer with a very controlled goto statement, known as a tail call.
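A minimal sketch of that two-instruction machine in Python (the `trampoline` name and the even/odd helpers are mine, not from the article):

```python
# A minimal trampoline: the "VM" has two instructions, call and return.
# A function either returns a final value or a thunk (a zero-argument
# callable) standing for a tail call; the loop below is the controlled goto.

def trampoline(result):
    # Assumption: thunks are the only callables a function ever returns.
    while callable(result):
        result = result()   # "call" instruction: run one more tail call
    return result           # "return" instruction: a final value arrived

def is_even(n):
    return True if n == 0 else (lambda: is_odd(n - 1))

def is_odd(n):
    return False if n == 0 else (lambda: is_even(n - 1))

# 100000 mutual tail calls would blow Python's recursion limit;
# the trampoline runs them in constant stack space.
print(trampoline(lambda: is_even(100000)))
```

The convention that "callable means keep going" is the simplest possible encoding; a real implementation would wrap thunks in a dedicated class so that functions could legitimately return callables as values.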
The more general principle is that you can introduce small virtual machines to extend a language with features not present in the original one. In an extreme case, you can extend C with a virtual machine that implements all of Common Lisp. Much less extreme is an asynchronous event loop built on generators. (A language isn't just its syntax.)
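As a sketch of the generator-based event loop idea, here is a tiny round-robin scheduler; every name in it (`scheduler`, `counter`, and the yield-to-cede-control convention) is hypothetical, not from any particular framework:

```python
from collections import deque

# A tiny cooperative scheduler: each task is a generator, and `yield`
# acts as the "cede control" instruction of this little virtual machine.

def scheduler(tasks):
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)          # run the task up to its next yield
        except StopIteration:
            continue            # task finished; drop it
        queue.append(task)      # otherwise, reschedule at the back

def counter(name, n, log):
    for i in range(n):
        log.append((name, i))
        yield                   # give the other tasks a turn

log = []
scheduler([counter("a", 2, log), counter("b", 2, log)])
print(log)  # the two tasks' steps come out interleaved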
Separate topic:
Comment on the first part of the series:
>Alternative title: I wish Python had tail-call elimination.
I really like Python and there are many features that could be added to it, but on the other hand, Python is just nice as it is, as long as what you're trying to do fits into its capabilities. Some of those potential features also threaten to go against the Zen of Python (particularly "There should be one-- and preferably only one --obvious way to do it" and "If the implementation is hard to explain, it's a bad idea.")
If you are going to require high performance, or things that collide with the Global Interpreter Lock, etc., then Python is simply not the best choice. I'd also have liked Python to have Tail Call Elimination, along with many other features I would love Python to have; for those I simply use Common Lisp, which does have them (plus a ton of other features), though it will not produce code as beginner-friendly as Python's.
In Common Lisp we can use macros to bring about a reasonable tail calling system without relying on TCO support in the implementation.
In the Lisp source file below, I provide a tlet operator whose syntax resembles labels, but which compiles to a tagbody in which the "functions" are just labels.
There is also deftail for tail calling among global functions, which is based on dynamic non-local returns and a dispatch mechanism.
Really fantastic point. I'm not a heavy user of Python. But, when I need to quickly prototype something its simplicity - i.e. one good way to do something, although this is a little exaggerated - makes the development very quick. Given my late start with Python, I work only in 3.x so I don't even need to look at search results with 2.x solutions in them: the range of solutions is narrowed further.
I'm not sure exactly why people bring up performance when tail call elimination is mentioned. Maybe because it's sometimes called "tail call optimization"? Beyond replacing a call with a jump, it's an "optimization" mainly in the sense that it lets an optimizing compiler see and optimize a loop. CPython, however, is not an optimizing compiler, so there it would barely be an optimization.
Rather, tail call elimination is a feature that allows for a very controlled goto statement, and even a computed goto. However, "continue" and "break" seem to work well enough for most cases. Perhaps the strongest objection to having TCO is that stack traces become more inscrutable.
>I'm not sure exactly why people bring up performance when tail call elimination is mentioned.
Yes, you are correct; however, I was mentioning TCO and performance as separate topics. This notwithstanding, consider that TCO allows writing recursive functions without consuming call stack space, and this might have some performance impact as well.
> There's also the case of some of those potential features threatening to go against the Zen of Python
It seems like "Zen of Python" is applied inconsistently, especially the "one way to do it" rule. Python is horrid about "one way to do it"--list comprehensions vs loops, decorator syntax vs `x = decorate(x)`, try/except/finally vs with, etc etc. In particular, if we are going to force a choice between iterative and recursive based on "one way to do it", then I would vote recursion + tco. Of course, this is a false dichotomy and we could have the best of both worlds.
> if you are going to require high-performance * , or things that collide with the Global Interpreter Lock, etc, then simply Python is not the best choice.
Unfortunately, people rarely realize their performance requirements until they're neck-deep in Python. Basically if you ever, ever, ever think you might need decent performance, there are performant languages that beat Python at its own friendliness/maintainability game (Go comes to mind).
I don't see a conflict between loops, list comprehensions and other shortcuts - use the syntax sugar where it makes the code simpler/easier to read, otherwise stick to plain loops.
My own rules for "list/dict comprehension" versus "loops" versus "generator expressions" are the following:
* use a loop where you are interested only in the side-effects or where it will make the code easier to read
* use generator expressions where you don't really need to instantiate a list, for example when you are just iterating over the result and throwing it away.
* use list or dict comprehensions where you need to build and save a list/dict and the resulting code does not resemble a lisp program (where the choice does not make the program harder to read).
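The three rules above might look like this in practice (a toy sketch; the data is made up):

```python
words = ["spam", "ham", "eggs"]

# Rule 1: plain loop when only the side effects matter.
for w in words:
    print(w)

# Rule 2: generator expression when the sequence is consumed once --
# sum() iterates and discards; no intermediate list is built.
total = sum(len(w) for w in words)

# Rule 3: comprehension when you actually keep the resulting list/dict.
lengths = {w: len(w) for w in words}

print(total)
print(lengths)
```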
I used to use "visit" flags in relational projects to incrementally scan trees. You start by putting the top node in a node inventory table, scan it, and add any new nodes (branches) to the inventory table with their "visit" flag initialized to False. (Index the visit flag.) You keep querying and scanning nodes until all the visit flags are True. Pseudo-Code:
1. Add first node to inventory, set node.visit=False.
2. Query for 1st of any nodes with visit=False. (Order doesn't really matter in the end.)
3. If none found, stop.
4. If the found node has branches, add them to inventory with visit=False.
5. Set the found node's visit=True and go to step 2.
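The steps above can be sketched in Python, with a list of dicts standing in for the inventory table (the node shape and all names here are hypothetical):

```python
# The inventory-table scan, with a list of dicts standing in for the
# table and a "branches" key playing the role of child rows.

def scan_tree(root):
    inventory = [{"node": root, "visit": False}]       # step 1
    visited = []
    while True:
        # step 2: first row whose visit flag is still False
        row = next((r for r in inventory if not r["visit"]), None)
        if row is None:                                # step 3: none left
            break
        for child in row["node"].get("branches", []):  # step 4: enqueue branches
            inventory.append({"node": child, "visit": False})
        row["visit"] = True                            # step 5: mark, repeat
        visited.append(row["node"]["name"])
    return visited

tree = {"name": "root", "branches": [
    {"name": "a", "branches": [{"name": "c"}]},
    {"name": "b"},
]}
print(scan_tree(tree))  # breadth-first order: ['root', 'a', 'b', 'c']
```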
When you say "relational projects" and "query for", you don't mean that you were actually using an RDBMS, do you? Because in that case, this pseudo-code represents a gigantic waste of time and resources.
As usual, this kind of misses the point. Trampolining for TCO/TCE is fun, but not really worth it. Rather, it's cool for concurrency and scheduling, like in Twisted. Dabeaz had a great tutorial in 2009 (using Py 2.5+ - oh my! :-)): http://www.dabeaz.com/coroutines/