It's pretty odd behavior, yeah. It's easy to work around (set the default to None and then set the actual default in the function body), but I don't know if I've _ever_ seen anyone want it to behave as it does now.
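The workaround mentioned, sketched with the standard None-sentinel idiom (a minimal example, not from the original comment):

```python
def foo(arg=None):
    # Set the real default in the body, so it is evaluated
    # fresh on every call instead of once at definition time.
    if arg is None:
        arg = []
    arg.append(1)
    return arg

print(foo())  # [1]
print(foo())  # [1] -- a fresh list each call, no shared state
```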
It does make sense to evaluate the default value for the parameter at the point of definition, for several reasons.
First of all, the default value does not have to be a literal, and if you wanted it evaluated at call time, the function would need to capture all of the default-value expressions in a closure.
Under the sane semantics implemented in every major language other than Python, if you want to capture a snapshot of something at definition time, you can put it into a variable:
Don't touch stable_snapshot and everything is cool. This is the rare case. Of course
def foo(arg=wildly_changing_variable):
    ...
means we want the current value.
I don't understand your "closure" comments; either way, things are being lexically closed. It's a question of when evaluation takes place, not in what scope or under what scoping discipline.
In fact, Python's treatment requires the implementation to keep hidden storage that is closed over; the equivalent of my stable_snapshot variable has to be maintained by the implementation. The definition-time evaluation has to stash the value somewhere, so that it can use that stashed value rather than the current one.
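In CPython you can see that hidden storage directly: the evaluated defaults are stashed on the function object's __defaults__ attribute (a small sketch):

```python
x = [1, 2]

def f(arg=x):
    return arg

# The default was evaluated at definition time and stored on the
# function object itself -- the implementation's own "stable_snapshot":
print(f.__defaults__)  # ([1, 2],)

x = "something else"
print(f())             # still [1, 2]
```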
You could easily generate that behavior manually though if the default were the other way, and it would target the common case instead of the rare case.
> the default value does not have to be a literal value
The default isn't a literal value when it is [], by the way.
If [] were a literal, then this would not be safe or correct:
def fun():
    local = []
    local.append(3)
The fact is that whenever [] is evaluated, it produces a fresh list each time. It's a constructor for an empty list, exactly like set() for an empty set. It just has slicker syntactic sugar.
Python just calls that a literal because it looks like one.
Looking is being, in Python. Except for all the pitfalls.