I thought you knew Lisp? Now you're surprised that Lisp often looks up functions via symbols, i.e. "late binding"? How can that be? That's one of the basic Lisp features.
Next you can find out what optimizing compilers do to avoid it, where possible or where wanted.
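For a concrete illustration of the late binding in question, here is a minimal sketch in Python, whose module-global function calls behave much like Lisp's symbol-indirect calls (the names `greet` and `caller` are illustrative, not from any runtime):

```python
# The call site does not capture the function it calls; it re-resolves
# the name on every call, so redefinitions are picked up immediately.

def greet():
    return "hello"

def caller():
    # "greet" is looked up in the module's globals at call time,
    # not bound when caller() was defined.
    return greet()

first = caller()   # uses the original definition

# Redefine greet after caller already exists; caller picks up the new
# definition because the lookup happens late, at each call.
def greet():
    return "bonjour"

second = caller()  # uses the new definition
```

This is exactly the indirection an optimizing compiler would like to remove when it can prove the binding will not change.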
At no point did I claim to know Lisp well. I stated my familiarity at the outset. But what you all did was claim to know a lot about every other interpreted runtime, without a grain of salt.
>Next you can find out what optimizing compilers do to avoid it, where possible or where wanted.
But compilers I am an expert in, and what you're implying is impossible: either you have dynamic linkage, which means symbol resolution is deferred until the call (and possibly guarded), or you have the equivalent of RTLD_NOW, i.e. early/eager binding. There is no "optimization" possible here, because the symbol is not Schrödinger's cat: it is either resolved statically or at runtime. Prefetching symbols with some lookahead or caching is the same thing as resolving at call time/runtime, because you still need a guard.
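The "caching still needs a guard" claim can be sketched concretely. In this toy model (all names, `BINDINGS`, `VERSION`, `make_call_site`, are illustrative, not from any particular runtime), a call site caches its last resolution but must still check a version counter on every call, because the binding can change underneath it:

```python
# Toy inline cache: name -> function, plus a version counter per name.
BINDINGS = {"f": lambda: "old"}
VERSION = {"f": 0}

def redefine(name, fn):
    """Rebind a name and bump its version, invalidating cached resolutions."""
    BINDINGS[name] = fn
    VERSION[name] += 1

def make_call_site(name):
    """A call site that caches the resolved function behind a guard."""
    cached_fn = None
    cached_ver = -1
    def call():
        nonlocal cached_fn, cached_ver
        if cached_ver != VERSION[name]:        # the unavoidable guard
            cached_fn = BINDINGS[name]          # slow path: re-resolve
            cached_ver = VERSION[name]
        return cached_fn()                      # fast path: cached call
    return call
```

The guard is cheap, but it never goes away as long as rebinding remains possible.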
What you're missing is that, unlike any other commonly used language runtime, compilation in CL is not all-or-nothing, nor is it left solely to the runtime to decide which to use. A CL program can very well have a mix of interpreted functions and compiled functions, and use late or eager binding based on that. This is mostly up to the programmer to decide, by using declarations to control how, when, and if compilation should happen.
It should also be noted that by the spec, symbols in the COMMON-LISP package (like + and such) must not be redefined. Doing so is "undefined" behavior, which lets the implementation optimize those calls out of the box.
Outside of that, you can selectively optimize definitions to empower the system to make better decisions, at the cost of runtime protection or dynamism. However, these are all compiler-specific.
To be fair, any dynamic language with a JIT will mix interpreted and compiled functions, and will probably claim it as a strength that the programmer is not left with the problem of deciding which to compile.
You are incorrect; optimizations are possible in dynamic linking by making first references go through a slow path, which then patches a code thunk to make a direct call. This is limited only by the undesirability of making either the calling object or the called object a private, writable mapping. Because we want to keep both objects immutable, the call has to go through some privately mapped jump table. That table contains a thunk that can be rewritten to do a direct call to an absolute address. If we didn't care about sharing executables between address spaces, we could patch the actual code in one object to jump directly to a resolved address in the other object. (mmap can do this with MAP_FILE plus MAP_PRIVATE: you map a file in a way that lets you change the memory, but the changes appear only in your address space, not in the file.)
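A toy model of that PLT-style lazy binding (names like `LIBRARY` and `make_plt_slot` are illustrative): each imported symbol gets a slot in a writable jump table. The slot starts out holding a resolver thunk; the first call does the symbol lookup and patches the slot to the real function, so later calls are direct, with no lookup and no guard:

```python
LIBRARY = {"sqrt4": lambda: 2.0}   # stands in for the shared object's exports
RESOLVE_COUNT = 0                  # counts slow-path resolutions

def make_plt_slot(symbol, table, index):
    """Build a resolver thunk that patches its own jump-table slot."""
    def resolver():
        global RESOLVE_COUNT
        RESOLVE_COUNT += 1
        real = LIBRARY[symbol]     # slow path: symbol lookup
        table[index] = real        # patch the slot to the resolved function
        return real()
    return resolver

JUMP_TABLE = []
JUMP_TABLE.append(make_plt_slot("sqrt4", JUMP_TABLE, 0))

def call_sqrt4():
    # The call site always jumps through the table, like a PLT entry.
    return JUMP_TABLE[0]()

call_sqrt4()   # first call: resolver runs once and patches the slot
call_sqrt4()   # subsequent calls go straight to the resolved function
```

Note the contrast with a guarded cache: after the one-time patch there is no per-call check, which is exactly why the runtime must then forbid (or separately handle) rebinding of the patched symbol.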