None. The standard interpreter automatically compiles modules to bytecode on first import and caches the result in .pyc files; you can also byte-compile specific files explicitly. I'm assuming this project is simply an exercise on the author's part.
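For instance, explicit compilation is a one-liner with the stdlib py_compile module (a minimal sketch; the throwaway module name `mod.py` is made up for illustration):

```python
import os
import py_compile
import tempfile

# Write a throwaway module and byte-compile it explicitly.
# py_compile.compile() returns the path of the generated .pyc
# (by default under __pycache__, per PEP 3147).
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "mod.py")
    with open(src, "w") as f:
        f.write("x = 1\n")
    pyc_path = py_compile.compile(src)
    print(pyc_path.endswith(".pyc"))
```

The compileall module does the same thing for whole directory trees (`python -m compileall .`).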
Well, it might be possible for this one to implement a better bytecode optimiser and better stack flow: the standard interpreter has only a fairly simple peephole optimiser, so it will e.g. store to a local only to immediately reload that same local, or repeatedly load a name from the local or global scope rather than loading it once and duplicating it on the stack.
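You can see the repeated-lookup pattern with the stdlib dis module (a sketch assuming a reasonably recent CPython; exact opcode names vary somewhat by version):

```python
import dis

# CPython's peephole optimiser does not cache repeated global lookups:
# `len` below is looked up from the global/builtin scope twice instead
# of being loaded once and duplicated on the stack.
def repeated_global():
    return len([1]) + len([2])

ops = [ins.opname for ins in dis.get_instructions(repeated_global)]
print(ops.count("LOAD_GLOBAL"))  # 2 on recent CPython
```

As the example further down shows, this isn't just laziness: Python's dynamic name binding makes caching such lookups unsafe in general.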
I don't know how much value the optimised bytecode would bring, though; likely not much outside of specific tight loops, and even then it's still interpreted CPython. For little more cost you can integrate tools from the SciPy ecosystem (e.g. Numba), which will likely have a much more significant impact.
Here's an example of why mborch's statement is true:
class A:
    def __init__(self, value=1):
        self.b = value

    def __str__(self):
        global a
        if self.b == 1:
            a = A(2)
        elif self.b == 2:
            import __builtin__, __main__
            __builtin__.a = 3
            del __main__.a
        return str(self.b)

a = A()

def foo():
    print a
    print a
    print a

foo()
This generates:
1
2
3
but it's a different 'a' in all three cases; the last isn't even from the module's namespace.
Also an opportunity for the Racket or Red crowds to try to show up Python in power of the language. Would be interesting to see the results. Haskell or OCaml, too.
Do it in CakeML to have a more assured Python compiler.