Codon: A high-performance Python-like compiler using LLVM (github.com/exaloop)
317 points by arshajii on Dec 8, 2022 | 179 comments



Anybody claiming it's almost Python is kidding themselves. This compiler needs to do static type checking. This is inherently impossible in Python. Not just because of some obscure corner cases that nobody uses. It's baked into the language itself.

Reality check: why do you think type hints and type checkers like mypy and pyright took such a long time to get going, and even they are not there yet? If it were all as easy as ignoring some obscure, rarely used features, then mypy would work with essentially no type annotations, all just automatic inference. Anybody who has tried to work with type annotations in Python knows how hard this is.

So, those guys are quite obviously overselling their product. I can understand it, academic life is hard, and once you've completed your Ph.D., what can you do. You need to stand out. But these claims don't pass the smell test, sorry.


Just tried it: it hangs on a numpy import. Hell, it hangs on a plain "import time". If I cannot reinterpret code that is already written, then I might as well just rewrite the code in a language that is better suited. Saying I can 'compile' Python code like this does indeed look like an oversell and a half.

I mean... if you had a 'compiler' for Python that looked at my code at runtime (including imports) and all my current input, and, given the data types it sees and nothing more, did type inference and recompilation down to LLVM and then to my machine code, while leaving things that already call compiled modules (like numpy) as separate subroutines and thus only operating on the 'slow' parts of my code, with the speedups therein... I'd be sold.

Of course, I think I basically just described Julia.


If I can write (mostly) python code and get it to run on my GPU they can oversell this all they want.

I have a couple of projects I’ve been wanting to tackle but put off because I like python but it wouldn’t be a very good fit due to performance reasons. Now I get a whole new herd of yaks to shave.

Plus, extensible compiler? Who doesn’t want linq in python?


Look at numba, which is not quite that, but sometimes good enough :)


What about GraalVM? Are they overselling, too?


No, their Python support repository's README explicitly says it's highly experimental.


Thanks a lot for all the comments and feedback! Wanted to add a couple points/clarifications:

- Codon is a completely standalone (from CPython) compiler that was started with the goal of statically compiling as much Python code as possible, particularly for scientific computing use cases. We're working on closing the gap further both in what we can statically compile, and by automatically falling back to CPython in cases we can't handle. Some of the examples brought up here are actually in the process of being supported via e.g. union types, which we just added (https://docs.exaloop.io/codon/general/releases).

- You can actually use any plain Python library in Codon (TensorFlow, matplotlib, etc.) — see https://docs.exaloop.io/codon/interoperability/python. The library code will run through Python though, and won't be compiled by Codon. (We are working on a Codon-native NumPy implementation with NumPy-specific compiler optimizations, and might do the same for other popular libraries.)

- We already use Codon and its compiler/DSL framework to build quite a few high-performance scientific DSLs. For example, Seq for bioinformatics (the original motivation for Codon), and others are coming out soon.

Hope you're able to give Codon a try and looking forward to further feedback and suggestions!


I can see myself using Codon for projects in the future. One thing that concerns me, though, is "automatically falling back to CPython in cases we can't handle". Sometimes, I want the compilation to fail rather than fall back, because sometimes consistent high speed is a requirement. Please keep that in mind as you design that part.

Great work so far!


I'm curious. If Codon can compile a Python script, why can it not compile a pure Python library?

What technical limitations does an import or 3rd party add that a script wouldn't have?


NumPy, PyTorch, TensorFlow, and many other widely known third-party libraries are actually native code that interacts with CPython directly.


I'm very interested to try Codon, though I note there are no Windows binaries. Do you think building from source on Windows would be straightforward?


Excellent job. I can already see this being much more flexible than Numba and much more elegant/easy to use than Cython. Please keep it coming:)


Since Codon performs static type checking ahead of time, a few of Python's dynamic features are disallowed. For example, monkey patching classes at runtime (although Codon supports a form of this at compile time) or adding objects of different types to a collection.
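For concreteness, the runtime monkey patching being disallowed looks like this in plain CPython (a minimal sketch; the class and method names are made up for illustration):

```python
# Runtime monkey patching: rebinding a method on a class object
# while the program runs. A static, ahead-of-time type checker
# cannot account for this kind of mutation.

class Greeter:
    def greet(self) -> str:
        return "hello"

g = Greeter()
assert g.greet() == "hello"

# Replace the method at runtime; existing and future instances
# immediately see the new behavior.
Greeter.greet = lambda self: "hi"
assert g.greet() == "hi"
```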

This seems like a very different language from Python if it won't let you do:

    [1, 'a string']


I welcome this change. I am willing to sacrifice a few Python features for the sake of speed.


I have been using Python for 25 years, and never needed that one.


You've never had to read arbitrary json?


I just did a quick check (apparently DuckDuckGo has a built-in JSON validation function lol love it) and `[1, "foo"]` does pass their JSON validation.

Not surprised but I did want to confirm.
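The same check is easy to reproduce with Python's standard library:

```python
import json

# A heterogeneous array is valid JSON and round-trips into a
# mixed-type Python list.
data = json.loads('[1, "foo"]')
assert data == [1, "foo"]
assert [type(x) for x in data] == [int, str]
assert json.dumps(data) == '[1, "foo"]'
```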


Am I missing something, why wouldn't a non-homogeneous array literal pass?


Because mixing types in an array is stupid enough that I thought it might just not be valid.


Mixing types in an array/list is not remotely stupid. How exactly would you suggest expressing the lack of a value in a series of numbers without None/null? There's your mixed types.

Don't let weird dogma get in the way of practicality.


That's an Optional[int]. It's not "mixed types", it's a union type.
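A sketch of that framing in plain Python type hints (the variable names are illustrative):

```python
from typing import Optional

# "A missing value in a series of numbers" expressed as
# Optional[int], i.e. the union int | None -- one declared element
# type per list, not an arbitrarily mixed list.
readings: list[Optional[int]] = [3, None, 7, None, 12]

present = [r for r in readings if r is not None]
assert sum(present) == 22
assert len(readings) - len(present) == 2  # two gaps in the series
```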


That is merely a detail of how the typing module has decided to set up convenient aliases. Treated by the language (and likely compilers) as different types.


JSON has a `null` value already, and null is its own special case in most languages where `null` is a universal type.

Otherwise you tag your data so that it's an array of objects.


You never implemented a quick polish notation calculator that uses this data structure? [1,2, '+']


I would use two stacks, one of which stores numbers and the other stores operators.
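A minimal sketch of that idea, assuming the tokens arrive as strings so the operand stack stays homogeneous (this is one reading of the suggestion, not a definitive design):

```python
# RPN evaluator with no mixed-type list: the input is a list of
# strings, and operands live on their own homogeneous int stack.

def eval_rpn(tokens: list[str]) -> int:
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    numbers: list[int] = []  # homogeneous operand stack
    for tok in tokens:
        if tok in ops:
            b, a = numbers.pop(), numbers.pop()
            numbers.append(ops[tok](a, b))
        else:
            numbers.append(int(tok))
    return numbers.pop()

assert eval_rpn(["1", "2", "+"]) == 3
assert eval_rpn(["4", "2", "-", "3", "*"]) == 6
```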


Why? Just to have no unions of types?


In 25 years you’ve never once created a list with more than one type of object in it?


I mean, it depends on what you mean by “type”. A list of some Protocol type (what other languages call “interfaces”) is still a homogeneous list even though the concrete types are heterogeneous. This is almost always what you want when you’re thinking of “a heterogeneous list”.


It's not unheard of to have unions and a couple of isinstance checks.

In fact, that's why Python even has the | operator between types nowadays.


I have also been using Python a long time and I honestly can't remember a time I used a mixed-type list.


In my code this comes up often, e.g. when I use tuples instead of namedtuple or a dataclass.


Tuples aren't lists though. Tuples are really just structs with indexes instead of names.
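A quick illustration of that view in plain Python (illustrative names):

```python
from typing import NamedTuple

# A heterogeneous tuple has one fixed type per position, which is
# what makes it struct-like; naming the fields makes that explicit.
point_raw: tuple[str, int, int] = ("origin", 0, 0)

class Point(NamedTuple):
    label: str
    x: int
    y: int

p = Point("origin", 0, 0)
assert p == point_raw       # a NamedTuple is still a tuple
assert p.x == point_raw[1]  # names instead of indexes
```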


Never have to handle/forward function calls with arbitrary arguments? What do you think *args is?


a tuple
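Easy to verify in plain Python:

```python
# *args really is collected into a tuple, even with mixed types.

def forward(*args):
    return args

collected = forward(1, "a string", None)
assert type(collected) is tuple
assert collected == (1, "a string", None)
```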


I use mixed lists all the time to store shit, but I'm also a total python newb that really doesn't know better.


Maybe a list with ThingObject or None, but my lists are usually homogeneous.


That's just a homogeneous Sequence[Optional[T]], though.


Yeah, that's a better way of putting it :) 90% of the time, I'm using something like Sequence[T], but I'm sure I've used Sequence[Optional[T]] a couple of times. I mean, I could drive donuts in the Piggly Wiggly parking lot at 3am, I just don't, and the same goes for heterogeneous lists.


And everything is a subtype of Any. The usual 'static typing' of dynlangs.


well list[int|str] is as well


I’ll second that. I’ve been doing python for a while and haven’t used the mixed type list. I’ve actively avoided doing something like that. The situation doesn’t come up often.


Same here. If I needed that I might use a tuple.


And then it crashes when you convert it to JSON.


I just use an extension wrapper around boost::property_tree for my json (and xml) needs. Way faster than the built in json support and does automagic type conversion so I don’t have to worry about it.

Now, I’m not running at Web Scale™ but pure python was slow enough to be annoying.


Huh, thinking about it I haven't in 9 years either


With type hints you would model this as a Union type, i.e., Union[int, str]. This is perfectly legal with mypy and other Python type checkers.


Not "Python", but if you are doing that in Python, then you are doing it wrong.


I googled 'codon and django' and unsurprisingly found a lot of bioinformatics stuff. I tried adding 'language' and 'compiler' to no avail. The only query that got results was 'codon python compiler'. Overall I think it's a name that clashes with a lot of DNA/RNA research.

While searching I found a paper from 2021 about Codon [1]. The author is not in the About page of Exaloop [2] but the supervisor of that thesis is there. From the "Future Work" section:

> we plan to add union types and inheritance. On the IR side [Intermediate Representation], we hope to develop additional builtin transformations and analyses, all the while expanding the reach of existing passes. As far as library support, we plan to port existing high-performance Python libraries like NumPy [...] to Codon; this will allow Codon to become a drop-in replacement for Python in many domains.

Maybe they already did.

[1] Codon: A Framework for Pythonic Domain-Specific Languages by Gabriel L. Ramirez https://dspace.mit.edu/bitstream/handle/1721.1/139336/Ramire...

[2] https://exaloop.io/about.html


Having worked in the DNA analysis space and admittedly haven't read the article... my first thought was that Codon was some python library for DNA stuffs that gets compiled via LLVM for performance.


Very interesting--Codon can generate standalone executables, object files, and LLVM IR [1]. It has strong typing for function arguments and return values [2]. The syntax looks more compact than Cython's.

Looking forward to giving Codon a try!

[1]: https://docs.exaloop.io/codon/general/intro

[2]: https://docs.exaloop.io/codon/language/functions


Unfortunately, stuff like this never makes it upstream, and I'm afraid to ask why. We've had PyPy for years, but it never got merged with Python. That's why there are still minor incompatibilities between PyPy and "The Python", so it's not as useful as it might have been if it had been merged into CPython at some point.


I got a massive jump in performance when moving from Python 3.8 to 3.10 (over some function call optimizations, I think, based on the project). And 3.11 got even better (up to 50% faster in special cases, and 10~15% on average) with respect to 3.10. Python 3.12 is already getting even more speedups, and there's a lot more down the road[0].

But Python core developers put a high value on not breaking anyone's code (Python 3 itself was a huge trip in that respect, and they're not making that mistake again), which is why things may seem slow on their end. But work is being done, and the results are there if you benchmark things.

[0] See https://github.com/faster-cpython/ideas/blob/main/FasterCPyt... however that's over a year old already and I'm sure I've read/heard more specifics


Python 3.11 broke a lot of stuff in Debian, as did earlier versions of Python 3:

https://bugs.debian.org/cgi-bin/pkgreport.cgi?tag=python3.9&...


I got a performance regression going from 3.10 to 3.11:

https://ltworf.github.io/typedload/performance.html


While it was still on RC builds I tested it on a few projects I usually run and I got between 0 and 15% faster wrt 3.10, without any regression. But sure, every project is different and it looks like you've bumped into some with your project.

If it's not too big, you might as well just leave things as they are and wait for 3.12 to see what changes then. Apparently the changes will be bigger, so it could be much better than 3.11 (without being much worse, hopefully!).


Same. For my work project, 3.11 was performance-equivalent to PyPy 3.9.


Interesting. What is the status of the GIL these days?


I think 3.12 will get subinterpreters, which will allow you to have multiple interpreters, each on its own thread, sharing the process memory space. So it's kind of a stopgap between Python's current situation and real multithreading. I'm not sure what to think of it so far, so until there's some beta or RC build to test, I don't think I'll be able to form an opinion.


Exactly the same as it ever was, but if you're doing cpu-heavy stuff on python objects you're doing it wrong because they'll never be Fast.


There is a branch of the 3.9 release that removed the GIL created by Sam Gross that you can read about here: https://gavincyi.github.io/2022-10-03-does-sam-gross-nogil-c...

There is some work to bring it up to 3.12 and some resistance to merge it into 3.X because of the impact on extension modules (they all have to be recompiled and in some cases changed a bit).

If you are interested in it, reach out to Sam. He has done a pretty impressive piece of engineering work.


It's not like you can just "merge" PyPy into Python; they are totally different implementations. CPython is written in C, and PyPy is written in RPython, a subset of the Python language that gets compiled into an interpreter with JIT support. You can actually write an interpreter for any language using RPython and their toolset, for example Ruby: https://github.com/topazproject/topaz


Moreover, a wonderful aspect of standard CPython is that you can compile it from source on a huge range of architectures in less than 5 minutes. Building PyPy from source is more difficult, and PyPy is significantly less portable (e.g., there is no viable WebAssembly version of PyPy).


No need to be afraid. The Python C extension API makes it very hard to make a JIT work well because of how it is implemented. C extensions are also part of why Python is so popular in the first place. If everybody wrote pure Python (like they write pure JavaScript), then the reference implementation would probably look like Pypy.


The power of Python is in its ecosystem of libraries, not only the Python syntax.

Without the ecosystem of libraries, I am afraid the use cases for Codon will be very, very limited. Python developers (just like Node developers) are used to thinking: need to do X? Let's see if I can pip install a library that does it.

Ultimately, Python is like super-flexible glue between an ecosystem of libraries that lets anyone build and prototype high-quality software very quickly.


If Codon becomes similar enough to Python, it will be trivial to port Python libs to it, thus opening Codon to the vast Python ecosystem.


What's the story with Python libraries that have c-modules/binary parts to it? Would those work? If not, then the previous comment stands, IMHO.


The only trivial thing in software is hello world, anything more complicated or useful for end users is usually far from being trivial, in my experience.


Perhaps the way forward for Codon in terms of wider adoption would be maintaining a list of libraries that are fully Codon-compatible, thereby encouraging devs to aim for cross-compatibility (which would likely exclude usage of a lot of the slower Python features, in turn making those libraries faster for both Codon and regular Python users).


Just out of curiosity: why is it possible to compile Common Lisp code (or Scheme, or Clojure) to high-performance native or JIT-compiled code, but not Python? It is said that "Python is too dynamic", but isn't everything in Lisp dynamic, too?

And none of these languages is less powerful than Python, or lacks Unicode support, or whatever, so that can't be the reason.


It is possible to JIT-compile Python just fine. There are projects like PyPy that have been doing this for a long time [1]. The reason these alternative projects never take off is that many of Python's most used libraries are written against CPython's C API. This API is giant and exposes all of the nitty-gritty implementation details of the CPython interpreter. As a result, changing anything significant about the implementation of the interpreter means those libraries no longer work. In order not to break compatibility with the enormous number of packages, the internals of the CPython interpreter are mostly locked in at this point, with little wiggle room for large performance improvements.

The only real way out is to make Python 4 - but given the immense pain of the Python 2 -> 3 transition that seems unlikely.

[1] https://www.pypy.org


To be fair the 2 -> 3 upgrade path was terrible. And there wasn't a killer feature in 3 which was terrible. And the tooling around the upgrade was terrible. Basically the python devs completely botched it -- which was terrible.

So one thing of golang that is nice is that go 1.19 compiler will compile go 1.1 just fine, and people can iterate from 1.1 to 1.19 in their own time -- or not if they choose not to. It would not be that hard for golang v2 to continue to allow similar compilation of old code.


This hypothetical 3 -> 4 upgrade would run into a lot of the same issues.

Presumably the killer feature here is that it would be faster. Or at least have the potential to be faster because of fewer constraints on the C API. But for a lot of Python applications, speed isn't all that important. After all, if it really needs to be fast, you probably aren't doing it in Python (unless Python is just a thin wrapper around a lot of C code, like NumPy).

And for changes to the C API, it would probably be much, much harder, maybe even impossible, to automate migrating libraries to the new API. The only way I could see this working well is if you had some kind of shim library between the old API and the new API, but that adds constraints on how much you can change the API, and might add additional overhead.


The HPy project [0] looks like a promising way out of this.

[0] https://hpyproject.org/


It’s because Python object attributes can change any time, as they are accessed dynamically. Nothing can be inlined easily. The object structure is pointer heavy.

Here is some old 2014 post:

http://jakevdp.github.io/blog/2014/05/09/why-python-is-slow/

As other commenters pointed out, some of these Python features, which go unused 99.99% of the time, could be sacrificed for additional speedup by breaking backwards compatibility.


That common excuse doesn't fly in the face of Smalltalk, Lisp, Scheme, SELF, Prolog, JavaScript, Lua.

It is more a matter of wanting to have a JIT in the box or not.


The same applies to Common Lisp. Maybe it's because type deduction is more difficult in Python than in CL?


The demand for compiled Python hasn't been as high as the demand for other languages, so the number of people who have worked on it is much smaller than the number who have built JITs for ECMAScript and others. Python has long been fast enough for many things, and where it isn't, it's easy to call C code from CPython.

Python does have lesser-used dynamic capabilities that probably don't exist in Common Lisp. Those capabilities make it difficult to optimize arbitrary valid Python code, but most people who need a Python compiler would be happy to make adjustments.


Having worked on this for a while, one way that might be helpful to understand this is that Python jits (such as the one I work on, Pyston) do in fact make Python code much faster, but the fraction of the time that is spent in "Python code" is only about 20% to begin with, with the other 80% being spent in the language runtime.

For example if you write `l.sort()` where l is a list, we can make it very fast to figure out that you are calling the list_sort() C function. Unfortunately that function is quite slow because every comparison uses a dynamic multiple-dispatch resolution mechanism in order to implement Python semantics.


If JavaScript can be compiled effectively, and V8 strongly suggests it can, it's hard to see why python couldn't be.


Who would create a language that only has ASCII strings in this day and age?


Here is the quote of the thing you are referring to:

> Codon currently uses ASCII strings unlike Python's unicode strings.

Note the word "currently." Implementing this while also tracking the constantly evolving Python language through its various versions is a lot of work. They are apparently prioritizing other things over this particular aspect.
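For reference, this is what full Unicode strings give CPython today, and what an ASCII-only string type would have to approximate:

```python
# Character-level length and indexing independent of the byte
# encoding -- the core of what Unicode string support provides.

s = "naïve 🙂"
assert len(s) == 7                    # 7 code points
assert len(s.encode("utf-8")) == 11   # but 11 UTF-8 bytes
assert s[2] == "ï"                    # indexing by code point
```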


Someone who's just trying to get something up and running. Unicode is complicated.


After reading: it's not a Python compiler but a compiled language based on Python syntax.


One of the FAQ entries refers in passing to integers being 64-bit instead of arbitrary precision. That's a bit more fundamental than "some CPython modules don't work". I haven't found a language reference yet.

edit: it's statically typed ahead of time - that feels like something that needs a detailed description of what it's doing, given the "like Python" baseline.


I wonder if the differences will cause any real compatibility issues with existing Python libraries?


It would cause major issues to libraries for mathematics (such as sympy or sagemath) that assume integers are arbitrary precision. Large integers are common in number theory and cryptography, where people also care very much about performance.
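A small illustration of the kind of arithmetic meant here: modular exponentiation on integers far wider than 64 bits, which CPython handles natively (the specific values are arbitrary):

```python
# Arbitrary-precision integers in plain Python: all of these values
# overflow a 64-bit int, yet the arithmetic is exact.

p = 2**127 - 1      # a Mersenne prime, 127 bits wide
g = 3
x = 2**80 + 17      # exponent, also wider than 64 bits
y = pow(g, x, p)    # fast modular exponentiation, exact result

assert p.bit_length() == 127
assert 0 < y < p
```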


Can we change the title to say Python-like or something similar? Based on the comments so far, it seems that the detail that it compiles its own Python-inspired language, not actual Python, is lost on many.

EDIT: A list of differences here: https://docs.exaloop.io/codon/general/differences

The summary minimizes with "many Python programs will work with few if any modifications", but it actually looks like a substantially different language.


From the README, for those who didn't scroll that far:

"While Codon supports nearly all of Python's syntax, it is not a drop-in replacement, and large codebases might require modifications to be run through the Codon compiler. For example, some of Python's modules are not yet implemented within Codon, and a few of Python's dynamic features are disallowed. The Codon compiler produces detailed error messages to help identify and resolve any incompatibilities."


That list actually seems genuinely pretty minimal. Reading your comment I was expecting a long list of major changes, but it's only 3 things, most of which seem relatively unlikely to impact most programs, with the possible exception of dictionary sort order.


Really? The lack of Unicode strings immediately disqualifies this for most things I've worked on the last few years. No emojis, no diacritics, no non-US users. Ok for internal tools for American companies with an older workforce, I guess, but I wouldn't use this for anything that takes input from the general public (e.g., customers).


I suppose I'm thinking more about data science / engineering oriented things, since that's what I tend to use Python for.


The list of small things is about data structures. However, the language is a lot less dynamic than Python:

> Since Codon performs static type checking ahead of time, a few of Python's dynamic features are disallowed. For example, monkey patching classes at runtime (although Codon supports a form of this at compile time) or adding objects of different types to a collection.

While monkey patching is maybe not done so much in Python (outside of unit testing), adding objects of different types to a collection is definitely a common operation!


From what I understand, this will be possible in the future with implicit union types. Wouldn't work with _arbitrary_ types, but with a set of types that can be computed at compile time (my guess is that this is possible in most real-world cases).


That list is minimal. Elsewhere there are the no-heterogeneous-lists and no-big-integers restrictions, and it looks like import doesn't work either. Presumably no heterogeneous dictionaries either, so not only unordered but also simply typed.


Read the entire page. Those three bullet points aren't the extent of it. This is like the difference between Ruby and Crystal; the same syntax, similar culture, but they're fundamentally different languages.


> …with the possible exception of dictionary sort order.

Which is an implementation detail that is not guaranteed by the language standard.

Porting code from 2 to 3 made me have to use a sorted dict because the code relied on the insertion order (metaclass magic operating on the class dict) but when they revamped the dictionary implementation I could do away with that fix. Until they come up with a more efficient dict and break everyone’s code again.

Does make me want to dust off the old spark parser and see what this can do with it.


For py2many, there is an informal specification here:

https://github.com/py2many/py2many/blob/main/doc/langspec.md

Would be great if all the authors of "Python-like" languages got together and came up with a couple of specs.

I say a couple, because there are ones that support the python runtime (such as cython) and the ones which don't (like py2many).


Python is a language with several implementations (PyPy, CPython, Jython). Not all Python programs work in all of those implementations.

So, I think this might qualify as much as a python implementation as PyPy.


I don't think Python without heterogeneous lists and dictionaries is really Python?


I can't think of a time I ever needed to do such a thing, and I've written many thousands of lines of Python. Python supports OOP, so classes will get you quite far in this regard.


Wouldn't that exclude (most) nested lists?


What do you mean by nested lists? A list of lists is still a list with a single element type.


I was thinking more a tree-like structure, with lists or values.


You convinced me to change the description of a "Python-like language" I'm working on to say "Python-like" instead of "Python": https://www.npmjs.com/package/pylang


The main challenge with those three issues, to me at least, is that it cannot even tell you, "yep, you need minor changes for Codon to work." It'll just work until it doesn't at runtime because your code violates one of those three assumptions. So to migrate, we would have to go and figure out all the possible cases where those things matter and guard against them. Not really unpalatable, just not so much a nice migration path.

Also, I'm not actually sure what they mean by internal dict sorting. Do they mean insertion order stability?
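For reference, insertion-order stability has been a language guarantee for dict since Python 3.7, and it's observable directly:

```python
# dict preserves insertion order as a language guarantee (3.7+),
# the behavior a reimplementation would need to match.

d = {}
for key in ["b", "a", "c"]:
    d[key] = 1
assert list(d) == ["b", "a", "c"]   # insertion order, not sorted

d.pop("a")
d["a"] = 1
assert list(d) == ["b", "c", "a"]   # re-insertion moves it to the end
```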


Ok, we've changed it to "Python-like" in the title above.


> Since Codon performs static type checking ahead of time, a few of Python's dynamic features are disallowed. For example, monkey patching classes at runtime (although Codon supports a form of this at compile time) or adding objects of different types to a collection.

This may or may not be the biggest concern.


Pythonic?


What's the difference from mypyc [0]? It also compiles Python to native code.

[0]: https://github.com/mypyc/mypyc


The last commit to its repo was 2 years ago.


> The mypyc implementation and documentation live in the mypyc subdirectory of the mypy repository.
> Since mypyc uses mypy for type checking, it's convenient to use a single repository for both.
> Note that the mypyc issue tracker lives in this repository! Please don't file mypyc issues in the mypy issue tracker.

See https://github.com/mypyc/mypyc/blob/master/show_me_the_code....


What is the difference between Codon and PyPy, other than Codon not being targeted as a drop-in replacement for CPython?



Their benchmarks (https://exaloop.io/benchmarks) show that Codon is much, much faster than pypy. I also just tried some microbenchmarks with their fib example (iterated many times with higher parameters) and got similar results. It's unfortunate for now that this isn't open source, but it's really valuable to demonstrate to us Python lovers what's possible using LLVM!


Their benchmarks are not to be trusted (after reading the source).

- They cheat: they rewrite code to use Codon-specific features to "win" (i.e., parallelization and GPU optimizations).

- They don't warm up. They simply run the competition directly rather than allowing any sort of warmup. (In other words, they are measuring cold boot and startup time.)

Now, if they want to argue about startup time or whatever mattering for performance then fine. However, the representation of "20x faster!" is simply deception.

TBF, they are upfront about cheating

> Further, some of the benchmarks are identical in both Python and Codon, some are changed slightly to work with Codon's type system, and some use Codon-specific features like parallelism or GPU.


Thanks for doing the work to point all of this out. "Benchmarketing".


Do people not know about Numba, which, unlike this project, is FOSS and integrates with NumPy?


And Numba is actually CPython, unlike this which is just "Python-like".

There's also Nuitka as yet another alternative.

Or if you're going to use a "Python-like" compiled language, consider using Nim.


Doesn't it require you to annotate every function if you want to compile to a binary? That makes it more like Cython than this. https://numba.readthedocs.io/en/stable/user/pycc.html#overvi...


Numba doesn't market itself very well.


For context, Numba also uses LLVM, and it works with Python code via decorators.


Does it make sense to use Numba with Django / Flask / FastApi?


If you're trying to do intense numerical computations on the backend...


This gives the same feeling as AssemblyScript: it says it is one language, up to the point it isn't. That may make it easier for some people, but feels so uncertain. Both have a very slim set of articles in place of a proper manual; they lean on their parent languages.


> Codon is a high-performance Python compiler that compiles Python code to native machine code without any runtime overhead

Further down:

> Codon is a Python-compatible language, and many Python programs will work with few if any modifications:


> "...typically on par with (and sometimes better than) that of C/C++"

What makes it faster than C++?

I see this in the documentation but I am not sure it helps me (not an expert):

> C++? Codon often generates the same code as an equivalent C or C++ program. Codon can sometimes generate better code than C/C++ compilers for a variety of reasons, such as better container implementations, the fact that Codon does not use object files and inlines all library code, or Codon-specific compiler optimizations that are not performed with C or C++.


JIT.


JIT can be faster than a static compiler if it takes advantage of runtime feedback, but that's not the case here:

> While Codon does offer a JIT decorator similar to Numba's, Codon is in general an ahead-of-time compiler that compiles end-to-end programs to native code.


It can be, but if you're using PGO, the performance gains from a JIT are a lot less significant, and you avoid the runtime compilation overhead that comes with a JIT.


Ugly, confusing naming choices: ``@par`` instead of ``@parallel``.


How do you feel about `def` instead of `define`?


Abbreviations are good to the extent that they're commonly used. It's a bit of a chicken-and-egg problem. At the time Guido picked `def`, I might have argued with him. Now, it's the standard.

If I were writing my own language, I might choose `let` instead of `def`. For example, `let x = 1`.


and abs instead of absolute


would love to see actual benchmarks


Don't have anything significant, but giving this a quick test with some of my Advent of Code solutions, I found it to be quite a bit slower:

   time python day_2.py             

   ________________________________________________________
   Executed in   57.25 millis    fish           external
      usr time   25.02 millis   52.00 micros   24.97 millis
      sys time   25.01 millis  601.00 micros   24.41 millis


   time codon run -release day_2.py 

   ________________________________________________________
   Executed in  955.58 millis    fish           external
      usr time  923.39 millis   62.00 micros  923.33 millis
      sys time   31.76 millis  685.00 micros   31.07 millis


   time codon run -release day_8.py 

   ________________________________________________________
   Executed in  854.23 millis    fish           external
      usr time  819.11 millis   78.00 micros  819.03 millis
      sys time   34.67 millis  712.00 micros   33.96 millis

   time python day_8.py             

   ________________________________________________________
   Executed in   55.30 millis    fish           external
      usr time   22.59 millis   54.00 micros   22.54 millis
      sys time   25.86 millis  642.00 micros   25.22 millis
It wasn't a ton of work to get running, but I had to comment out some stuff that isn't available. Some notable pain points: I couldn't import code from another file in the same directory, and I couldn't do zip(*my_list) because argument unpacking with the asterisk isn't supported there. I would consider revisiting it if I needed a single-file program that has to work on someone else's machine, assuming compilation works as easily as in the examples.
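One way to separate interpreter/compiler startup cost from the actual runtime is to time the workload inside the script itself, so the same measurement works under both CPython and Codon. A sketch (`workload` is a hypothetical stand-in for a puzzle solution):

```python
import time

def workload(n: int) -> int:
    # Hypothetical stand-in for an Advent of Code solution:
    # sum of squares below n.
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
result = workload(1_000_000)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"workload: {elapsed_ms:.2f} ms, result={result}")
```

This excludes interpreter startup and (for `codon run`) compilation time from the number you compare.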


I would guess the bulk of the time is being spent in compilation. You might try "codon build -release day_2.py" then "time ./day_2" to measure just runtime.


Good catch! Here's updated runs:

   time python day_2.py

   ________________________________________________________
   Executed in   51.26 millis    fish           external
      usr time   23.38 millis   48.00 micros   23.33 millis
      sys time   21.88 millis  617.00 micros   21.26 millis

   time day_2

   ________________________________________________________
   Executed in  227.06 millis    fish           external
      usr time    8.17 millis   70.00 micros    8.10 millis
      sys time    6.69 millis  708.00 micros    5.98 millis

   time python day_8.py

   ________________________________________________________
   Executed in   53.63 millis    fish           external
      usr time   22.11 millis   51.00 micros   22.06 millis
      sys time   24.63 millis  714.00 micros   23.91 millis

   time day_8

   ________________________________________________________
   Executed in  115.89 millis    fish           external
      usr time    5.83 millis   92.00 micros    5.74 millis
      sys time    4.59 millis  856.00 micros    3.73 millis
Now Codon is much faster than Python.


It looks like you are compiling and running. Try compiling to an executable and then benchmark running that


We do have a benchmark suite at https://github.com/exaloop/codon/tree/develop/bench and results on a couple different architectures at https://exaloop.io/benchmarks


Why do the C++ implementations perform so poorly?


My guess for word_count and faq is that the C++ implementation uses std::unordered_map, which famously has quite poor performance. [0]

[0] https://martin.ankerl.com/2019/04/01/hashmap-benchmarks-01-o...
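For reference, the hot loop of a word-count benchmark is essentially repeated hash-table updates; in Python it might look like this (a sketch, not the actual benchmark code from the suite):

```python
from collections import Counter

def word_count(text: str) -> Counter:
    # The hash table (a dict/Counter here, std::unordered_map in the
    # C++ version) dominates the runtime of this kind of benchmark,
    # which is why the choice of map implementation matters so much.
    return Counter(text.split())

print(word_count("the quick brown fox jumps over the lazy dog the"))
```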


Looks like what Taichi (https://github.com/taichi-dev/taichi) is doing. Does this support CUDA yet?

Additionally, how does it compare to Numba, the JIT compiler for Python?

Looks like Python's performance in the ML and AI field will only get stronger.


Things like this are always going to be another point of failure when trying to get something to work. Now when your python code crashes, there's a new reason something could be going wrong, in addition to the countless other reasons.


The number one question for me would be, is it interoperable with existing Python and libraries?


How does this relate to https://cython.org/ ?

Would it be possible to write performance-sensitive parts of a Python system in Codon and link that to a CPython or PyPy runtime that supports more dynamic features?


Cython takes python-ish code and compiles it to C for use as CPython C extensions. This compiles directly to machine code without the need for CPython, as far as I can tell.


Seems bytearray is not implemented; can't test further :(


Free for non-production use... it's a "no" for me.


It's also confusing... I mean, what does "non-production use" mean anyway? Does it mean "non-commercial use"? Or "testing/debug/staging environment"? Or "does not produce any valuable output"...?


According to this article https://perens.com/2017/02/14/bsl-1-1/ about the Business Source License, the intention conveyed by the word "production" for that license is use "in any capacity that is meant to make money".


> "in any capacity that is meant to make money"

I'll note that that's the definition of "commercial".


What I want to know is: "can I add Codon to a site like https://cocalc.com that I host as long as users of Codon explicitly agree to only use it in a way that is compatible with the license?" I have absolutely no idea if that would be allowed by the rules or not.


I think that's supposed to apply to you, not your users. My understanding is that you pay if you make money using it. I can definitely see how you can easily interpret it in many other ways though.


It means a company can't use this program without paying the developer.


> It means if a company

”company” is not a particularly well-defined term.


But companies can still experiment with it without paying. I.e., you can integrate it and run benchmarks to see if it would actually help your pain points.


Surprised this sentiment is so common. It's like, do you want open source devs to work for free forever? Even Redis had to pivot to a business license.

I'm not sure what the terms are in this particular case, but in general, wanting someone to pay if they're deploying it to lots of customers seems reasonable.


I understand both perspectives.

On the one hand, in the same way any sort of fee (even $1) is a big impediment to adoption compared to free, so is the idea that "if I try this out and like it, I'll be stuck with whatever costs they impose now and in the future". In a fast-moving world, open source is just easier: if you change your mind, you haven't wasted money. In addition, a project can only afford so many costs, and it is hard to know in advance where those will pop up, so you avoid them when you can.

That said, developers need to eat, and it is easy to appreciate the fact that they are letting you see the source, play with it, but pay if you want to use. I also fully support their right to license their software as they see fit.


> Even Redis had to pivot to a business license.

Wrong, Redis has a BSD-3 license: https://github.com/redis/redis

Optional add-ons to Redis may have non-free licenses.


A lot of developers don't have/control budgets and might not have the clout required to get a tool like this approved.

I agree that the devs of these tools need to be paid, but that particular avenue presents some roadblocks.


I can swap one load balancer or cache for another. However, if my programming language has unfavourable terms I will have to rewrite all my code. Oh, and the knowledge I gained will be non transferable to another job or project because it'll always be a niche language. Better to spend the time learning how to get the same performance out of Numba or whatever other alternatives people mentioned here.

By the way, Redis is still BSD licensed.


There are a lot of products released under GPL, for example, that make money for their authors. It's just that they don't make money with licensing fees.


Woah - also, their license automatically becomes open source (Apache) three years from now.


The problem with these is always security updates. If you want to run on the old stuff you have to make your own security patches. Of course maybe that is exactly something that it makes sense to pay for.


I have not dug into the project yet, but if it delivers on the features it mentions, it should be a game changer for a lot of companies heavily invested in Python.

Paying to use it seems fair.


numba is already a thing though


I want the opposite. Is there a project that compiles to Python (either source or bytecode)?

Sort of a GraalVM for Python?


WebAssembly? Try compiling something to WebAssembly and running it in Python?


I hadn't thought of that. It's a good idea. I know how to compile to WebAssembly, but how do I run it in Python?

A quick search leads to pywasm and it is even native python. But is it usable? Any other options?


Wasmer Python can be used if you want to run Wasm in Python. Hope this helps!

https://github.com/wasmerio/wasmer-python


I don't remember the name, but there is a Lisp that compiles to Python.



Does this support any form of FFI? It'd be nice if users could shim in lightweight APIs that clone libraries like numpy/pytorch -- it'd immediately make some machine learning super portable!


Please note, Codon is not open source. It uses the Business Source License.


Why not contribute to PyPy? And why not MyPyC?


PyPy uses its own JIT; this project does AOT compilation with LLVM. They're not compatible.

MyPyC requires type annotations to work. This does not.
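For example, mypyc leans on annotations like the ones below to generate efficient code, whereas Codon infers the same static types from usage. A plain-Python sketch:

```python
def dot(xs: list[float], ys: list[float]) -> float:
    # mypyc uses these annotations to unbox the floats;
    # Codon would infer equivalent static types without them.
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

print(dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
```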


Any benchmark comparisons to mypyc yet?


Can it run PyTorch, TF etc?


>"Typical speedups over Python are on the order of 10-100x or more, on a single thread. Codon's performance is typically on par with (and sometimes better than) that of C/C++"

Nice! A super-fast LLVM compiler for Python! Well done!

You know, if Python is one of the world's most popular languages, and it was originally implemented as a dynamic and interpreted language (but fast compilers can be written for it, as evinced by Codon!) -- then maybe it would make sense to take languages that were implemented as compilers -- and re-implement them as dynamic interpreted languages!

Oh sure -- that would slow them down by 10x to 100x!

But even so -- the dynamic interpreted version of a previously compiled-only language -- might be a whole lot more beginner-friendly!

In other words, typically in dynamic interpreted languages -- a beginner can use a REPL loop or other device -- to make realtime changes to a program as it is running -- something that is usually impossible with a compiled language...

The possibilities for easy logging, debugging, and introspection of a program -- are typically greater/easier -- in interpreted dynamic languages...

Oh sure, someone can do all of those things in compiled languages too -- but typically the additional set-up to accomplish them is more involved and nuanced -- beginners typically can't do those things easily!
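For instance, in an interpreted language a running program (or REPL session) can rebind a function on the fly -- something a plain ahead-of-time-compiled binary can't do without extra machinery. A minimal sketch:

```python
def greet() -> str:
    return "hello, v1"

print(greet())

# Later in the same session, just redefine it -- no recompile,
# no restart; callers immediately see the new behavior.
def greet() -> str:
    return "hello, v2"

print(greet())
```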

So, I think when I think about programming languages from this point forward -- I'm going to think about them as having "two halves":

One half which is a compiled version.

And another half -- which is a dynamic interpreted version...

Usually when a new programming language is created in the world, it is created either as a compiled language or as a dynamic interpreted language -- but never both at the same time!

Usually it takes the work of a third party to port a given language from one domain to the other -- usually from dynamic interpreted to compiled, but sometimes (as with scripting languages derived from compiled languages) in the reverse!

Point is: There are benefits to be derived from each paradigm, both dynamic interpreted and compiled!

So why do we currently look at/think about -- most computer languages -- as either one or the other?

I'm going to be looking at all computer languages as potentially both, from this point forward...

(Related: "Stop Writing Dead Programs" by Jack Rusher (Strange Loop 2022): https://www.youtube.com/watch?v=8Ab3ArE8W3s&t=1383s)



