(reposting from locallama and lower down here) yep that's true.
one of my goals is to inspire and honor those that work on open source AI. Those people tend to be motivated by things like impact and the excitement of being part of something big. i know that's how i always feel when i'm around Berkeley and get to meet or work with OG BSD hackers or the people who helped invent core internet protocols.
those people are doing this kind of OSS work and sharing it with the world anyway, without any cash prize. i think of this as a sort of thank you gift for them. and also a way to maybe convince a few people to explore that path who might not have otherwise.
And the Linux kernel, curl, SQLite, and many other open source projects are worth infinitely more than their purchase price.
Also, you cut off the "from the benchmark" part; this doesn't expect it to solve any random GitHub issue, just the ones from the (presumably manually vetted and cleaned up) bench dataset.
The Linux kernel, curl, and SQLite don't require significant compute to develop, the kind of cost that puts a project out of reach of hobbyists and within reach only of organizations expecting a positive ROI.
Also, the prize doesn't require you to train a new foundation model, just that whatever you use is open weights or open source.
Theoretically, you might get away with Llama 3.3 (or any other model you think makes sense), a cleverly designed agentic system, and a fresh codebase-understanding approach, all at minimal compute cost.
(ok, probably not that easy, but just saying there's much more to AI coding than the underlying model)
I followed your link, but it doesn't seem to bear out your assertion. The two numbers mentioned in the article are
176 mil and 612 mil. Mind you, those weren't estimates of cost, but rather estimates of the cost to replace. The article is dated 2004, with an update in 2011.
Using the lines-of-code estimation, it crossed a billion in 2010 - again, the cost to replace. That has no relation to what it actually cost.
Getting from there to "tens of billions" seems a stretch. Assuming the bottom value of your estimate, 20 billion, and assuming a developer costs a million a year, that's 20,000 man-years of effort. Which implies something like 2,000 people (very well paid people) working continuously for the last decade.
> The two numbers mentioned in the article are 176 mil and 612 mil.
Those two numbers are from the intro. The postscript and the updates at the end mention $1.4b and $3b respectively.
The real cost is probably impossible to calculate, but that order of magnitude is a reasonable estimate IMHO, and absolutely comparable to, or even larger than, the compute costs for SOTA LLMs.
There are around 5000 active kernel devs; they are generally highly skilled and therefore highly paid, and they've been working for a lot longer than 10 years.
So doesn't seem that unlikely based on your estimates.
The Linux kernel has been in development since the nineties, not just for the last ten years. Also, 5000 contributors is a lot more than the 2000 from the GP's comment.
Let's ignore the years before the dotcom boom, since the dev community was probably much smaller, and assume an average of 3500 contributors since.
That's 25 years * 3500 contributors on average * 200k salary (total employee cost, not take home) = $17.5b
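A quick back-of-envelope check of that figure, as a small Python sketch (the time span, contributor count, and per-developer cost are the rough assumptions from this thread, not measured data):

    # Rough sanity check of the kernel labor-cost estimate above.
    # All inputs are guesses: post-dotcom-boom time span, average active
    # contributors, and fully loaded cost per developer-year.
    years = 25
    avg_contributors = 3500
    cost_per_dev_year = 200_000  # total employee cost, not take-home pay
    total = years * avg_contributors * cost_per_dev_year
    print(f"${total / 1e9:.1f}B")  # prints $17.5B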
If you're the only one who can come close. Kaggle competition prizes are about focusing smart people on the same problem. But it's very rare for one team to blow all the others out of the water. So if you wanted to make a business out of the problem, Kaggle will (probably) show the best you could do, and you'd still have no moat.
I hope the competition will inspire people to make breakthroughs in the open, so I won't take any rights to the IP; instead, the winning solutions must use open source code and open weight models.
It's 90% of a selection of new GitHub issues; we don't know the complexity of these. I don't think they'd ask the AI for a giant refactoring of the codebase, for example.
SWE-bench with a private final eval, so you can't hack the test set!
In a perfect world this wouldn't be necessary, but in the current research environment where benchmarks are the primary currency and are usually taken at face value, more unbiased evals with known methodology but hidden tests are exactly what we need.
Very cool to see "outcome oriented" prizes like this -- it's another way to fund research, perhaps. Will be curious to track who does this and whether success in the prize correlates with deep innovation ...
In response to my comment of "Realistically, an AI that can perform that well is worth a lot, lot more than $1M.", he said:
> yeah i agree. one of my goals is to inspire and honor those that work on open source AI.
> people who work on open source tend to be motivated by things like impact and the excitement of being part of something bigger than themselves - at least that's how i always feel when i'm around Berkeley and get to meet or work with OG BSD hackers and people who helped invent core internet protocols or the guys who invented RISC or more recently RISC-V
> those people are going to do this kind of OSS work and share it with the world anyway, without any cash prize. i think of this as a sort of thank you gift for them. and also a way to maybe convince a few people to explore that path who might not have otherwise.
This is why AI advances so quickly. There are easy economic mechanisms to encourage it, while AI safety laws have to go through an arduous process. Seems rather lopsided when the technology can be potentially dangerous. We should have mechanisms to take a step back and examine this stuff with more caution, mechanisms with force equal to the economic ones, but we don't. The Amish have a much better model.
Really? Let's say the bottom 70% of earners in the U.S. decided that AI was dangerous and its development should be stopped. Do you think the top 30% would allow that?
My entire point is that we don't have mechanisms to protect the people. I was not referring to the power structure of the nations, or those with the most money.
The only reasonable way to cheat on this would be to find real bugs in many repos, train your models on the solutions, wait until the cut-off period, report those bugs, propose PRs, and hope your bugs get selected. Pretty small chances, tbh, and probably not worth the rewards (the 90% solve rate is pretty much impossible given the constraints - 4x L4s and ~4-6 min / problem. There's no way any models that can be run on those machines under those time limits are that much better than the SotA frontier models).
1) since we are creating a contamination-free version of SWE-bench (i.e., scraping a new test set after submissions are frozen), it is guaranteed that agents in this contest can't "cheat", i.e., models can't have trained on the benchmark / agents can't memorize answers.
2) as a general rule in life, don't cheat on things (not that there aren't exceptions)
https://kprize.ai/