100x is such a crazy claim to me - you’re saying you can do in 4 days what would have previously taken over a year. 5 weeks and you can accomplish what would have taken you a decade without LLMs.
In most cases I would never have undertaken those projects at all without AI. One of the projects that is currently live and making me money took about 1 working day with Claude Code. It’s not something I ever would have started without Claude Code, because I know I wouldn’t have had the time for it. I have built websites of similar complexity in the past, and since they were free-time endeavors, they never quite crossed the finish line into commerciality even after several years of on-again-off-again work. So how do you account for that with a time multiplier? 100x? Infinite speedup? The counterfactual is a world where the product doesn’t exist at all.
This is where most of the “speedup” happens. It’s more a speedup in overall effectiveness than in raw “coding speed.” Another example is a web API for which I was able to very quickly release comprehensive client-side SDKs in multiple languages. This is exactly the kind of deterministic boilerplate work LLMs are ideal for, and that would otherwise require a lot of typing and a lot of looking up details in unfamiliar languages. How long would it have taken me to write SDKs in all those languages by hand? I don’t really know; I simply wouldn’t have done it. I would have just written one SDK in Python and called it good enough.
If you really twist my arm and ask me to estimate the speedup on some task that I would have done either way, then yeah, I still think a 100x speedup is the right order of magnitude, if we’re talking about Claude Code with Opus 4.1 specifically. In the past I spent about five years very carefully building a suite of tools for managing my simulation work and serving as a pre/post-processor. Obviously this wasn’t full-time work on the code itself, but the development progressed across that timeframe. I recently threw all that out and replaced it with stuff I rebuilt in about a week with AI. In this case I was leveraging a lot of the lessons I learned the first time I built it, so it’s not a fair one-to-one comparison, but you’re really never going to see a pure natural experiment for this sort of thing.
I think most people are in a professional position where they are externally rate limited. They can’t imagine being 100x more effective. There would be no point to it. In many cases they already sit around doing nothing all day, because they are waiting for other people or processes. I’m lucky not to be in such a position. There’s always somewhere I can apply energy and see results, so AI acts as an increasingly dramatic multiplier. This is a subtle but crucial point: if you never try to use AI in a way that would even hypothetically result in a big productivity multiplier (doing things you wouldn’t otherwise have done, doing a much more thorough job on the things you need to do, and intentionally trying to speed up your work on core tasks), then you can’t possibly know what the speedup factor is. People end up sounding like a medieval peasant who suddenly gets access to a motorcycle and complains that it doesn’t get them to the market faster, and then you find out they never actually ride it.
I wonder, have you sat down and tried to vibecode something with Claude Code? If so, what kind of multiplier would you find plausible?