If it were self-evident, then I wouldn’t need to ask for evidence. And I imagine you wouldn’t need to be waving your hands making excuses for the lack of it.
To me it's self-evident, but it's probably one causal step removed from what you'd like to see. I can't point to specific finished or released projects that were substantially accelerated by use of GenAI[0]. But I can point out that nearly everyone I've talked with in the last year who does any kind of white-collar job is either afraid of LLMs, actively using LLMs at work and finding them very useful, or both.
It's not possible for this level of impact at the bottom to produce no net change at the top, so I propose that the effects may be delayed and not immediately apparent. LLMs are still a new thing on business timelines.
TL;DR: just wait a bit more.
One thing I can hint at, but can't go into details on, is that I personally know of at least one enterprise-grade project whose roadmap and scoping - and therefore, funding - are critically dependent on AI speeding up a significant amount of development and devops tasks by at least 2-3x; that aspect is understood by developers, managers, customers, and investors alike, and is not disputed.
So, again: just wait a little longer.
--
[0] - Except maybe for Aider, whose author posts, with each release, how much of its own code Aider wrote; it's usually way above 50%.
> One thing I can hint at, but can't go into details on, is that I personally know of at least one enterprise-grade project whose roadmap and scoping - and therefore, funding - are critically dependent on AI speeding up a significant amount of development and devops tasks by at least 2-3x; that aspect is understood by developers, managers, customers, and investors alike, and is not disputed.
Mm. I can now see why, in your other comment, you want to keep up with the SOTA.
It's actually unrelated. I try to keep up with the SOTA because, if I'm not using the current-best model, then every time I struggle with it or get poor results, I keep wondering whether I'm just wasting my time fighting something a stronger model would handle without problems. It's a personal thing; I've been like this ever since I got API access to GPT-4.
My use of LLMs isn't all that heavy, and I don't have any special early access or anything. It's just that tokens are so cheap that, for casual personal and professional use, the pricing difference didn't matter. Switching to a stronger model meant my average monthly bill went from $2 to $10 or so. These amounts were immaterial.
Use patterns and pricing change, though, and recently that made some SOTA models (notably o3, gpt-4.5, and the most recent Opus model) too expensive for my use.
As for the project I referred to, let's put it this way: the reference point is what was SOTA ~2-3 months ago (Sonnet 3.7, Gemini 2.5 Pro). And the assumptions aren't just wishful thinking - they're based on actual experience using these models (plus some tools) to speed up a specific kind of work.
Schroedinger's AI. It's everywhere, but you can't point to it because it's apparently indistinguishable from humans, except for the shitty AI, which is just shitty AI.