Yeah, I kind of agree: when LLMs work REALLY well for autocompleting your codebase, that might be an indication that the language and library abstractions you use don't fit the problem very well.
Code is read more than it's written. And it should be written to be read.
If you are barfing out a lot of auto-completed stuff, it's probably not very easy to read.
You have to read code to maintain it, modify it, analyze its performance, handle production incidents, etc.
> If you are barfing out a lot of auto-completed stuff, it's probably not very easy to read.
From my experience using LLMs, I'd guess the opposite. LLMs aren't great at code-golf style, but they're great at the "statistically likely boilerplate". They max out at a few dozen lines at the extreme end, so you won't get much more than class structures or a method at a time, which is plenty for a human in the loop to guide them in the right direction.
I'm guessing the LLM code at Google is nearly indistinguishable from the rest of it for a verbose language with strong style expectations like Java. Google must have millions of lines of Java, and a formatter that already maintains standards. An LLM spitting out helper methods and basic ORM queries will look just like any other engineer's code (after tweaking to ensure it compiles).
If you already apply a code-formatter or a style guide in your organization, I'm guessing you'd find that LLM code looks and reads a lot like the rest of your code.
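To make the "statistically likely boilerplate" concrete, here's a minimal sketch of the kind of Java an LLM completes almost deterministically. The `UserRecord` class and its fields are invented for illustration; the point is that once the fields exist, every remaining line is predictable, and a formatter makes it indistinguishable from hand-written code.

```java
import java.util.Objects;

// Hypothetical DTO: the archetypal "statistically likely boilerplate".
public class UserRecord {
    private final String name;
    private final int id;

    public UserRecord(String name, int id) {
        this.name = name;
        this.id = id;
    }

    // Given the fields above, everything below is near-mechanical --
    // exactly the completion an LLM gets right with little review needed.
    public String getName() { return name; }
    public int getId() { return id; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof UserRecord)) return false;
        UserRecord other = (UserRecord) o;
        return id == other.id && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, id);
    }
}
```

None of this is interesting to read or write, which is the crux of both sides here: the LLM saves typing, but it also makes it cheap to keep producing this style instead of replacing it with a better abstraction (e.g. a Java `record`).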
Yes, it can make stuff that fits in with the rest of the codebase.
But I am saying it's never going to make the code significantly better.
In my experience, code naturally gets worse over time as you add features, and make the codebase bigger. You have to apply some effort and ingenuity to keep it manageable.
So if everyone is using LLMs to barf out status quo code, eventually you will end up with something not very readable. It might look OK locally, but the readability you care about is a global concern.
This exactly. There is so much boilerplate involved in writing anything inside Google.
AI was great for cutting that down a bit. It's still nowhere near as lean as code in the outside and/or non-Java world.
Which isn't to say that this isn't progress -- just that the stat should be taken in context.