I hesitate to be this pessimistic. My current position: AI-generated code introduces new types of bugs at a high rate, so we need new ways to prevent them.
That's the "outer loop" problem. So many companies are focused on the "inner loop" right now: code generation. But the other side is the whole test, review, and merge process. Now that we have this influx of (sometimes mediocre) AI-generated code, how do we ensure that it's: