Reputation is, IMO, the key. And not just for code, but for natural language too.
We're going to have to get used to publication being more important than authorship: I published this, therefore I stand behind every word of it. It might be 5% chatbot or 95% chatbot, but if it's wrong, people should hold me to account, not my tools.
No "oh, the chatbot did it" get out of jail free cards.
I think this is a big reason that agents aren't as prevalent as one might otherwise expect. Quality control is very important in my job (legal space, but IANAL), and while LLMs could do a lot of what we do, having someone whose reputation and career progression is effectively on the line is the biggest incentive to keep the work error-free - that dynamic just isn't there with LLMs.
Right: the one thing an LLM will never be able to do is stake its credibility on the quality or accuracy of its output.
I want another human to say "to the best of my knowledge this information is worth my time". Then if they waste my time I can pay them less attention in the future.
Bingo. Accountability is a big part of why people are fearful of LLMs and complain about them: essentially, they want to avoid having to be accountable.
If you can't explain why you're putting some code in, you don't understand it, and that's not really acceptable.