> Whenever I sit down to write some code, be it a large implementation or a small function, I think about what other people (or future versions of myself) will struggle with when interacting with the code. Is it clear and concise? Is it too clever? Is it too easy to write a subtle bug when making changes? Have I made it totally clear that X is relying on Y dangerous behavior by adding a comment or intentionally making it visible in some other way?
> It goes the other way too. If I know someone well (or their style) then it makes evaluating their code easier. The more time I spend in a codebase the better idea I have of what the writer was trying to do.
What I believe you are describing is a general definition of "understanding", as I am sure you are aware. And given your 20+ years of experience, your summary of:
> So the thought of opening up a codebase that was cobbled together by an AI is just scary to me. Subtle bugs and errors would be equally distributed across the whole thing instead of where the writer was less competent (as is often the case).
Is not only entirely understandable (pardon the pun), but to be expected, as the algorithms employed lack the crucial bit you identify: understanding.
> The whole thing just sounds like a gargantuan mess.
As it does to most who envision having to live with artifacts produced by a statistical predictive-text algorithm.
> Change my mind.
One cannot, because understanding, as people know it, is intrinsic to each person by definition. It exists as a concept within the person who possesses it and is defined entirely by that person.