At the end: "Thank you to GPT 4o and o4 for discussions, research, and drafting."
At first I thought that was a nice way to handle credit, but on further thought I wonder whether it's necessary, because the baseline assumption is that everyone is using LLMs to help them write.
A counterpoint is that googling "thank you linux" turns up a lot of hits. "thank you linux for opening my eyes to a bigger world" is a typical comment.
I thought the article was well written. I'm assuming the author did most of the writing himself, because it didn't sound like AI slop. I also assume he meant he uses AI to assist, not as the main driver.
It really wasn't well written. I contains factual errors that stand out like lighthouses showing the author had an idea about an article but doesn't actually know the material.
> I contains (sic) factual errors that stand out like lighthouses showing the author had an idea about an article but doesn't actually know the material.
Whoops ^ To be fair, technically, I also contain some factual errors, if you consider the rare genetic mutation or botched DNA transcription.
So far, I haven't found anything that I would consider to be a glaring factual error. What did I miss?
I'm not talking merely about a difference in imagination of how the past might have unfolded. If you view this as an alternative history, I think the author made a plausible case. Certainly not the only way; reasonable people can disagree.
I meant it was readable. It's speculative, but it's well-informed speculation, not clueless nonsense. I agree that fact checking becomes more important because LLMs hallucinate. I feel the same about vibe coding: if you don't know much about programming, then running vibe code is a risky bet (depending on the criticality of the problem).