If that's the case, it turns out that what I want is a system that's "overfitted to the dataset" on code, since I'm getting incredibly useful results for code out of it.
(I'm not personally interested in the whole AGI thing.)
Good man, I never said anything about AGI. Why do you keep responding to things I never said?
This whole exchange was you having knee-jerk reactions to things you imagined I said. It has been incredibly frustrating. And at the end you shrug and say "eh it's useful to me"??
I am talking about this because of deceitfulness, resource efficiency, and the societal implications of the technology.
"That is the premise of LLM-as-AI" - I assumed that was an AGI reference. My definition of AGI is pretty much "hyped AI". What did you mean by "LLM-as-AI"?
In my own writing I don't even use the term "AI" very often because its meaning is so vague.
(Worse than that, I said "... is uninformed in my opinion", which was rude because I was saying it about a strawman argument.)
I did that thing where I saw an excuse to bang on about one of my pet peeves (people saying "LLMs can't create new code if it's not already in their training data") and jumped at the opportunity.
I've tried to continue the rest of the conversation in good faith though. I'm sorry if it didn't come across that way.
Simon, intelligence exists (and unintelligence exists). When you write «I'm not claiming LLMs can invent new computer science», you imply intelligence exists.
We can implement it. And it is somewhat urgent, because intelligence is a very desirable form of wealth and there is a definite scarcity of it. It is even more urgent now that the recent hype has made some people perversely confused about the idea of intelligence.