
Maybe it's less about "Human VS Robot" and more about exposure to "Original thoughts VS mass-produced average thoughts".

I don't think a human mind would improve in an echo chamber with no new information. I think the reason human minds improve is that we're exposed to new, original, and/or different thoughts that we hadn't considered or come across before.

Meanwhile, an LLM will just regurgitate the most likely next token given the tokens that came before, so there isn't any originality there; hence output from one LLM cannot improve another LLM. There is nothing new to be learned, basically.
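The selection step being described can be sketched with a toy model. This is a hypothetical bigram table standing in for a real LLM (which conditions on the whole context, not just the last token, and usually samples with temperature rather than taking a strict argmax), but it illustrates the point about determinism: greedy decoding always picks the single most likely continuation, so the same input yields the same output every time.

```python
# Toy bigram "model": hypothetical next-token probabilities, for illustration only.
TOY_BIGRAM_PROBS = {
    "the": {"cat": 0.6, "dog": 0.3, "<end>": 0.1},
    "cat": {"sat": 0.7, "ran": 0.2, "<end>": 0.1},
    "sat": {"<end>": 0.9, "down": 0.1},
}

def greedy_decode(start: str, max_tokens: int = 10) -> list[str]:
    """Repeatedly emit the most likely next token until <end> or an unknown token."""
    tokens = [start]
    for _ in range(max_tokens):
        probs = TOY_BIGRAM_PROBS.get(tokens[-1])
        if probs is None:
            break
        next_token = max(probs, key=probs.get)  # argmax: no randomness, no novelty
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return tokens

print(greedy_decode("the"))  # deterministic: ['the', 'cat', 'sat'] every run
```

Real deployments soften this determinism by sampling from the distribution (temperature, top-p), but the distribution itself is still learned from the training data, which is the crux of the "nothing new" argument above.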



> I don't think a human mind would improve in an echo chamber with no new information

If this were true of humans, we would have never made it this far

Humans are very capable of looking around themselves, thinking "I can do better than this", and then trying to come up with ways to do so

LLMs are not


> Humans are very capable of looking around themselves and thinking "I can do better than this"

Doesn't this require at least some perspective of what "better than this" means, which you could only know with at least a bit of outside influence in one way or another?


Every human has feelings and instincts; those answer what "better than this" means.

Yes, even math and science were built on top of that feeling of "better than this", iterated over thousands of years.


Parsimony, explanatory power, and aesthetics. These are things that could be taught to a computer, and I think we will. We had to evolve them too.



