
> BUT where it will happen is when multiple instances of these language models compete with each other.

That's what everyone else is saying already. Not sure what exactly you are arguing against.




Sorry, a bit late replying; I've been away. It isn't the competition itself that's bad; it's that anything produced by these models can't be tweaked out, and so their outputs become "circular" sources. There will be no way to test for "truth". For an individual bot, at least, the training data can be tweaked to exclude its own output as a source; but the competition will make the validity or "truth" of most data questionable. I suppose an individual LLM could be trained for "truth" (reality?), but it becomes almost impossible for an LLM to discern truth when the sources it is analyzing were generated by another LLM.



