
Yes. It is possible to do additional things with the model outputs or to add further prompt inputs... That is irrelevant to the fact that the intelligence -- the "trained" part -- is a fixed model. Whatever additionally processes and monitors those inputs and outputs would have completely different intelligence characteristics from the original model. They are, by the very definition of inputs and outputs, separate.

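To make the point concrete, here is a minimal sketch in plain Python (frozen_model, preprocess, and monitor are stand-ins, not any real LLM API): the trained model is a fixed function from input text to output text, and any prompt preprocessing or output monitoring is a separate function composed around it, leaving the model itself untouched.

    from typing import Callable

    # Stand-in for the trained model: fixed weights, so for this argument it
    # is just a pure function from an input string to an output string.
    def frozen_model(prompt: str) -> str:
        return f"<completion for: {prompt!r}>"

    # Separately written pieces that sit entirely outside the model.
    def preprocess(user_input: str) -> str:
        return "System: be helpful.\nUser: " + user_input

    def monitor(output: str) -> str:
        return output if "forbidden" not in output else "[filtered]"

    def wrapped(user_input: str,
                model: Callable[[str], str] = frozen_model) -> str:
        # The wrapper is a composition: monitor(model(preprocess(x))).
        # The model's parameters are untouched; only its inputs and outputs
        # are handled differently.
        return monitor(model(preprocess(user_input)))

    print(wrapped("Explain transformers briefly."))

The wrapper can be swapped or extended freely, but nothing it does alters the fixed mapping implemented by the trained weights.
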
Models of models and interacting models are a fascinating research topic, but such systems are nowhere near as capable as LLMs are at generating plausible token sequences.



