Now imagine the LLM is trained not on words but on syllables: same language, just handled at the syllable level.
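To make that concrete, here's a minimal sketch of syllable-level tokenization, assuming a naive vowel-group heuristic (split_syllables is a made-up helper; a real pipeline would use a hyphenation dictionary or a learned subword vocabulary):

    import re

    def split_syllables(word):
        # Naive heuristic: a syllable is a consonant run followed by
        # a vowel run; leftover trailing consonants join the last one.
        parts = re.findall(r"[^aeiouy]*[aeiouy]+", word, flags=re.I)
        if not parts:
            return [word]
        tail = word[len("".join(parts)):]
        if tail:
            parts[-1] += tail
        return parts

    print(split_syllables("language"))  # ['la', 'ngua', 'ge']
    print(split_syllables("tokens"))    # ['to', 'kens']

The model then sees "la ngua ge" where a word-level model sees "language", so it can emit syllable sequences that were never a word in its training data.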

Doesn’t the problem then shift from “can it invent?” (yes, it can) to “are the inventions good?”

For some definitions of “good” you could automate the check. I think that’s where it’s going to get both really useful and really bizarre.
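For instance, if “good” just means “pronounceable under rough English phonotactics”, the check is mechanical. A toy filter reusing split_syllables (and re) from the sketch above; the onset whitelist is illustrative, nowhere near exhaustive:

    ONSETS = {"", "b", "bl", "br", "ch", "cl", "cr", "d", "dr", "f",
              "fl", "fr", "g", "gl", "gr", "h", "k", "l", "m", "n",
              "p", "pl", "pr", "r", "s", "sh", "sl", "sm", "sn",
              "sp", "st", "str", "t", "th", "tr", "v", "w"}

    def pronounceable(word):
        # Accept a coinage only if every syllable begins with a
        # legal onset cluster from the whitelist.
        return all(
            re.match(r"[^aeiouy]*", syl).group().lower() in ONSETS
            for syl in split_syllables(word)
        )

    print(pronounceable("blorft"))  # True -- 'bl' is a legal onset
    print(pronounceable("qwzzk"))   # False -- 'qwzzk' is not

The bizarre part is that anything passing such a filter counts as “good”, whether or not a human would ever want to say it.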
