Hacker News

I seem to recall doing some math and finding that the output limit for some LLM models was pretty close to that. If so, this book is likely the result of a single prompt, which seems to me like a lazy way to be lazy.


