Not a hot take, I think you're right. If it were scaled up to 70b, I think it would be better than Llama 2 70b. And if it were then scaled up to 180b and turned into an MoE, it might be better than GPT-4.

