
Looks like 8K context length. Seems to compare well against Gemini Pro 1.5 and Claude 3 Sonnet according to the included benchmarks.


If it's limited to an 8k context length then it's not competing with Sonnet at all, IMO. Sonnet has a 200k context length and is decent at pulling information out of it. With just 8k of context, this model won't be great for RAG applications; instead it'll be used for chat and for transforming data from one format to another.


They explain that they will be releasing versions with longer context lengths in the future.

It’s better to make your RAG system work well on small context first anyway.
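To illustrate what working within a small context looks like, here's a minimal sketch of greedily packing retrieved chunks into a fixed token budget. The 6k budget, the chars-per-token heuristic, and the function names are all illustrative assumptions, not any particular library's API:

    def estimate_tokens(text: str) -> int:
        # Rough heuristic: ~4 characters per token for English prose.
        return len(text) // 4

    def pack_context(chunks: list[str], budget_tokens: int = 6000) -> str:
        """Greedily pack the highest-ranked chunks until the budget
        is spent. Assumes `chunks` is already sorted by relevance."""
        packed, used = [], 0
        for chunk in chunks:
            cost = estimate_tokens(chunk)
            if used + cost > budget_tokens:
                break
            packed.append(chunk)
            used += cost
        return "\n\n".join(packed)

Leaving a couple of thousand tokens of headroom for the question and the answer is the point: with an 8k window you can only pack a handful of chunks, so retrieval ranking quality matters a lot more.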


That's true when you're dealing with a domain that's well represented in the training data and your return type isn't complicated. If you're doing anything nuanced, though, you can burn 10k tokens just getting the model to answer consistently and structure its output the way you want.
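To make that concrete, here's a sketch of where those tokens go when forcing structured output: every retry re-sends the whole schema and few-shot example before the actual question. `call_model` is a hypothetical stand-in for whatever completion function you use, and the schema is invented for illustration:

    import json

    SCHEMA_PROMPT = """Respond ONLY with JSON matching this shape:
    {"answer": string, "citations": [string], "confidence": number}

    Example:
    {"answer": "42", "citations": ["doc-3"], "confidence": 0.9}
    """

    def ask_structured(call_model, question: str, max_retries: int = 2) -> dict:
        prompt = SCHEMA_PROMPT + "\nQuestion: " + question
        for _ in range(max_retries + 1):
            raw = call_model(prompt)
            try:
                return json.loads(raw)
            except json.JSONDecodeError:
                # Each retry re-sends the schema and example; this is
                # where the token budget quietly disappears.
                continue
        raise ValueError("model never produced valid JSON")

Multiply that fixed overhead by a few retries and a few-shot example per field, and the 10k figure stops looking like an exaggeration.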



