
As your wording implies, fine-tuning is restricted to the smaller models, i.e. babbage, curie, etc.

You can generate the training data with GPT-3.5 and GPT-4 and fine-tune the smaller models on the resulting output. For many tasks this produces robust results, and the tuned models are also faster than gpt-3.5-turbo.
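A minimal sketch of the data-preparation step, assuming you already have completions generated by the larger model (the example pairs below are hypothetical). It formats prompt/completion pairs into the JSONL layout the legacy fine-tuning endpoint expects, with a separator at the end of each prompt and a stop sequence at the end of each completion:

```python
import json

# Hypothetical pairs; in practice the completions would come from
# gpt-3.5-turbo or gpt-4 responses to your prompts.
examples = [
    ("Classify the sentiment: 'Great product!'", "positive"),
    ("Classify the sentiment: 'Broke after a day.'", "negative"),
]

# Fixed separator ending each prompt and stop sequence ending each
# completion, per the legacy fine-tuning data conventions.
SEPARATOR = "\n\n###\n\n"
STOP = " END"

def to_finetune_jsonl(pairs):
    """Serialize (prompt, completion) pairs as fine-tune JSONL."""
    lines = []
    for prompt, completion in pairs:
        record = {
            "prompt": prompt + SEPARATOR,
            # Completions start with a leading space for tokenization.
            "completion": " " + completion + STOP,
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

with open("train.jsonl", "w") as f:
    f.write(to_finetune_jsonl(examples))
```

You would then upload `train.jsonl` and start the job against a small base model, e.g. with the legacy CLI: `openai api fine_tunes.create -t train.jsonl -m curie`.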



