
I mean, that's literally what happened. At least initially, Tabnine was based on a GPT-2 model trained on code. Then GitHub launched Copilot using the OpenAI Codex model, which is based on GPT-3. I guess that's why several people have commented on the marked improvement when adopting GitHub Copilot.

I have no idea how Tabnine builds its models today, or how they perform compared to Copilot. One advantage could be the lower suggestion latency they claim to get from training smaller, more specialized models. But the way Copilot works for me is that it takes about as long to think as I do, and then it suggests a good chunk of code. If my thinking and Copilot's thinking match up, I can save myself a good bit of typing.

There's a decent comparison outlining the differences by Tabnine's CEO here (disclaimer - I work there). https://twitter.com/drorwe/status/1539293063117516801



