Hacker News
obastani on Dec 23, 2022 | on: SantaCoder: A new 1.1B code model for generation a...
Just out of curiosity, in what sense is Codex better trained than CodeGen?
moyix on Dec 23, 2022
OpenAI hasn't said exactly how they trained code-davinci-002, so this is speculative, but I'm reasonably sure it was trained on more data and more languages than CodeGen, and for longer. It was also trained using fill-in-the-middle [1].
[1] https://arxiv.org/abs/2207.14255
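The fill-in-the-middle idea in [1] can be sketched roughly as a data transformation: each training document is split at two random points into prefix, middle, and suffix, and the pieces are rearranged so a left-to-right model learns to generate the middle span last. A minimal illustration, where the sentinel strings are placeholders (real models use dedicated special tokens, and the paper describes further variants):

```python
import random

# Illustrative sentinel markers; actual tokenizers reserve special
# tokens for these roles rather than literal strings.
PRE, SUF, MID = "<PRE>", "<SUF>", "<MID>"

def fim_transform(doc: str, rng: random.Random) -> str:
    """Rearrange a document into prefix/suffix/middle order so that an
    autoregressive model can be trained to infill the middle span."""
    # Pick two distinct split points, sorted so i <= j.
    i, j = sorted(rng.sample(range(len(doc) + 1), 2))
    prefix, middle, suffix = doc[:i], doc[i:j], doc[j:]
    return f"{PRE}{prefix}{SUF}{suffix}{MID}{middle}"
```

At inference time the model is prompted with `<PRE>prefix<SUF>suffix<MID>` and generates the infill, which is why a model trained this way can complete code in the middle of a file, not only at the end.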