Hacker News

Supposedly they were training on feedback provided by the plugin itself, but that approach doesn't make sense to me because:

- I don't remember the shortcuts most of the time.

- When I run completions, I do a double take and realise they're wrong.

- I am not a good source of data.

All this information is being fed back into the model as positive feedback, so perhaps that's a reason it has gone downhill.
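The worry above (accepting completions without noticing they're wrong makes the feedback signal nearly useless) can be sketched with a toy simulation. All the numbers here are hypothetical, purely to illustrate the point: if users accept wrong completions almost as often as correct ones, the "positively reinforced" dataset is barely better than the raw output distribution.

```python
import random

def accepted_correct_fraction(n=10_000, base_correct=0.8,
                              p_accept_correct=0.6, p_accept_wrong=0.5,
                              seed=0):
    """Simulate users accepting/rejecting completions, then measure what
    fraction of the *accepted* (i.e. positively reinforced) completions
    were actually correct. All probabilities are made-up illustrations."""
    rng = random.Random(seed)
    accepted = accepted_and_correct = 0
    for _ in range(n):
        correct = rng.random() < base_correct          # completion quality
        p_accept = p_accept_correct if correct else p_accept_wrong
        if rng.random() < p_accept:                    # user keeps it
            accepted += 1
            accepted_and_correct += correct
    return accepted_and_correct / accepted

# Inattentive users (accept wrong code half the time): the "positive
# feedback" set is scarcely cleaner than the base rate of 0.8.
weak_signal = accepted_correct_fraction(p_accept_wrong=0.5)

# Attentive users (almost never accept wrong code): the accepted set
# is much cleaner, so training on it could actually help.
strong_signal = accepted_correct_fraction(p_accept_wrong=0.05)

print(weak_signal, strong_signal)
```

In the degenerate case where wrong completions are accepted exactly as often as correct ones, the accepted set's quality equals the base rate, i.e. the feedback carries no information at all. That matches the "I am not a good source of data" point.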

I recall it being amazing at coding back in the day; now I can't trust it.

Of course, this is anecdotal, which is problematic in itself, but I have definitely noticed it failing and stopping mid-autocomplete, or providing completely irrelevant code.



It could also be that back in the day they were training on a bit more code than they should have been (e.g. private repos), and now that the lawyers are more involved, the training set is smaller/more sanitized.

Pure speculation of course.



