cma · 6 months ago | on: OpenAI's new reasoning AI models hallucinate more
Wouldn't reasoning training be expected to cause catastrophic forgetting of the ground-truth factual knowledge learned in the main training run?
Do they keep mixing in the original training data?
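(To illustrate the question: "mixing in the original training data" usually means replay, i.e. interleaving batches from the old pretraining corpus with the new reasoning data at some ratio so the old knowledge keeps getting gradient signal. A minimal sketch below, assuming a PyTorch-style next-token training loop; the mixture ratio, dummy model, and data streams are all hypothetical, and nothing here reflects OpenAI's actual pipeline.)

```python
# Sketch of replay-style data mixing to limit catastrophic forgetting.
# Hypothetical names throughout: pretrain_stream, reasoning_stream, mixture_ratio.
import random

import torch
import torch.nn as nn

vocab, dim = 100, 32
model = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, vocab))
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def dummy_batch():
    # Stand-in for a (tokens, next-token targets) batch.
    x = torch.randint(0, vocab, (8, 16))
    y = torch.randint(0, vocab, (8, 16))
    return x, y

pretrain_stream = dummy_batch    # replayed batches from the original corpus
reasoning_stream = dummy_batch   # new reasoning / chain-of-thought data
mixture_ratio = 0.25             # fraction of steps drawn from the old corpus

for step in range(100):
    # With probability mixture_ratio, train on replayed pretraining data;
    # otherwise train on the new reasoning data.
    x, y = (pretrain_stream() if random.random() < mixture_ratio
            else reasoning_stream())
    logits = model(x)                                   # (batch, seq, vocab)
    loss = loss_fn(logits.reshape(-1, vocab), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```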