
Here's a thought I had: there may be a level of data and training at which large language models tend to resort to plausible bullshit more often, not less. Someone was gushing about how smart GPT-4 looked in this post, for example.

https://twitter.com/Scobleizer/status/1560843951287898112

The more areas of study a question spans, the more complicated the relationships (or non-relationships) between them become. But the difficulty of, and knowledge needed for, bullshitting about them doesn't increase as much.


