Here's a thought I had: there may be a level of data and training at which large language models resort to plausible bullshit more often, not less. Someone was gushing about how smart gpt-4 looked in this post, for example:
The more areas of study you draw on, the more complicated the relationships (or non-relationships) between them become. But the difficulty of, and knowledge needed for, bullshitting about them doesn't increase nearly as much.
https://twitter.com/Scobleizer/status/1560843951287898112