I agree, and one more thing: it _is_ useful.
If somebody thinks it's just a bullshit generator and 100 million people using it after 2 months are wrong, the problem is with the person who didn't put in the effort to learn to use it effectively.
> If somebody thinks it's just a bullshit generator and 100 million people using it after 2 months are wrong, the problem is with the person who didn't put in the effort to learn to use it effectively.
Alternative explanation: Most people are happy to generate bullshit, and love that they now have a way to do it with zero effort. Bullshit copy, bullshit art, bullshit code — the sky's the limit!
That's sometimes easy and sometimes not, depending on the boilerplate requirements of the language/framework/library one uses. Other times it introduces a trade-off between repetition and code complexity/readability, so it's not the silver bullet your comment suggests.
My humble prediction is that this will happen regardless of AI quality improvements. People imagine socialist sci-fi scenarios with crystal balls on green lawns telling them how to build spaceships. But reality will grow organically from where we are now:
Attention and profiling are money and power.
AI will steal attention by spamming identities, voting for/against products and opinions, and discriminating in ways that are hard to reveal, at a scale never seen before. It will read all your HN comments and clip a tag onto your ear. The AI war will not be a T-1000 walking on skulls; it will happen in our feeds, and platforms/governments will have no idea who's artificial or Russian this time.
Honestly, it's pretty concerning to think about what bad actors could do to the internet and to societies just by automating and weaponizing existing ChatGPT and image/video tech.
Which, if my experience with users is anything to go by, will be 99% of them. Even worse are people asking it about subjects they have no domain expertise in, and having to wonder (or worse, just assume) that it's correct. At the very least, it should give some easily understood indication of 'confidence'.
edit: I get very strong flashbacks to when Wikipedia was new and people had to learn the hard way that it wasn't always correct/up-to-date/etc.
The thing is, it's always confident. It gave you the highest-confidence answer out of many high-confidence answers. Low confidence would only imply the model hadn't seen those particular words before. A Markov chain does not suffer from imposter syndrome.
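To make that concrete, here's a minimal sketch in plain Python. The prompt and the logit values are made up for illustration; the point is that a softmax over next-token scores always produces a "most confident" answer, whether or not it's true:

    import math

    def softmax(logits):
        # Turn raw scores into a probability distribution over tokens.
        m = max(logits.values())
        exps = {tok: math.exp(v - m) for tok, v in logits.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    # Hypothetical next-token scores after "The capital of Australia is ..."
    logits = {"Sydney": 2.1, "Canberra": 1.9, "Melbourne": 0.3}

    probs = softmax(logits)
    print(probs)  # ~{'Sydney': 0.50, 'Canberra': 0.41, 'Melbourne': 0.08}
    print(max(probs, key=probs.get))  # 'Sydney' -- wrong, but emitted just as confidently

The top token wins either way; there's no separate "I don't know" signal unless you build one on top.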
Heh - I'd say the thing is to get the general public to realize that, and not assume "computers are always right". (E.g., I don't think the problem is ChatGPT being 'wrong' - I think the problems start when people assume it's right...)