People have been calling new technologies hype bubbles for hundreds of years, and saying that doesn’t make ChatGPT worth any less. I have definitely been surprised by it, and I’m expecting AGI a lot faster now. Even if literally all it did was predict what the average internet user would write in a given context, that would be huge: when you integrate all the little advantages of all the weird things one person knows and another doesn’t, the collective knowledge is worth more than the sum of the parts. It’s a tool that can tap into the sum total of human knowledge 24/7, faster than I can come up with questions for it. Mainly I’m excited to play with larger-context models so I can include more code and get big-picture ideas about groups of things too numerous for my feeble meat brain to reason about. Holding only 7-9 things in working memory has always been the thing that would make humans inferior to AI in the long run. Even if it’s not insanely smart (and remember: intelligence is a probabilistic concept, and computers are great at multiplying probabilities precisely), if the thing can fit more in memory than us, type faster than us, and never get tired or overwhelmed and give up (imagine your capability in a world where you had no tiredness and unlimited self-discipline), then in time it’s inevitable the transformers put us all to shame. And the more complicated the topic, the bigger the shaming, since more complicated topics have exponentially more relations to reason about. Who’s gonna trust a human doctor to diagnose their stuff if the human brain holds 9 things and the AI holds thousands?
> Who’s gonna trust a human doctor to diagnose their stuff if the human brain holds 9 things and the AI holds thousands?
The human brain can hold much more than 9 things, and even though AI will soon be used broadly in medicine, I still want the final diagnosis made by a human.
Once true AGI arrives, I might change my opinion, but that might take a while.
9 things is considered the standard for working memory (kind of like processor registers); for people with ADHD it's even lower, around 3-5.
Try copying a number from one piece of paper to another. If it's more than 7-9 digits, you won't do it in one pass unless you spend extra time memorizing it.
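The copying experiment above is basically a digit-span test. Here's a minimal self-test sketch (my own illustration; the function names and the 4-12 span range are assumptions, and clearing the terminal via `os.system` is a rough portability shortcut):

```python
import os
import random
import time

def make_digits(length):
    """Random digit string of the given length."""
    return "".join(random.choice("0123456789") for _ in range(length))

def run_trial(length, show_seconds=2.0):
    """Show a digit string briefly, clear the screen, check the user's recall."""
    digits = make_digits(length)
    print(f"Memorize: {digits}")
    time.sleep(show_seconds)
    os.system("cls" if os.name == "nt" else "clear")  # rough screen clear
    return input(f"Type the {length} digits back: ").strip() == digits

def digit_span_test(start=4, max_len=12):
    """Grow the span until the user misses; return the longest success."""
    best = 0
    for length in range(start, max_len + 1):
        if not run_trial(length):
            break
        best = length
    return best

if __name__ == "__main__":
    print(f"Your digit span: {digit_span_test()}")
```

Most people top out around the 5-9 range here unless they chunk the digits or rehearse, which is exactly the paper-copying effect described above.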
That can be increased quite a bit with practice. But it's also not important. It's just the cache memory -- it isn't the limit of what can be learned and recalled.
It is a limit on what you can reason about without a piece of paper.
I’m proficient at math, but my working memory is around 6, so I cannot add two three-digit numbers in my head (unless I can see the numbers to be added in front of me).
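The load is easy to see if you spell out schoolbook column addition. In this sketch (my own illustration, not a cognitive model), each step requires holding the two current digits, the running carry, and every result digit produced so far, which for three-digit operands quickly exceeds a ~6-item working memory:

```python
def column_add(a, b):
    """Schoolbook column addition for two 3-digit numbers.

    At each column you must keep in mind: the two digits being added,
    the carry from the previous column, and all result digits so far.
    """
    assert 100 <= a <= 999 and 100 <= b <= 999
    digits_a = [a // 100, (a // 10) % 10, a % 10]
    digits_b = [b // 100, (b // 10) % 10, b % 10]
    carry, out = 0, []
    for x, y in zip(reversed(digits_a), reversed(digits_b)):
        s = x + y + carry       # two digits plus the remembered carry
        out.append(s % 10)      # one more result digit to keep track of
        carry = s // 10
    if carry:
        out.append(carry)
    return int("".join(str(d) for d in reversed(out)))
```

For example, `column_add(384, 579)` walks through three carry steps to reach 963; doing the same purely in your head means juggling six input digits plus the intermediate state.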
Revolutions do happen, but not the way we expect. My anecdotal experience: no one in my team of about 30 people developing software uses ChatGPT or similar in their day-to-day work. This may change, or it may not.
AI is being used in medicine already. For example, in diagnostics: most new diagnostic devices (e.g. CT scanners, ECG machines) include AI systems that suggest an interpretation and point towards possible problems a doctor might occasionally miss.
Granted, currently deployed systems are mostly awful, way behind the state of the art, and therefore mostly useless. Maybe it's because designing medical devices and getting them approved takes so long. Maybe it's because the manufacturers put AI in there for marketing purposes only, assuming nobody will use the suggestions anyway. In any case, I strongly expect the trend to continue and these systems to become very useful quite soon.
> Who’s gonna trust a human doctor to diagnose their stuff if the human brain holds 9 things and the AI holds thousands?
I will. As another commenter says, the brain isn't limited to 9 things at all. There's no way that I'll trust the diagnosis of a machine that won't understand me.
If a doctor uses AI to help with research, that would be OK, as long as the doctor is the one actually making the diagnosis and prescribing the treatment.