The only thing I can think of when I read posts like that is the early days when crypto started to take off.
It's near the peak of the hype, but not quite there yet.
People may make drastic life changes due to AI soon. People will keep claiming it's the future and that everyone else is just blind. It'll be compared to the industrial revolution (or something similar).
It may take a while, and AI will always be around in products, but the hype will die down and it'll become just another piece of tech. And all these silly hype people, with their "INSANE CRAZY THINGS IN AI THIS WEEK!!! TRULY CRAZY!" will move on to the next well-marketed mediocre tech idea.
Don't compare this to smartphones, or to the internet. LLMs are more like the invention of the "makeup application robot", or the invention of those weird animatronics that look super realistic.
I wonder when the discussion about the climate impact of LLMs will start - just to draw more parallels to the overhyped crypto bubble.
I'm glad you guys are having fun, I guess. Just don't make drastic life changes based on LLMs existing.
“Weird animatronics”: that description is perfect in so many ways. I love it. Also, they can do most people's jobs better, faster, and more reliably than a person can. Maybe we need to add “effective” so the description better captures what they are.
I used ChatGPT to write some bash code on Thursday. It took about five seconds. It would have taken me a couple of hours to go look up the syntax I needed for the API I was connecting to. It wasn't great, but it was good. It was fast and it was cheap. I specced out (spec'd? However you spell it, you get the idea) some code for a couple of remote teams. I basically wanted them to do small half-hour tasks so we could measure progress toward a deadline. All of the remote teams took longer and produced worse results than simply pasting my tickets into GPT as prompts.
You’re right. They’re weird robots. But they’re good at stuff and that changes things. What specifically does it change? All the things they’re good at. What are they good at? Everything.
What does computer science look like in a world where auto-generated code is good enough? I don’t know, but I’m reluctant to say “exactly how it looked ten years ago”.
The hype is real. But there’s something solid obscured behind the swirling mists. I don’t know precisely how the future will look but I expect your naysaying might age poorly.
Of course they are useful, but they aren't replacing jobs with any meaningful creational spirit, if you will. They help, yes, like the invention of autocomplete or Google, but they're not nuking as many jobs as the hype suggests.
Except, crypto honestly seemed like it had utility but really didn't... I've had this thing create pretty decent YouTube Shorts with zero knowledge or very little advice on how to do it, and also tweets, Instagram posts, etc... It's now working on creating direct-to-Etsy store items from Midjourney using 'tiled layouts', digital downloads for paper/etc...
The people in charge of both of these projects are also using it to continually improve itself. One of them isn't a developer and doesn't know how to code; he's a VC. That's insane by itself. I've been through hype trains, but this is more foundational/bigger than anything we've experienced.
Even without AGI we could create a single piece of software that can deliver any layout a user wants, any outputs, from any inputs... it's accounting software, an ERP, a CRM, Paintshop, etc... everything in one app, because you don't even need to write functions; simply writing the function name and its attributes is enough for the AI to figure it out.
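As a toy illustration of the "function name and attributes are enough" idea (the function, its parameters, and the filled-in body here are all hypothetical examples, not any real product's API): you hand the model a bare stub, and it infers the intent from the name and signature alone.

```python
# A bare stub: just a name, typed parameters, and a one-line docstring.
# An LLM asked to "implement this" usually infers the intent from these.
def monthly_depreciation(cost: float, salvage: float, months: int) -> float:
    """Straight-line depreciation per month for an asset."""
    # The kind of body a model typically fills in from the signature alone:
    return (cost - salvage) / months

print(monthly_depreciation(1200.0, 200.0, 10))  # → 100.0
```

Whether generated bodies like this are reliable enough to skip review entirely is, of course, a separate question.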
>Except, crypto honestly seemed like it had utility but really didn't...
But crypto does have utility. In countries like Lebanon and Iran, there are people who are only able to live comfortably thanks to their ability to receive Monero in exchange for their work.
What about an AutoGPT that essentially becomes a 'replicator' for software?
Think about it: one company, one piece of software that can do anything, be anything, from an ERP system to a CRM, etc... People want a different UI/UX? Using A/B testing and analytics, it can know exactly what UI/UX YOU want and give you a personalized one, no big deal. It can be specialized software for real estate or the auto sales industry, etc., all while training its replacement in the background.
We're not far from having something like this. There's probably value in that to the person who creates it, but crypto never had the reach that AI has in the ways it could disrupt society, for better or worse.
What about the value of people being able to do things they previously never could? I personally know people saving 30+ hours and thousands of dollars because they no longer need to employ others to do work for them. They simply prompt ChatGPT. Hell, I even use it to generate website copy for clients and it's basically an instant result. LLMs and crypto shouldn't even be mentioned in the same sentence in terms of use cases.
This isn't even as bad as all the other things out there already. We're going to have a term for someone who's in a relationship with an AI soon enough.
Take a look at what happened with Replika. The company restricted the bot's ability to have sexual conversations, and we had hundreds of people posting on Reddit like the company had murdered their girlfriend.
The resolution criteria for a Metaculus question[1] attempt to explain what a "weakly" general AI might entail. The question will become obsolete, and the definition of AGI might need shifting, if some things turn out to be more or less difficult than previously assumed, though.
"For these purposes we will thus define "AI system" as a single unified software system that can satisfy the following criteria, all easily completable by a typical college-educated human.
Able to reliably pass a Turing test of the type that would win the Loebner Silver Prize.
Able to score 90% or more on a robust version of the Winograd Schema Challenge, e.g. the "Winogrande" challenge or comparable data set for which human performance is at 90+%
Be able to score 75th percentile (as compared to the corresponding year's human students; this was a score of 600 in 2016) on the full mathematics section of a circa-2015-2020 standard SAT exam, using just images of the exam pages and having less than ten SAT exams as part of the training data. (Training on other corpuses of math problems is fair game as long as they are arguably distinct from SAT exams.)
Be able to learn the classic Atari game "Montezuma's revenge" (based on just visual inputs and standard controls) and explore all 24 rooms based on the equivalent of less than 100 hours of real-time play (see closely-related question.)"
The thing is, our ability is limited by our understanding. AI may already be doing things we do not understand (which could be classified as AGI). Think of how dogs do not have the cognitive ability to understand the concept of the "future" or of tomorrow: there is a good chance AI is already doing things that are beyond our cognitive ability (including that of the smartest people working on the tech).
But I am sure that if we let two fairly good LLMs talk to each other, they'd shortly start talking about things we'd write off as hallucinations, but which the two LLMs would understand and take further.
Again, this is just my opinion and I have never worked on any LLMs. So I'm an outsider.
AGI can solve problems in any general context using its capabilities, including problems never seen before.
There's weak and strong AGI. Right now I'd argue we have weak AGI with ChatGPT and its plug-ins. The GPT-tech allows for general purpose problem solving, but it often makes mistakes and gets things wrong. We are on the path towards strong AGI, and I have no idea how long it will take to get there.
OpenAI and others are developing benchmarks to test this tech against various problems.
I wouldn't consider ChatGPT by itself, or with plugins, to be AGI. However, LangChain + long-term memory + automated agents? Yeah, that's f'n AGI, at least as it's aptly called: Baby AGI.
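The "LangChain + long-term memory + automated agents" combo can be sketched as a loop: a task queue, a memory of results, and a model call that both executes tasks and spawns follow-ups. This is a minimal illustration of the pattern, not LangChain's or BabyAGI's actual API; `fake_llm` stands in for a real model call.

```python
# Minimal sketch of a BabyAGI-style agent loop, under the assumption that
# an LLM is available behind fake_llm(). All names here are illustrative.
from collections import deque

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g. an API client request).
    return f"result for: {prompt}"

def baby_agi(objective: str, first_task: str, max_steps: int = 3) -> list[str]:
    tasks = deque([first_task])
    memory: list[str] = []  # real agents use a vector store for long-term memory
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        # Execute the task in the context of the overall objective.
        memory.append(fake_llm(f"{objective}: {task}"))
        # A real agent would ask the model to propose new tasks; we fake one.
        tasks.append(f"follow-up to '{task}'")
    return memory

print(baby_agi("research a topic", "make an outline"))
```

The loop terminates on `max_steps` rather than on any notion of "done", which is also roughly where the real systems struggle: deciding when the objective is actually achieved.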
You realize babies aren't as impressive as a PhD, right? I mean, they're cute and impressive in how they learn and pick things up, and Baby AGI is kinda like that. The hype is basically going gaga over a baby.
Personally, I think we should also distinguish human-level AGI from smart AGI. After all, the goal is not to make fake humans but to solve problems like cancer.
After watching how people are applying GPT-3.5 and GPT-4, I'd say both of those types of goals will be attempted, along with anything else that could be of use.
I’m not sure we have a proper definition of consciousness, so probably no.
But I suspect we’re five years from being able to create a self-directed simulation of a person that could behave in ways we consider autonomous. That is to say, it will be able to fool most people observing from the outside.
I have some ideas on how I’d go about it so I imagine better minds than mine are hard at work on the problem.
I’m not sure we should be building that, but I’m sure we are.
An AGI will be able to improve itself - i.e. the whole AI dev team at OpenAI will get fired, and only the policy/alignment people will remain (hopefully).
The simple definition is “it's faster for a manager with medium computer skills to spin up an instance than to train a human, for almost all desk jobs.” We look for other definitions because if an AI satisfying this definition showed up tomorrow, we'd all starve.