Apologies for the open-ended question, but does anyone know if there is a term for something like Turing-completeness within AI, where a certain level of intelligence can simulate any other type of intelligence the way our brains do?
For example, using De Morgan's theorem, we can build any logic circuit entirely out of NAND or NOR gates:
https://www.electronics-tutorials.ws/boolean/demorgan.html
https://en.wikipedia.org/wiki/NAND_logic
https://en.wikipedia.org/wiki/NOR_logic
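To see that universality concretely, here's a minimal Python sketch (mine, not taken from those links): every other gate falls out as a composition of NAND alone.

```python
def nand(a, b):
    return not (a and b)

# Each standard gate built purely from NAND.
def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor_(a, b): return nand(nand(a, nand(a, b)), nand(b, nand(a, b)))

# Exhaustive check over all boolean inputs.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
```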
Dall-E 2's level of associative comprehension is so far beyond the old console psychology bots pretending to be people that I can't help but wonder if it's reached a level where it can make any association.
For example, I went to an AI talk about 5 years ago where the guy said that a dozen algorithms like K-Nearest Neighbor, K-Means Clustering, Simulated Annealing, Neural Nets, Genetic Algorithms, etc. can all be adapted to any use case. They just have different strengths and weaknesses. At that time, all that really mattered was how the data was prepared.
I guess fundamentally my question is, when will AGI start to become prevalent, rather than these special-purpose tools like GPT-3 and Dall-E 2? Personally I give it 10 years of actual work, maybe less. I just mean that to me, Dall-E 2 is already orders of magnitude more complex than what's required to run a basic automaton to free humans from labor. So how can we adapt these AI experiments to get real work done?
This is my feeling as well, that the rise of AGI conveniently coincides with the end of the world. I find it demoralizing because so many trends look just like that, where solving the ultimate problem results in the destruction of the context in which the original problem resided.
> Apologies for the open-ended question, but does anyone know if there is a term for something like Turing-completeness within AI, where a certain level of intelligence can simulate any other type of intelligence the way our brains do?
> So how can we adapt these AI experiments to get real work done?
You're missing a step here: the difference between "imagining doing something" and "actually doing something". An ML model can produce thoughts, but that isn't necessarily the same direction of research as actually doing things in real life, much less becoming superhuman and taking over the world, etc.
In your imagination, everything always goes your way.
Thank you, that's just the sort of breadcrumb I was looking for!
I'm in a bit of a rush and don't know the term for this offhand, but I remember hearing that a neural network with a single hidden layer can approximate anything a multi-layer one can (the universal approximation theorem, I believe).
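If that's the result I think it is, here's a minimal NumPy sketch of its flavor: XOR is impossible with no hidden layer, yet a single hidden layer of two ReLU units computes it exactly with hand-picked weights.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Hand-picked weights, no training: one hidden layer, two ReLU units.
W1 = np.array([[1.0, 1.0],   # each column feeds one hidden unit: x1 + x2
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])   # second unit only fires once x1 + x2 > 1
W2 = np.array([1.0, -2.0])   # output = h1 - 2*h2

def xor_net(x):
    h = relu(x @ W1 + b1)    # hidden activations
    return h @ W2            # linear readout

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", xor_net(np.array(x, dtype=float)))   # 0, 1, 1, 0
```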
There are probably more insights like this out there. These equivalences allow us to think in abstractions that get us above the minutiae of fine-tuning these algorithms so that we can see the big picture. I think.
> does anyone know if there is a term for something like Turing-completeness within AI, where a certain level of intelligence can simulate any other type of intelligence like our brains do?
Almost everything stated here is simply wrong or misinformed.
> For example, I went to an AI talk about 5 years ago where the guy said that a dozen algorithms like K-Nearest Neighbor, K-Means Clustering, Simulated Annealing, Neural Nets, Genetic Algorithms, etc. can all be adapted to any use case. They just have different strengths and weaknesses. At that time, all that really mattered was how the data was prepared.
How do you suppose KNN is going to generate photorealistic images? I don't understand the question here.
> I guess fundamentally my question is, when will AGI start to become prevalent, rather than these special-purpose tools like GPT-3 and Dall-E 2?
Actual AGI research is basically nonexistent, and GPT-3/Dall-E 2 are not AGI-level tools.
> Personally I give it 10 years of actual work, maybe less
Lol...
> I just mean that to me, Dall-E 2 is already orders of magnitude more complex than what's required to run a basic automaton to free humans from labor.
I appreciate your sentiment but can't agree with it. What I mean is, if I had the resources to not have to work for 10 years, I give myself greater than a 50% chance of building an AGI. So I don't understand why the world is taking so long to do it.
The flip side is that these narrow use cases progressed so quickly that we have to worry about stuff like deep fakes now.
Something's not right here.
As a programmer, I feel that what went wrong is that we invested too much in profit-driven endeavors, basically stuff that's mainstream. To be blunt, the academic side of me doesn't care about use cases. I care about theory, formalism, abstraction, reproducibility, basically the scientific method. From that perspective, all AI is equivalent: it takes input, searches a giant solution space using its learned context as clues, and returns the closest solution it can find in the time given. It's an executable piping data around. The rest is hand-waving.
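To make that concrete, here's a minimal sketch of the "search a solution space under a time budget" framing (the function names are mine, purely illustrative, not any real library's):

```python
import random
import time

def search(initial, score, neighbors, time_budget=1.0):
    """Greedy hill climbing: walk a solution space, keep the best
    candidate found before the time budget runs out."""
    best, best_score = initial, score(initial)
    deadline = time.monotonic() + time_budget
    while time.monotonic() < deadline:
        candidate = random.choice(neighbors(best))
        if score(candidate) > best_score:
            best, best_score = candidate, score(candidate)
    return best

# Toy use: search the integers for a value near 42.
print(search(0, score=lambda x: -abs(x - 42),
             neighbors=lambda x: [x - 1, x + 1]))
```

Loosely, simulated annealing changes the acceptance rule, genetic algorithms change how candidates are generated, and gradient descent picks the neighbor along the gradient; that's the sense in which they feel interchangeable.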
And given that, the stuff that AI is doing now is orders of magnitude more complex than running a Roomba. But a robot vacuum actually helps people.
To answer your question, KNN could get there if the user reshapes the image data into a different coordinate system where the data can be partitioned (all inference comes down to partitioning).
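As a toy sketch of that idea in scikit-learn (concentric circles standing in for "image data"; the polar reshaping is the coordinate change I mean):

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Two noisy concentric rings, labeled by which ring a point belongs to.
X, y = make_circles(n_samples=500, noise=0.1, factor=0.5, random_state=0)

# Reshape Cartesian (x, y) into a single polar feature: distance from origin.
# In this coordinate system one axis alone partitions the two classes.
R = np.hypot(X[:, 0], X[:, 1]).reshape(-1, 1)

for name, features in [("raw (x, y)", X), ("radius only", R)]:
    Xtr, Xte, ytr, yte = train_test_split(features, y, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
    print(f"{name}: accuracy = {knn.score(Xte, yte):.2f}")
```

KNN is local, so it does fine on the raw coordinates here too; the point is that the right coordinate system collapses the problem to a one-dimensional partition.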
Tensors are about reshaping data into a coordinate system where relationships become obvious, like going from rectangular to polar coordinates, or applying a Fourier transform.
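For the Fourier case, a minimal NumPy sketch: a periodic relationship that's invisible sample-by-sample becomes a single spike in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 1000                                  # samples per second
t = np.arange(0, 1, 1 / rate)
# A 5 Hz sine buried in noise of comparable amplitude.
signal = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 1, t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / rate)
print("dominant frequency:", freqs[1:][np.argmax(spectrum[1:])], "Hz")
```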
My frustration with all of this is the same one I have with physics or any other evolving discipline. The lingo obfuscates the fundamental abstractions, creating artificial barriers to entry.
Edit: I should add a disclaimer here that my friend and I worked on a video game for like 11 years. I'm no expert in AI; I'm just acutely sensitive to how the realities of the workaday world waste immeasurable potential at scale.