> it could in theory given enough time and resources.
In theory, given enough time and resources, anyone can defeat any grandmaster in chess: just compute the extended tree form of the game and run the minimax algorithm.
The "given enough time and resources" clause makes everything that follows meaningless, unless a reasonable algorithm is presented.
> It is like quantum computing.
It is absolutely not like quantum computing. Shor's algorithm is something you can look up right now. It is precise and well-defined. The problems we are facing with quantum computation are related to the fact that we can't really build reliable hardware. But we know that given such machines the algorithm would work. We have precise bounds and requirements on those machines.
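For concreteness, the well-defined part is easy to sketch: factoring reduces classically to order finding, and the quantum machine is only needed to make the order-finding step fast. The brute-force `find_order` below stands in for that quantum subroutine; the names and structure are illustrative, not any real library's API:

```python
from math import gcd
from random import randrange

def find_order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n); this is the step a quantum computer speeds up."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n):
    """Return a nontrivial factor of an odd composite n that is not a prime power."""
    while True:
        a = randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g              # lucky guess already shares a factor
        r = find_order(a, n)
        if r % 2 == 1:
            continue              # need an even order
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue              # a^(r/2) = -1 (mod n) gives nothing
        return gcd(y - 1, n)      # nontrivial factor guaranteed here

print(shor_factor(15))  # prints 3 or 5
```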
As far as AGI goes, we have absolutely no idea. There's lively debate on whether anything we have done even counts as significant advancement towards AGI.
> In theory, given enough time and resources, anyone can defeat any grandmaster in chess: just compute the extended tree form of the game and run the minimax algorithm.
Yes, that's why we're considered to be generally intelligent. It is exactly the point, and not at all meaningless. Right now there's no machine that can come up with the idea of computing the extended tree form of the game and running minimax over it. If there were such a machine, it would be considered AGI.
> It is absolutely not like quantum computing.
I meant it in the sense that just because something has actually been achieved, it doesn't mean it's as powerful as the theory describes. In theory you can use Shor's algorithm to break encryption; in practice the devices we have today have trouble with two-digit numbers.
The same principle goes for AGI. If someone releases an AGI system today, it doesn't mean that tomorrow we'll see a Boston Dynamics robot hop on a bicycle to its day job as a Disney movie art director. The world would most likely not change at all, at least not for a while; many people would not recognise the significance, and some might not even accept that it is AGI.
> As far as AGI goes, we have absolutely no idea. There's lively debate on whether anything we have done even counts as significant advancement towards AGI.
You might think that, and that says something about which side of the debate you're on. We're commenting here on the thread of an article about DeepMind asserting that reinforcement learning is enough to reach general AI. If that's true (and I think it is), then we've probably reached general AI already.