I often hear the idea that digital is faster than biology. This seems mostly derived from small mathematical computations.
Yet the current form of large language model computation seems much, much slower than our biology. Making the models even larger will likely be necessary to come closer to human-level ability, but what about the speed?
If this is the path to GI, the computational requirements will need to be very high and very centralized.
Are there ways to improve this in its current implementation other than caching and more hardware?
OpenAI's modus operandi is basically "does it get better if we make it bigger?". Of course, they are constrained by economic factors like the cost of training and inference, but given the choice between making the model better or making it more efficient, they choose the better model.
I believe that over the coming years (and decades) we will figure out that a lot of this can be done much more efficiently.
Another problem with the analogy to humans is obviously that these models know much more than any one human can remember. They are trained on our best approximation of the sum total of human knowledge. So any comparison to a single human will always be fraught with problems.
This is probably not the path to GI. First, we would need a precise scientific formalism to accurately describe intelligence, which currently does not exist. Second, it may or may not turn out to be tied to consciousness, and there's a thing called the hard problem of consciousness, which may well not be solvable at all.
It might end up being the kind of thing where, to accurately model consciousness, you'd need a computer the size of the universe running for something like 13.8 billion years. But that's just my own pure speculation; I don't think anybody even has a clue where to start tackling this problem.
This is not to discourage progress in the field. I'm personally very curious to see where all this will lead. However, if I had to place a bet, it wouldn't be on GI coming any time soon.
Seems like it, doesn't it? I'd be curious to see if and how we can get there.
What I was highlighting are some serious challenges along the way, challenges that could lead us to insights about why this is harder than we think, or reveal factors we aren't considering.
It's very easy to say "the brain is made of fundamental particles and forces, so all we have to do is create a similar configuration or a model of them," but it's precisely in the task of understanding the higher-order patterns of those fundamental particles and forces that we run into serious challenges that as of yet remain unaddressed.
The AI/ML way of approaching this is more of a top-down approach, where we sort of ignore the fact that we don't understand how our own brains/minds work and just try to build something kind of like them in the folksy sense. I'm not discouraging that approach, but I'm very curious to see where it will lead us.