The whole framework is extremely abstract. Unless human brains are doing magic beyond what a Turing machine can do, whatever humans are doing probably does boil down to something resembling the Solomonoff induction AIXI is based on. It's also pretty easy to observe that humans are running some kind of very clever approximation rather than the full abstract AIXI, since we manage to do useful things in reasonable time with the amount of sensory information we get.
What humans do belongs to the part of the problem where you need to figure out how to make this stuff efficiently computable.
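To make "approximation" concrete, here's a toy sketch (Python, purely illustrative, all names made up): the universal class of programs is replaced by a tiny hypothesis class of repeating bit patterns, but the Solomonoff-style 2^(-length) simplicity prior and the mixture prediction survive intact.

    # Toy approximation of Solomonoff induction: instead of mixing over
    # all programs, mix over repeating binary patterns up to a fixed
    # length, weighted by the 2^(-length) simplicity prior.
    from itertools import product

    def hypotheses(max_len=4):
        for n in range(1, max_len + 1):
            for pat in product("01", repeat=n):
                yield "".join(pat)

    def predict_next(data, max_len=4):
        # sum the prior mass of every hypothesis consistent with the data,
        # split by what each one predicts for the next bit
        weights = {0: 0.0, 1: 0.0}
        for pat in hypotheses(max_len):
            if all(data[i] == pat[i % len(pat)] for i in range(len(data))):
                nxt = int(pat[len(data) % len(pat)])
                weights[nxt] += 2.0 ** (-len(pat))
        total = weights[0] + weights[1]
        return {b: w / total for b, w in weights.items()} if total else None

    print(predict_next("001"))  # {0: 0.75, 1: 0.25}: shorter patterns dominate

Real computable approximations, like the context-tree weighting that MC-AIXI-CTW is built on, work the same way, just with a vastly cleverer hypothesis class.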
Well, that's the point. Take away the weasel word 'magic', and saying that humans are doing what a Turing machine can do is the same thing as saying they're algorithmically representable.
Turing machines are not handed down to us by God; there is no reason to believe they are some kind of Ultimate Representation of Everything.
It isn't just that you can never be sure it's true; you can also never be sure it's false. For anything you find that is potentially uncomputable, you never know whether you simply haven't figured out how to compute it yet.
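That asymmetry is just semi-decidability, and you can see it in the shape of any brute-force search for an algorithm. A sketch (Python, with a toy expression language standing in for "all programs"): if the sequence is computable in the language, the search eventually halts with a program; if it isn't, the search runs forever, and no finite amount of waiting distinguishes "uncomputable" from "haven't found it yet".

    # Semi-deciding "is this sequence computable?" by enumerating programs.
    # Toy program space: arithmetic expressions in n built from +, * and
    # small constants. Finding a match settles the question one way;
    # no finite run can ever settle it the other way.
    from itertools import count

    def exprs(depth):
        yield from ("n", "0", "1", "2")
        if depth == 0:
            return
        for left in exprs(depth - 1):
            for right in exprs(depth - 1):
                for op in ("+", "*"):
                    yield f"({left}{op}{right})"

    def search(target):
        for depth in count():  # loops forever if no expression fits
            for e in exprs(depth):
                if all(eval(e, {"n": n}) == t for n, t in enumerate(target)):
                    return e

    # finds an expression equivalent to n*n + 1, e.g. (1+(n*n))
    print(search([1, 2, 5, 10, 17]))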
Doesn't contemporary cognitive science pretty much go with the hypothesis that human brains are Turing-equivalent? Unless we go with magic, hypercomputing brains would require hypercomputing physics, and so far everything we know about physics seems to be Turing-computable (if slow to simulate, in the quantum case), outside pathologies like time travel.
We don't know if Turing-equivalent formalisms are the Ultimate Representation of Everything, but they seem to be by far the best Representation of Everything anyone has found so far.
>We don't know if Turing-equivalent formalisms are the Ultimate Representation of Everything, but they seem to be by far the best Representation of Everything anyone has found so far.
Well, sure... and Wolfram's 2-state, 3-symbol Turing machine means that an incredibly tiny machine with almost no internal state is equivalent to any computer we've ever designed. However, that doesn't provide us any insight into how thinking works, or how to write programs, or anything really. It's a mathematical curiosity. In reality, Turing machines themselves are a terrible representation of algorithms: despite being able to represent anything, anyone who tries to think of writing code in terms of simple rules for writing symbols on a strip of tape is going to lose their mind.
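For a taste of that, here's roughly what raw Turing machine programming looks like (a minimal Python simulator, names and rule encoding invented for illustration). The entire transition table below does nothing but add 1 to a binary number; now imagine writing a compiler, let alone a mind, at this level.

    # A Turing machine that adds 1 to a binary number. Each rule:
    # (state, symbol_read) -> (symbol_to_write, head_move, next_state)
    RULES = {
        ("right", "0"): ("0", +1, "right"),  # scan right to the end
        ("right", "1"): ("1", +1, "right"),
        ("right", "_"): ("_", -1, "carry"),  # hit blank, start carrying
        ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, keep carrying
        ("carry", "0"): ("1", 0, "halt"),    # 0 + carry = 1, done
        ("carry", "_"): ("1", 0, "halt"),    # overflow into a new digit
    }

    def run(tape_str):
        tape, head, state = dict(enumerate(tape_str)), 0, "right"
        while state != "halt":
            write, move, state = RULES[(state, tape.get(head, "_"))]
            tape[head] = write
            head += move
        return "".join(tape[i] for i in sorted(tape)).strip("_")

    print(run("1011"))  # -> 1100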
Aside from the obvious problem (AIXI is uncomputable), there's no real reason to believe that it represents a useful way to analyze the problem of intelligence. For one thing, no real progress has been made on the Hutter Prize since its inception -- prediction by partial matching continues to win, and it was developed in the '80s.
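(For reference, the core of PPM is simple: keep counts of which symbols followed each context, and when the current long context has never been seen, fall back to a shorter one. A stripped-down sketch in Python, with made-up names, which skips real PPM's escape-probability blending and arithmetic coding:)

    # PPM-flavored predictor: per-context symbol counts with backoff to
    # shorter contexts. Real PPM blends context orders via escape
    # probabilities and feeds the result to an arithmetic coder; this
    # just uses the longest context that has any counts.
    from collections import Counter, defaultdict

    class TinyPPM:
        def __init__(self, max_order=2):
            self.max_order = max_order
            self.counts = defaultdict(Counter)  # context -> next-symbol counts

        def update(self, history, sym):
            for k in range(self.max_order + 1):
                if len(history) >= k:
                    self.counts[history[len(history) - k:]][sym] += 1

        def predict(self, history):
            for k in range(min(self.max_order, len(history)), -1, -1):
                ctx = history[len(history) - k:]
                if ctx in self.counts:
                    total = sum(self.counts[ctx].values())
                    return {s: c / total for s, c in self.counts[ctx].items()}
            return {}

    model = TinyPPM()
    text = "abracadabra"
    for i, ch in enumerate(text):
        model.update(text[:i], ch)
    print(model.predict("abr"))  # {'a': 1.0}: "br" has only ever preceded 'a'

Since compression and prediction are two sides of the same coin, this is exactly the kind of model the Hutter Prize is a benchmark for.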
Even at a low level, the brain is compressing and throwing out such massive amounts of data that I don't think it's fair to call it Solomonoff induction anymore.