I think this conversation gets away from itself very quickly if you run with generic terms: data, prediction, and so on.
People think that NNs/ML/etc. work, and that the relevant questions are now philosophical: given that it works, how does it work?
But it doesn't work, in very silly, trivial ways. It's only taking averages over historical data, and that is all it can do.
As soon as you are alive to what thinking about things requires (i.e., a live embeddedness in an environment, imagining scenarios, etc.), it's trivial to expose the magic show. It isn't philosophical; it's quite literally just broken and doesn't work.
I cannot ask GPT, "do you like what I'm wearing?", or "what would it take to change your mind on drug legalisation?" -- and so on. Indeed, I can't ask it, "if I have seven eggs, should I make a cake with 6 and keep the 7th -- or should I just use 7?".
I can't ask it anything which actually requires it to reason with concepts, because it doesn't have any; and I can't ask it about anything happening now, because it is just summarising text that has already been produced.
The whole AI hype industry has a vested interest in the philosophy here, as it's premised on everything working. Almost nothing works. The idea that GPT presents any philosophical challenges is nearly absurd; it presents about as many as a shredder in a library.
> I can't ask it anything which actually requires it to reason with concepts, because it doesn't have any; and I can't ask it about anything happening now, because it is just summarising text that has already been produced.
I think you may place too much confidence in your subjective experience of reasoning as evidence of the actual machinery. It would be crazy to reject this view entirely, but it also seems possible to me that the whole experience of "reasoning with concepts" is not actually what we do at all, but merely what we experience while doing something else. Essentially, some variant on Dennett's intentional stance, but applied to ourselves.
I'm always really happy to get to a technical description -- because I think these philosophical replies make it seem like we're talking about a working system. And there isn't one.
So what I'm interested in is machine systems for which we have (1) no prior specification of the environment; and which nevertheless (2) anticipate permutations in the environment which aren't specified.
Intelligence is the solution to this: for (1), form concepts which carve out the environment using your body; and for (2), use these concepts to engage in counterfactual reasoning which simulates possible change.
I think a fatal issue for AI is that these aren't even in the class of problems being addressed. Talking about intentionality here gives the field vastly more credit than it deserves. We aren't even at the level of basic concept formation.
> So what I'm interested in is machine systems for which we have (1) no prior specification of the environment; and which nevertheless (2) anticipate permutations in the environment which aren't specified.
But there are no biological systems that do this either (other than to the extent that, by virtue of their embodiment, they may come with more of a built-in "understanding" of the (physical) environment).
Yes, I don't think intelligence is cognitive -- it's the content of cognition. Where does that come from? Essentially our bio-organic structure as distributed across the body, and especially the motor cortex.
In other words, the AI problem is unsolvable by any such system. It's the wrong problem. Cognition, as a formal structure, isn't intelligent.
> Yes, I don't think intelligence is cognitive -- it's the content of cognition. Where does that come from? Essentially our bio-organic structure as distributed across the body, and especially the motor cortex.
All right, so where does that (the organic structure including the motor cortex that creates cognition) come from?
I like your comments, even though I fall into the DL fanboy camp.
I hope that you read Gary Marcus's and Ernest Davis's book "Rebooting AI" - I think it would resonate with you, and it made me get a lot more interested in so-called "hybrid AI" systems.