> Stochastic processes and control theory, as in Çinlar or Bertsekas, will definitely contribute towards strong AI, if we ever achieve that.
My view is that for high-end applications of computing now, drawing from and building on Çinlar or Bertsekas is about the best we can do and, due to current computing and the Internet, suddenly terrific.
But my view is that something real in AI, say, as good as a kitty cat, ..., a human, will use very different approaches; and if there is some math in the basic core programming, it will be darned simple.
So, my current view is that the good approaches to such AI will make direct use of little or no pure or applied math. Instead, my guess is that animal ... human intelligence is just some darned clever programming, rediscovered and re-refined many times so far here on Earth. From those many times, my guess is that there is basically one quite simple way to do it.
My guess is that the sensory inputs first feed data that becomes, in the brain, essentially nouns: floor, rock, water, etc. Early on the data on the nouns is quite crude, but later, with more experience, it gets refined. E.g., a kitty cat quickly learns that floors are solid to stand and run on, and that some are shaky and might result in a fall. Then, with more input and experience, some verbs are combined with some of the nouns. The strength of the combining comes mostly just from experience; yes, we could write out some simple strength-updating algebra. But to be cautious, the learning is deliberately slow: e.g., not everything round on the floor is good to eat.
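One way to write out that "simple strength-updating algebra" is an exponential moving average with a deliberately small learning rate, so one lucky bite barely moves the estimate. This is only a hypothetical sketch of the idea, not a claim about how brains do it; all the names here are made up for illustration:

```python
# Hypothetical sketch: noun-verb association strengths, updated slowly
# from experience, as in the "strength updating algebra" idea above.

class Associations:
    def __init__(self, rate=0.05):
        # A small rate makes the learning deliberately slow and cautious.
        self.rate = rate
        self.strength = {}  # (noun, verb) -> strength in [0, 1]

    def observe(self, noun, verb, outcome):
        """outcome: 1.0 if the verb 'worked' with this noun, else 0.0."""
        key = (noun, verb)
        old = self.strength.get(key, 0.5)  # start uncommitted
        self.strength[key] = old + self.rate * (outcome - old)

a = Associations()
# "Not everything round on the floor is good to eat": a few good
# experiences nudge the strength only slightly above uncommitted.
for _ in range(3):
    a.observe("round-thing", "eat", 1.0)
print(round(a.strength[("round-thing", "eat")], 3))  # → 0.571
```

Only repeated, consistent experience pushes the strength near 1.0, which matches the cautious-learning point above.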
There is a continual process to simplify this data, i.e., a form of data compression, into causality, e.g., learning about gravity. The learning is good enough to identify the concept of gravity as the cause that makes things fall and to reject irrelevant data, like just what things are falling from -- a table, a window seat, the top of a BBQ pit, a tree limb, the second-floor landing, etc. It also rejects night, day, hot, cold, and other irrelevant variables -- that's smarter than current multivariate curve fitting, which has a tough time appraising which variables are likely irrelevant.
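A crude sketch of that rejection step, under the assumption that variables and outcomes are binary: keep a candidate variable only if the outcome actually differs depending on it, and reject the ones (night/day, what you fell from) that carry no signal. The threshold and the tiny dataset are, of course, hypothetical:

```python
# Hypothetical sketch of "compression into causality": keep only the
# variables the outcome tracks; reject the irrelevant ones.

def relevant_variables(observations, outcome_key, threshold=0.3):
    """observations: list of dicts of 0/1 variables, including the outcome."""
    keys = {k for obs in observations for k in obs if k != outcome_key}
    keep = set()
    for k in keys:
        with_k = [o[outcome_key] for o in observations if o.get(k)]
        without_k = [o[outcome_key] for o in observations if not o.get(k)]
        if with_k and without_k:
            # How much does the outcome rate change with this variable?
            gap = abs(sum(with_k) / len(with_k)
                      - sum(without_k) / len(without_k))
            if gap >= threshold:
                keep.add(k)
    return keep

obs = [
    {"unsupported": 1, "night": 0, "falls": 1},
    {"unsupported": 1, "night": 1, "falls": 1},
    {"unsupported": 0, "night": 1, "falls": 0},
    {"unsupported": 0, "night": 0, "falls": 0},
]
print(relevant_variables(obs, "falls"))  # → {'unsupported'}; night is rejected
```

Real curve fitting struggles here partly because it keeps a coefficient for every variable; the point above is that the learner instead discards the variable entirely.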
If I were going to program AI, that would be the framework I would use. I regard the learning as close to a bootstrap operation -- the first learning is very simple and crude but permits gathering more data, refining that learning, and doing more learning. To get some guesses on the details, watch various baby animals and humans as they learn.
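The bootstrap idea above can be sketched as a loop, assuming a hypothetical observe-and-refine step: the first estimate is crude, but each pass uses it to take in more data and refine it. A minimal sketch, not a claim about the real mechanism:

```python
# Hypothetical sketch of the bootstrap: crude first learning permits
# gathering more data, which refines the learning, which permits more.

def bootstrap_learn(true_value, passes=5, rate=0.3):
    estimate = 0.0  # the first learning: very simple and crude
    history = []
    for _ in range(passes):
        # "Gather more data": observe the world (here noiselessly; a
        # real learner's observations would be partial and noisy).
        observation = true_value
        # "Refine that learning": move the estimate toward what was seen.
        estimate += rate * (observation - estimate)
        history.append(round(estimate, 3))
    return history

print(bootstrap_learn(1.0))  # each pass gets closer to the true value
```

The point of the loop is the ordering: no pass needs to be good, only good enough to make the next pass possible, which is roughly what watching baby animals learn suggests.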
I see no real role for math or for anything I've heard of in current ML/AI, and I don't think it's much like the rules in expert systems. And my guess is that the amount of memory needed is shockingly small and the basic processing surprisingly simple.