
Every time someone analyzes an AI system, they invariably conclude that it isn't really AI, but rather just a complicated system of different, strung-together technologies.

At what point do systems of sophisticated text-to-speech and grammar analysis technologies actually become AI?




There is a body of work in cognitive science that suggests that intelligence is what emerges from communication between unintelligent agents.

Marvin Minsky's book "The Society of Mind" is an older but pretty complete treatise on this idea.

http://en.wikipedia.org/wiki/Society_of_Mind

"In a step-by-step process, Minsky constructs a model of human intelligence which is built up from the interactions of simple parts called agents, which are themselves mindless. He describes the postulated interactions as constituting a "society of mind", hence the title."

I don't think consciousness or intelligence is merely binary. There can be, and are, varying degrees of intelligence all over nature. Dolphins and primates are easy to point to as having consciousness probably most similar to our own. Dogs, cats, wolves: they all have varying degrees of what we consider intelligence.

To take this even further, I'll pose this thought adventure. Is a single ant conscious? Is a single bee conscious? Hive systems at least appear to have a kind of intelligence. There are termite colonies with reasonably sophisticated air-conditioning systems. Are the chemical and physical messages sent back and forth between insect agents all that different from the communication between subsystems in our brains, or between the individual components of Siri's architecture?

All of this is very theoretical, but it's at least a fine thought adventure for a Friday morning.


> but rather just a complicated system of different, strung-together technologies.

Sounds like the human brain to me. Ancient parts connected to more modern parts. Modules for specific tasks. Not any kind of monolithic entity, but just this mish-mash of elements that kinda sorta works, especially if we ignore the unpatched bugs of irrationality, superstition, mental illness, common logical fallacies, emotional reasoning, etc.


When they can feel emotions.

The one place where AI actually is a term of art -- gaming -- is also uniquely distinguished in that, for both human- and computer-controlled agents, the goals are definite and the number of possible actions is limited.

Out here in the real world, things are a lot fuzzier and more complicated. Accordingly, we are not willing to grant a bot AI status until it can demonstrate competence at "real world" goals (presumably including having relationships with other sapient beings in the world).


What do emotions have to do with intelligence?


IMO, if a system can make intelligent decisions based on its input, then it's intelligent; and since it's man-made, it's also artificial.


Cool.

What's an "intelligent decision"?


I'm afraid I can't provide a definitive definition; I'm not even sure there is one that everybody would readily agree on. But what I had in mind was an understanding of intentions or meaning from context, rather than having to be told explicitly what to do.


"You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" John von Neumann, 1948


I'm not sure what your point is; I gave my opinion of when a system of this kind could be said to have AI.


Whether or not you think it constitutes "artificial intelligence", I think it would be silly to argue that Siri is not making "intelligent decisions".


When they pass a Turing test.



