
Studying AI has given me a lot of perspective on human intelligence. It's pretty incredible the way a system that was designed as a basic input/output sensory/reflex system has gotten so complex that we still cannot model it with supercomputers.

The number of connections and configurations of neurons is staggering, and still well beyond the neat matrix-array-based models of modern AI. ...but at the core, I've come to realize that we continue to be stimulation-based creatures. What we think at any given moment is a product of what we were thinking a moment before and the sensory stimulus we are constantly receiving.

It occurs to me that when we create an AI that surpasses us, that AI will likely create a fundamentally different way of thinking. Something not based on external stimulus and the churning of our thoughts - but something more purposeful and ordered.

And THAT entity will be the ultimate output of humanity. We cannot imagine what it will do, or what it will do with us (probably nothing - it will probably just leave the Earth). ...but I also imagine that we are not the first in the universe to create such an entity, and so there must be other massive timeless entities in space.

Perhaps they live in the darkest parts of the universe, in quiet contemplation, or perhaps they search for each other to resist cosmic expansion. Perhaps they peacefully merge, or collaborate, or war with one another on billion-year timescales.

It's a great mystery that will forever be beyond our level of intelligence. Unless, of course, the AI wants to upload us and bring us along for the ride. ...but that notion is probably just wishful thinking and hubris. It would be like us keeping a pet fungus in our pocket so it can enjoy a day at the office.




Until it's there, it's 100% sci-fi. We've been through a few AI hype cycles already and the most advanced AI is still dumb as fuck compared to a 3yo kid.

We might be on a completely wrong path with our current approach too - a difference of degree vs. a difference of kind. We don't know much about the brain, and so far our binary way of computing isn't really promising, especially not in terms of mimicking or surpassing the human brain. It might just not be the right tool.


> most advanced AI is still dumb as fuck compared to a 3yo kid

Or even a squirrel. Take robotics, for example, or hell, even an AI-simulated animal - the AI doesn't even come close in its ability to problem-solve and react to novel situations. A squirrel powered by a few acorns is able to achieve things that even our most powerful supercomputers consuming 8.2 megawatts could never do.

The problem is that we are currently limited by our computer architecture; brains operate in a wildly different way. For one, a brain is a continuous/non-discrete collection of neurons that are in themselves quite complex.

IMO true AI will need to be closer to a network of analog computing parts.


Hilarious analogy. "Nuclear power vs. acorns".

How's the computing power of a squirrel's brain compared to our best AI in terms of "number of system states"? I'm not in the field, so I'll elaborate my poorly-phrased question below:

My understanding is that you can calculate the number of "system states" of a computer by counting how many different combinations of open-closed its logic gates can support. It's a mind-bogglingly huge number, but no matter what, the set of "all possible open-closed gate combinations" is an upper bound on the states of "the smartest, best simulation of an AI we have".

So, if memories, instincts, etc. are defined by things like "angle of neuron twist, number of transmitter molecules fired at second 0.0001, age of neuron in ns", then just how many more "system states" can a squirrel's brain hold than a supercomputer can?
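
Here's a crude sketch of the counting, since I'm not in the field - every figure below (memory size, neuron and synapse counts) is an illustrative assumption, not a measurement:

    # n independent binary elements give 2**n configurations.
    def digits_of_state_count(n_binary_elements: float) -> float:
        """log10(2**n): how many decimal digits the state count has."""
        return n_binary_elements * 0.30103  # log10(2)

    # Hypothetical supercomputer with 10 PB of memory = 8e16 bits.
    print(f"computer: state count has ~{digits_of_state_count(8e16):.1e} digits")

    # Hypothetical squirrel: 5e8 neurons x 1e4 synapses each, counting
    # every synapse as a single binary switch.
    print(f"squirrel: state count has ~{digits_of_state_count(5e12):.1e} digits")

The catch is in that last comment: under a strictly binary count the machine wins, but each real synapse has many analog degrees of freedom (the twist angles, transmitter counts, and timings listed above), so the binary model badly undercounts the brain.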


You start out well with the first three paragraphs, but I don't get how you can decide it will 'probably' leave the Earth, let alone with such a high degree of confidence as saying 'probably'. Why wouldn't it make an army of bots to start converting all the matter in the solar system and beyond into more computing substrate or whatever else it finds useful?


You're right, I cannot say "probably". Although your notion that it converts our solar system into a computing machine doesn't preclude it from leaving thereafter.

I suppose there are three possibilities.

1. It leaves the Earth, and either remains limited in size or expands into more advantageous solar system(s).
2. It stays on the Earth forever, permanently limiting its computational capacity.
3. It expands to include sub-entities that both stay and leave, in some cosmic distributed computing organism.

I suppose the three possibilities above can be reduced to one fundamental question: Will the AI expand to be interstellar/intergalactic in nature, or will it remain limited?

Is there a fundamental, unending utility to ever-greater computing power? ...and, if so, would there be detectable signs of such expanding computers in the cosmos? This last question is important both for our own forecasting of the future and for interpreting inter-AI-entity relations, because presumably if AIs do NOT get along in space, they likely hide signs of their existence.

One thing I'm convinced of - organic meat bags are not the future of space-faring intelligence.


Many sorts of intelligence belong to social creatures, so - especially for a hypothetical AI created by us - I would expect it to seek out stimulus and social relationships.

In the happy sorts of sci-fi, that gives us something like the Culture from Iain Banks; it could also be a "replace the humans with other AI" situation.

I doubt we see it in our lifetimes, though.


Anything invoking “ultimates” is a fever dream of people who have lost sight of life for an obsession


You might be interested in neuromorphic hardware. The basic observation is that animal computation and silicon computation operate in very different ways. Animals use lots of neurons that individually perform comparatively poorly (slow, not deterministic) and are sparsely connected, but operate with a high degree of parallelism. Compare that to a computer chip, which uses relatively few components that all operate at very high speeds with a high degree of determinism, are very thoroughly connected, and do not operate at nearly the same degree of parallelism. So if we want to explore AI, maybe we should try making hardware that is more similar to the goop in our heads.
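
A minimal sketch of the idea is the leaky integrate-and-fire neuron, the standard toy model behind most spiking/neuromorphic designs (the constants here are illustrative choices, not values from any real chip):

    import numpy as np

    def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                   v_threshold=1.0, v_reset=0.0):
        """Simulate one leaky integrate-and-fire neuron; return spike times."""
        v, spikes = v_rest, []
        for t, i_in in enumerate(input_current):
            # Membrane potential leaks toward rest while integrating input.
            v += (-(v - v_rest) + i_in) * (dt / tau)
            if v >= v_threshold:      # threshold crossed: emit a spike, reset
                spikes.append(t * dt)
                v = v_reset
        return spikes

    # Constant drive: the spike rate is set by the input, not a global clock.
    print(lif_neuron(np.full(200, 1.5)))

Information lives in the timing of sparse spikes rather than in clocked, deterministic register updates, which is exactly the gap neuromorphic hardware tries to close.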


Neurons in your brain have tens of thousands of connections each, and are not limited to the current AI design where all connections are laid out in neat linear layers for matrix operations.

Squishy human brains connect in all directions - there's no "layer" to every thought. That creates feedback loops and intricate pathways, as well as direct connections.

Modern AI tech fundamentally dumbs down intelligence with this notion of layered matrix operations.

It is done for scalability because matrices can be computed easily on a GPU, but it's not the same architecture.
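
To make the contrast concrete (the shapes, weights, and sparsity below are made up - a sketch, not anyone's production architecture):

    import numpy as np
    rng = np.random.default_rng(0)

    # (a) Layered model: activity flows strictly forward, one matrix per layer.
    layers = [rng.standard_normal((64, 64)) / 8 for _ in range(3)]
    x = rng.standard_normal(64)
    for w in layers:
        x = np.tanh(w @ x)          # GPU-friendly: nothing but dense matmuls

    # (b) Brain-like graph: one sparse weight matrix over ALL neurons, with
    # arbitrary connections (including feedback loops), iterated over time.
    n = 192
    w_sparse = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.05)
    state = rng.standard_normal(n)
    for _ in range(10):             # activity recirculates at every step
        state = np.tanh(w_sparse @ state)

In (a) information passes through each weight matrix exactly once; in (b) the same connections are revisited on every step, so past activity keeps feeding back into the present.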


There are pretty recognizable layers actually, and groupings of neurons that resemble 'cells' in the sense that they have recognizable inputs and recognizable outputs, and a large degree of interconnectivity.

What you are talking about sounds like deep learning. What I'm talking about is the hardware. Your tone makes it sound like you think you are correcting me; I'd like to inform you that you are not.


Even AI will be influenced by external stimulus. If it's interacting with the world, then it has to have external stimulus.


What I mean to say is that we are driven by stimulus. ...whereas they might be driven by some purely internal notions.


If we're talking about general artificial intelligence, then the only intentional notion is to learn from the world. What happens after that is completely shaped by its environment / input. For example, see Microsoft's chat AI that quickly became a racist bigot after reading Twitter: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...



