
Humans are proof that machine intelligence can be improved quite a bit. We are just complicated machines, no?



Yes. But two things.

1. We don't know all the mechanisms a brain employs to achieve intelligence. We see billions of interconnected neurons and we assume, "Yeah, this might be generating intelligence."

2. We don't know if we are already at some fundamental limit of intelligence. You can see many instances in nature where a pattern emerged that maximizes some sort of efficiency (like the honeycomb pattern). So it may turn out that even if we transfer the process by which our intelligence works to a machine, it will have the same performance as an average human brain...


We're animals who have to worry about surviving and passing our genes on in a variety of social settings.

Machines are the tools we make to aid the above.


And your body and worries are just tools that help your genes reproduce.


And your genes are just tools that help the chemical environment they regulate reproduce.


And the chemical environment is just trying to maximize entropy.


It's almost like that chicken-and-egg scenario!


Yes, but we're not built on silicon, we weren't guided by curated data sets, we didn't have a deadline, and no parallel universe relies on our decisions for potentially life-and-death outcomes. I don't like this simple equality, and I think it misses the point. We might not get to us with this tech; the model might not be near enough.


I'm specifically talking about the strategies used in Deep / Machine learning to approximate intelligence through probability.
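To make "through probability" concrete, here's a minimal sketch (my own illustration, not from this thread; numpy and the scores are assumed) of how such systems emit graded beliefs rather than facts, via softmax:

    # Hypothetical illustration: a classifier's raw scores ("logits")
    # become a probability distribution over answers via softmax.
    import numpy as np

    def softmax(logits):
        exps = np.exp(logits - logits.max())  # subtract max for numerical stability
        return exps / exps.sum()

    scores = np.array([2.0, 0.5, -1.0])  # made-up class scores
    print(softmax(scores))  # ~[0.79, 0.18, 0.04]: a graded belief, not a fact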


Wait, how is that actually different from what our brains do? From what I know, our cognitive system is built in quite a similar fashion: probabilistic pattern matching with backpropagation, coupled with some "ad hoc" heuristic subsystems.
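For reference, this is what backpropagation concretely means in DL: a chain-rule error signal pushed backwards through the layers. A minimal sketch (my own, not the commenter's; numpy, a tiny 2-4-1 sigmoid net, and XOR are all assumed for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0,0],[0,1],[1,0],[1,1]], float)   # XOR inputs
    y = np.array([[0],[1],[1],[0]], float)           # XOR targets
    W1, b1 = rng.normal(size=(2,4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4,1)), np.zeros(1)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))
    lr = 0.5

    for _ in range(10000):
        h   = sigmoid(X @ W1 + b1)                    # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)           # error at the output...
        d_h   = (d_out @ W2.T) * h * (1 - h)          # ...propagated backwards
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)  # gradient-descent updates
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

    print(out.round(2))  # should end up near [[0],[1],[1],[0]]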


Eh, no. Our brains, and how the human mind works, are actually very poorly understood. Claiming we have a good idea of how our cognitive systems work under the hood is simply incorrect.


There is nothing like backpropagation in the brain, nor a probabilistic pattern matcher. There is evidence that a connectionist model is applicable, but learning is not deciphered, and there are aspects of it (neuronal excitability, local dendritic spiking, oscillations, up and down states, etc.) that do not translate at all to DL systems. That said, the increasing success of connectionist architectures does point to the conclusion that the brain is also a connectionist machine.
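To illustrate the gap the parent is pointing at, a hedged sketch (mine, not from the thread; numpy and all numbers are made up): a Hebbian update, the classic local learning rule, uses only activity the synapse itself can see, whereas a backprop-style update requires an error signal delivered back from the output, which is the part with no known biological counterpart:

    # Hebbian update: purely local -- "cells that fire together wire
    # together" -- only the pre- and post-synaptic activity is needed.
    # Backprop update: needs a target and a derivative from "elsewhere".
    import numpy as np

    pre, post = np.array([0.9, 0.1]), np.array([0.7])  # made-up activities
    w = np.zeros((2, 1))

    # Local rule: just the two activities this synapse observes.
    w_hebb = w + 0.1 * np.outer(pre, post)

    # Backprop-style rule: requires a globally routed teaching signal.
    target = np.array([1.0])
    error = (post - target) * post * (1 - post)  # error signal from the output
    w_bp = w - 0.1 * np.outer(pre, error)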


I'm not sure neuroscientists would quite agree with that.



