
To play devil's advocate: let's say in 2030 we can fully simulate a human brain and it works. Let's also assume it runs 10000x faster than wetware (a highly conservative estimate?). That means in about 1 yr it should be able to assimilate as much info and experience as a 30yr old human. After that it could use its 10000x speed advantage to effectively have the equivalent of 10000 30yr old hackers looking for exploits in all systems.

I'm not saying that will happen or is even probable, but when A.I. does happen it's not inconceivable that it could easily take over everything. I doubt most current state actors have 10k engineers looking for exploits. And with A.I., that number will only increase as the A.I. is duplicated or expanded.




Let's say in 2030 the aliens invade and it works. Let's also assume they're 10000x more powerful than wetware (a highly conservative estimate?). That means in about 1 yr they should be able to attack us as effectively as 30 years of human war. After that they could use their 10000x power advantage to effectively have the equivalent of 10000 30yr wars looking to wreak havoc.

I mean at this point you're just making things up...


Honestly, making things up is exactly how we figure out whether something could work or not. For example:

Suppose I can talk into a tiny little device and someone can hear me from miles away.

That sounded like witchcraft at some point, but it became a reality. Futuristic thinking needs to suppose lots of crazy-sounding things are possible.


Compare with:

Suppose I can think into a tiny little device and someone can hear me from miles away.

That still sounds like witchcraft. Before one of them gets invented, or at least the basic underlying principles are discovered, how do you tell them apart?


Simulating a brain is not aliens. Or maybe it is and I'm hopelessly naive. Lots of smart people are working on simulating brains, and their estimates are that it's not that far off from being possible to simulate an entire brain in a computer, not at the atom level but at least at the functional level of neurons. At the moment there's no reason to believe they're wrong.


> The most accurate simulation of the human brain ever has been carried out, but a single second’s worth of activity took one of the world’s largest supercomputers 40 minutes to calculate.

http://www.telegraph.co.uk/technology/10567942/Supercomputer...

The above supercomputer in 2014 was 2400x slower than the human brain. Moore's law is dead, so I think your 10000x and 2030 estimates are grossly optimistic.
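
As a rough sanity check (a back-of-the-envelope sketch in Python; the 40-minutes-per-second figure comes from the article above, while the two-year doubling period is a hypothetical Moore's-law assumption, not a number from the thread):

    # How far is the 2014 supercomputer from "10000x faster than wetware by 2030"?
    SECONDS_PER_MINUTE = 60

    # Article: 40 minutes of compute per 1 second of brain activity.
    slowdown_2014 = 40 * SECONDS_PER_MINUTE  # 2400x slower than real time

    target_speedup = 10_000  # the grandparent's assumed advantage over wetware

    # Total improvement needed: from 2400x slower to 10000x faster.
    required_factor = slowdown_2014 * target_speedup  # 24,000,000x

    # Hypothetical: performance doubles every 2 years, 2014 -> 2030.
    doublings = (2030 - 2014) / 2   # 8 doublings
    moore_factor = 2 ** doublings   # 256x

    print(f"needed: {required_factor:,}x; doubling every 2 years gives: {moore_factor:,.0f}x")
    # needed: 24,000,000x; doubling every 2 years gives: 256x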


If you applied your logic to DNA sequencing: the first DNA sequencing took ages, and by various estimates at the time it would have taken 100+ years to fully sequence DNA. Fortunately, exponential progress doesn't work that way, and full DNA sequencing happened orders of magnitude sooner than those pessimistic estimates.
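
To make the exponential point concrete, a small sketch (the 100-year figure is the thread's own; the doubling-every-year rate is purely illustrative):

    import math

    # A job sized at "100 years at today's throughput" finishes far sooner
    # if throughput doubles every year (illustrative assumption).
    job_size = 100.0   # work remaining, in year-0 throughput-years
    T = 1.0            # hypothetical doubling period, years

    # Work done by time t: integral of 2^(t/T) dt = T/ln(2) * (2^(t/T) - 1).
    # Solve for the t where cumulative work equals job_size.
    t = T * math.log2(job_size * math.log(2) / T + 1)
    print(f"finished in about {t:.1f} years instead of 100")  # ~6.1 years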

I see no reason to believe A.I. will be any different. But then I don't believe Moore's law is dead. We'll find other ways to extend it. Insert magic here.

http://michaelgalloy.com/wp-content/uploads/2013/06/cpu-vs-g...


But, but, Quantum computing.

Jazz hands


Wetware is unbelievably, ridiculously efficient compared to anything we can make out of silicon and metal. Our most complex artificial neural networks, endless arrays of theoretical neuron-like structures, are much smaller, less complicated, slower, and less energy-efficient than the actual neurons in a brain. Of course, we can make computers as large and energy-consuming as we want, but this limits their potential to escape to the web and doesn't address the speed issue.
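
To put rough numbers on that efficiency gap (ballpark public figures, not numbers from the thread): a brain runs on roughly 20 W, while a 2014-era flagship supercomputer draws on the order of 10 MW.

    # Rough power comparison; both figures are ballpark public numbers,
    # not taken from the thread.
    brain_watts = 20
    supercomputer_watts = 10e6

    power_ratio = supercomputer_watts / brain_watts   # ~500,000x
    realtime_slowdown = 2400  # from the Telegraph figure quoted upthread

    # Energy per simulated second vs. the brain's energy per real second:
    efficiency_gap = power_ratio * realtime_slowdown
    print(f"~{efficiency_gap:,.0f}x less energy-efficient than wetware")
    # ~1,200,000,000x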


General AI like that is not years or decades away. The problem hasn't even been stated clearly yet. AGI is probably a century away or more. It's not a resource problem, it's a problem problem. I attended an AGI conference a couple of years back with the luminaries of AGI attending (held at my alma mater, University of Reykjavík). The consensus was we didn't even know which direction to take.


The same argument can be used the other way. If we don't even know which direction to take, what makes you think that AGI is a century or more away? Say, in 10 years, we better understand the problem we want to solve and the direction to take; what makes you think it would take 90 more years to solve, versus 20 or 30?

I think we simply have no idea when this could happen; it could be in 20 years, it could be in 200. But one thing is sure: when it happens, it will have drastic implications for our society, so why not start thinking about it now, in case it's 20 and not 200?


We should. I was not expounding a policy. I was merely stating the results of a conference on the subject at hand. All of what was discussed in the article is weak AI. Narrow programs delivering narrow features. Social engineering doesn't require strong AI.


"If we don't even know much about our universe, what makes you think that an alien invasion is a century or more away?"

Yet I don't see us losing our heads over the chance of an alien invasion.


Based on the fact that there hasn't been any known alien encounter during human written history, that we haven't found any artifact of such an event even in a distant past, that a 100ly radius is really tiny at the scale of the galaxy, that we haven't found any sign of life outside Earth, and that anyway, if an alien civ is advanced enough to come here and invade us we can't really hope to do anything against that, there is indeed no need to spend time worrying about it.

Considering the evolution of computing and technology in general over the last 50 years, would you consider the two things to be remotely comparable?

I personally don't.


Neither have we experienced a true AI, and none of the gains in the last 50 years have brought us anything near it, only more advanced computing ability and "trick" AI.

We just assume technology will improve exponentially based on an extremely small sample size. Has it never occurred to us that the technology curve may be horizontally asymptotic as opposed to exponential?

The ICE (internal combustion engine) was an amazing piece of technology that grew rapidly, from cars to military warplanes to our lawnmowers. Yet we cannot make them much more efficient or powerful without significantly increasing resources and cost. If you had judged the potential of the ICE on the growth it had then, we'd be living in an efficiency utopia now.


Based on the fact that there hasn't been any known AI encounter during human written history, that we haven't found any AI artifact of such an event, even in a distant past, .... , and that anyway, if an AI civ is advanced enough to explode exponentially we can't really hope to do anything against that, there is indeed no need to spend time worrying about that.


AFAICT there are 2 paths to A.I.

Path 1 is to understand how the brain organizes info and recreate those structures with algorithms. This path is 100+ years away.

Path 2 is to just simulate various kinds of neurons with no understanding of how they represent higher-level concepts. This path is < 20 years away by some estimates.

You probably believe path #2 is either also 100+ years away or won't work. I happen to think it will work the same way physics simulations mostly work: we write the simulation and then check to see if it matches reality. We don't actually have to understand reality at a high level to make the simulation; we just simulate the low level and then check whether the resulting high-level behavior matches reality. It certainly seems possible A.I. can be achieved by building only the low-level parts and then watching the high-level result.
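
As a toy illustration of that approach, here's a minimal leaky integrate-and-fire neuron in Python (a deliberately crude sketch with made-up constants, nothing like the real simulations discussed above). We encode only the low-level membrane dynamics; the threshold behavior and the input-dependent firing rate emerge on their own:

    def simulate_lif(input_current, steps=1000, dt=0.1, tau=10.0,
                     v_rest=-65.0, v_threshold=-50.0, v_reset=-70.0):
        """Return spike times for a neuron driven by a constant input current."""
        v = v_rest
        spikes = []
        for step in range(steps):
            # Membrane potential leaks toward rest and is pushed by the input.
            dv = (-(v - v_rest) + input_current) / tau
            v += dv * dt
            if v >= v_threshold:        # threshold crossing: emit a spike...
                spikes.append(step * dt)
                v = v_reset             # ...and reset the membrane potential
        return spikes

    # High-level observation we never programmed in directly: weak input
    # produces silence; stronger input produces faster firing.
    duration = 1000 * 0.1  # total simulated time, arbitrary units
    for current in (10.0, 20.0, 40.0):
        n = len(simulate_lif(current))
        print(f"input {current:5.1f} -> {n / duration:.2f} spikes per unit time")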


I have found myself repeating this paraphrased quote often recently; it seems relevant here.

"We didn't achieve flight by simulating birds; we needn't worry about achieving ai by simulating brains."


If the Hameroff-Penrose conjecture is correct then the only simulation possible is whole replication with quantum effects present. The unreasonable effectiveness of neural networks makes that unlikely, though.


So... I don't want/need to think about this 'cause I'll be long dead? :) On a related note, how optimistic are you about life-extending medical technology (which is likely to be accelerated by even minor advances in computing and AI)?


Not what I said. I'm cautiously optimistic about life-extending technologies. I'm not sure computing technology and AI will carry the day; I'm more inclined towards the physical sciences.


Why do you believe it will run so much faster than wetware? It would be great to see the math. Also, how much power do you think it would take?


>Let's also assume it runs 10000x faster than wetware (a highly conservative estimate?). That means in about 1 yr it should be able to assimilate as much info and experience as a 30yr old human.

Assuming your assumptions, and assuming it had access to the information, you meant one day, not one year.
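
Indeed, using only the grandparent's own numbers:

    # 30 years of experience at a 10000x speedup:
    days = 30 * 365 / 10_000
    print(f"{days:.2f} days")  # ~1.10 days -- about one day, not one year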



