
Good questions. Let me try to answer them.

> Imagine the human-level ai had resources to run hundreds of variants of itself.

The first human-level AI will probably be hosted on cutting-edge hardware that costs billions of dollars. So no, it won't have the resources to do this. Even if it weren't this expensive, it's not exactly sitting on a vat of free computronium that it can do anything with. If my AI takes up 10% of my computing resources, why am I going to give it the remaining 90%? How is it going to hack or pay for servers to simulate itself on? Is the first human-level AI a computer whiz, just because it runs on one? But let's assume it can and will do what you describe.

> And imagine that it had no compunction about killing copies of itself that underperformed. How long would it take to get improvement of 10-100x under that scenario?

If the AI was trained using a variant of natural selection to begin with, it won't improve itself on its own terms any faster than we already improve it and its competitors on ours. If it wasn't trained that way, then probably never, because that's not how these things work.
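
To make the mechanism concrete, here's a minimal sketch of a selection loop in that style. The names (population, evaluate, mutate) are hypothetical stand-ins, not any real training API:

    import random

    def select_generation(population, evaluate, mutate, survivors=10):
        # Score every variant, keep the top performers, refill the
        # population with mutated copies of the winners. It is the
        # same loop whether we run it or the AI runs it on itself.
        ranked = sorted(population, key=evaluate, reverse=True)
        winners = ranked[:survivors]
        children = [mutate(random.choice(winners))
                    for _ in range(len(population) - survivors)]
        return winners + children

Nothing in that loop depends on who calls it. Selection is selection, and our copy of it has a multi-billion-dollar head start.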

> Once it has a full understanding of what works, the upgrades are instant from there on out.

Your scenario is still based on several dubious assumptions. If the AI is a neural network modelled after human brains, its "source code" would be a gigantic data structure constantly updated by simple feedback processes. It is doubtful the AI will have true read access to itself, partly because there's no need to give it that access, and partly because probing a circuit comprehensively is hardware overhead (it would be easy to read the states of all neurons on conventional hardware, but all of conventional hardware is overhead if you're only interested in running one particular type of circuit). Even if it could read itself, a "full understanding" of any structure requires a structure orders of magnitude larger. Where is it going to find these resources?
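
For scale, here's a sketch of what "reading itself" looks like on conventional hardware, assuming PyTorch as the substrate (the framework is my assumption, and the point stands either way: dedicated circuit-running hardware wouldn't pay for this addressability):

    import torch.nn as nn

    # A toy network. On conventional hardware every weight is just
    # addressable memory, so dumping all of it is trivial.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
    n_params = sum(p.numel() for p in model.parameters())
    print(n_params, "parameters, all readable via model.state_dict()")
    # The catch: any structure that "fully understands" these weights
    # would have to be orders of magnitude larger than the weights
    # themselves.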

Long story short, a human-level AI probably won't be able to understand its own brain any better than a human can. Now, we will certainly have statistics and probes all over the AI to help us figure out what's going on. But unless we see fit to give the AI access to the probing network (and why would we?), the AI will not have that information.
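
As an illustration of what that external probing might look like, a sketch (again assuming PyTorch): hooks log activation statistics to a side table that lives outside the model, and nothing routes it back in.

    import torch
    import torch.nn as nn

    stats = {}  # lives outside the model; nothing feeds it back in

    def probe(name):
        def hook(module, inputs, output):
            stats[name] = output.detach().mean().item()
        return hook

    model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
    for name, layer in model.named_modules():
        if isinstance(layer, nn.Linear):
            layer.register_forward_hook(probe(name))

    model(torch.randn(1, 8))
    # stats now holds per-layer activation means, visible to us alone.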

> Why would it do this? Because any goal you give it would be optimized by being smarter.

Unfortunately, any resources it expends in becoming smarter are resources it does not spend optimizing our goals. If we run hundreds of variants of AI and kill those that underperform, which ones do you think will prevail? The smart AI that tries to become smarter? Or the just-as-smart AI that focuses on the task completely and knocks the ball out of the park? The AI won't optimize itself unless it has the time and resources to do it without an overall productivity loss and before we find a better AI. For human-level AI, that's a tall order. A very tall order.
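
Here's a toy model of that selection pressure, with all numbers invented for illustration: each variant splits its compute between doing the task and getting smarter, and we cull by cumulative task output.

    def simulate(self_improve_frac, rounds=10, skill=1.0, growth=0.1):
        # Each round, output accrues from the compute spent on the
        # task; the rest compounds into future skill.
        output = 0.0
        for _ in range(rounds):
            output += (1 - self_improve_frac) * skill  # work done now
            skill *= 1 + growth * self_improve_frac    # smarter later
        return output

    print(simulate(0.0))  # 10.0: all task, wins the short window
    print(simulate(0.5))  # ~6.3: smarter, but culled before it pays off

Only with a much longer evaluation window, or much faster growth, does self-improvement pull ahead. That's the "time and resources" condition above.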



