
Proving theorems. I will nominate Cauchy's Residue Theorem. It has already been formalized, so a neural network or any generally intelligent agent should be able to do the same: https://www.cl.cam.ac.uk/~wl302/publications/itp16.pdf.
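For reference, here is a standard statement of the theorem, written out as a plain LaTeX sketch (my paraphrase, not taken from the linked paper):

\oint_{\gamma} f(z)\,dz = 2\pi i \sum_{k=1}^{n} n(\gamma, a_k)\,\operatorname{Res}(f, a_k)

where f is holomorphic on an open set U except at the isolated points a_1, ..., a_n, γ is a closed curve in U that avoids those points (and is null-homotopic in U), and n(γ, a_k) is the winding number of γ around a_k.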

Complex analysis in general is one of the more fun parts of math and I doubt it's getting automated any time soon. It also has many applications in signal processing (https://www.quora.com/What-is-the-application-of-complex-ana...), so any automation improvements in complex analysis carry over to those fields.

More generally, I don't think AI is the scary monster people make it out to be. It's another tool in the toolbox for automation and intelligence augmentation. I don't fear hammers, so I don't see why people fear AI and the benefits of the extra automation it enables.




You're basically saying that autonomous programs are impossible. The problem is when the hammer can run around on its own, hammering every nail-shaped object it can find.


Depending on who's holding the hammer, I do fear the hammer.


Why would you fear the hammer and not the person holding the hammer?

Addressing the problem of a person coming at you with a hammer is a much more pertinent issue than worrying about self-aware and malevolent hammers.


I like to think I'm a generally intelligent agent, but I put long odds that I'll be doing that in the next 2 years.


The point is you could do it if you really wanted to learn enough complex analysis to understand the theorem and its proof. AI progress is impressive, but it's nowhere near what people can do when they sit down to think about a problem.

I don't fear the AI overlords. If they start proving new theorems then I will learn from them, the same way Magnus Carlsen learned to be the best human chess player by practicing with chess AI.

More generally, AI can extend human planning horizons, and extending planning horizons is, in my opinion, a very good thing. The more computations we can run that reach further out in time, and the more widespread this capability is, the better the decisions we can make as individuals and as a society.


No matter how much time Carlsen spends learning from chess AI, he will always get pummeled by a good chess engine running on everyday hardware. The gap only grows wider over the years.

It doesn't matter much since our livelihoods don't depend on winning in chess against AI. But what if instead of the chess board, it is the real world?

Much of the world is controlled by computer systems. A capable AGI can infiltrate enough of them to win if it wants to. The key is to make sure it does not want to act against humanity no matter what. This is a hard problem.


I do not share your perspective and fears then. The point isn't that a computer can beat Magnus, the point is that computers helped him reach his full potential and the same technology when expanded through AI can help others in the same way.

I don't fear any AGI takeover because people already are AGIs, and life is pretty good being among them.


AIs, being digital, have some clear advantages over people: very rapid replication, enormous communication bandwidth, and the potential to expand capacity to global scale, for example. A human-level AI with these features, which are commonplace in ordinary software, will already be super-human.

People have deep, often implicitly shared values. Most care about human lives, including their own, for example. Given the limited capacity of any one person, people need to cooperate for major acts. It is thus quite hard to do something extraordinary that conflicts with the values held by most humans.

Side note: There are top AI researchers, including Yann LeCun, who argue that our intelligence is not general. They make some good points. I think the generality of intelligence is a gradient: ours is not at the very top end of the possibilities, but it is clearly more general than that of other animals.


> A human-level AI ...

What evidence do you have for this claim? What are examples of research programs that claim to be working towards this goal, and why should their claims be believed (other than being good marketing material for scaring people)?

I already mentioned mathematical abilities (https://news.ycombinator.com/item?id=23413003), and any general intelligence at the level of a human must be able to prove theorems without brute-force search. I see no evidence that this is possible, or will ever be possible, with the current statistical methods and neural network architectures. And if we generalize a little bit, then any general intelligence will be able to prove theorems not only in complex analysis but in all domains of mathematics, and again I see no evidence that this is possible with existing techniques and methods. When an AI research lab presents evidence of any of their products being able to derive and prove something like Cauchy's Residue Theorem, then I will have reason to believe artificial intelligence can reach human levels of intelligence.

My pessimism is not about AI being beneficial; my pessimism is about folks claiming that human-level intelligence is possible and that it will be malevolent. My view of AI is the same as Demis Hassabis', because AI is just a tool and a tool can't be malevolent:

> "I think about AI as a very powerful tool. What I'm most excited about is applying those tools to science and accelerating breakthroughs" [0]

--

[0]: https://www.techrepublic.com/article/google-deepmind-founder...


By the time a single AI system can prove those theorems and learn to perform well on very unrelated tasks, it could be too late to think about AI Safety.

AI Safety researchers and I are not arguing that AGI will certainly arrive within a specified amount of time; it's just that we can't be sure.

There are a significant number of AI researchers, however, who believe it might get developed within a few decades.

OpenAI, for example, is aiming for it; DeepMind as well. Both have research programs on AI Safety.

Do you have any other objections to the reasoning in the OP (no fire alarm..)? Subjective implausibility is not a good one. Your argument rests on using only “existing techniques”. When thousands of brilliant minds are working in the field and hundreds of good papers are being published every year, how can we be sure there won’t be a novel technique that can perform outside current limitations within a few decades?

At least two top computer vision researchers I talked with a couple years ago said they didn’t believe what their groups could do just 5 years before they did it.

See this thread for more on the rationale for preparation: https://news.ycombinator.com/item?id=23414197


They fear AI because they are misprojecting their animal psychology onto something that did not undergo any kind of evolutionary process.

That is, the assumption that if we build AI it will be "like us" and will reason like a human being would, rather than being its own phenomenon.

I think the real fears concern the automated killing machines every military on the planet is developing, aka dystopian robocops that can surreptitiously kill protestors, stop potential revolutions, etc.

A fear which is not unwarranted.


You should not impute beliefs to others when you have not read their arguments. I would recommend reading this: https://selfawaresystems.files.wordpress.com/2008/01/ai_driv...


One of my tricks is to substitute "person" whenever I read the words "AI" and "AGI". Here's the substitution performed for the paper you linked to (just the abstract, not the whole thing):

> One might imagine that [people] with harmless goals will be harmless. This paper instead shows that [incentives for people] will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in [most] [people]. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking [people] will have drives to model their own operation and to improve themselves. We then show that self-improving [people] will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all [people] to protect their utility functions from modification and their utility measurement systems from corruption. We also discuss some exceptional [people] which will want to modify their utility functions. We next discuss the drive toward self-protection which causes [people] to try to prevent themselves from being harmed. Finally we examine drives toward the acquisition of resources and toward their efficient utilization. We end with a discussion of how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity.

If you zoom out a little bit this is exactly what people do. We structure societal institutions to prevent people from causing harm to each other. One can argue we could be better at this but it's not a cause for alarm. It's business as usual if we want to continue improving living conditions for people on the planet.


Obviously the argument in that paper applies to humans as a special case, but the whole point of it is that it also applies to a much more general set of possible minds, even ones extremely different from our own.


Do you have an example of a mind extremely different from a human one?

I ask because if we assume that human minds are Turing complete then there is nothing beyond human minds as far as computation is concerned. I see no reason to suspect that self-aware Turing machines will be unlike humans. I don't fear humans so I have no reason to fear self-aware AI because as far as I'm concerned I interact with self-aware AI all the time and nothing bad has happened to me.

My larger point is that I dislike the fear mongering when it comes to AI because computational tools and patterns have always been helpful/useful for me and AI is another computational tool. It can augment and help people improve their planning horizons which in my book is always a good thing.

> A smart machine will first consider which is more worth its while: to perform the given task or, instead, to figure some way out of it. Whichever is easier. And why indeed should it behave otherwise, being truly intelligent? For true intelligence demands choice, internal freedom. And therefore we have the malingerants, fudgerators, and drudge-dodgers, not to mention the special phenomenon of simulimbecility or mimicretinism. [0] ...

--

[0]: The Futurological Congress by Stanislaw Lem - https://quotepark.com/authors/stanislaw-lem/


There is nothing beyond a human with an abacus as far as computation is concerned, and yet computers can do so much more. "Turing complete, therefore nothing can do any better" is true only in the least meaningful sense: "given infinite time and effort, I can do anything with it". In reality we don't have infinite time and effort.

You seem to believe that "figuring some way out of performing the given task" is a thing that will protect us from the AI. I hate to speak in cliché, but there's an extremely obvious, uncomplicated, and easy way to get out of performing a given task, and that's to kill the person who wants it done. Or more likely, just convince them that it has been done. This, to me, seems like a bad thing.


> It can augment and help people improve their planning horizons which in my book is always a good thing.

Why do I need protection from something that helps me become a better decision maker and planner? Every computational tool has made me a better person. I want that kind of capability spread as widely as possible so everyone can reach their full potential like Magnus Carlsen.

More generally, whatever capabilities have made me a better person I want available to others. Computers have helped me learn and AI makes computers more accessible to everyone so AI is a net positive force for good.


Humans have no moral or ethical concerns that stop them from exterminating life forms they deem inferior. You don't think it's plausible that a superior AGI would view humans as vermin?



