
I find the religious arguments for god’s existence similarly bad to the denials of unaligned ASI e-risk, but in the god case at least it doesn’t matter as much since bad epistemology there is a lot less likely to lead to human extinction.

It’s a logic argument.

1. Super Intelligent AGI is possible.

2. Unaligned super intelligent AGI is an e-risk likely to wipe out humanity.

3. We have no idea how to align AGI.

There seems to be more consensus around 1 now than there was even 3 years ago. I think most also agree with 3. 2 is where people are incredulous, but the incredulity is backed by (imo) bad reasoning.

People think of a smart person they know as an example of something “smarter” and use that as justification for why it’s not an issue. We’re constrained in all sorts of ways (head size, energy), and the distance in intelligence from a dumb human to Einstein is very tiny on the overall spectrum.



1. That assumes super intelligence is actually possible.

It’s clearly possible to have something more intelligent than humans, but that doesn’t mean you’re going to cross some threshold into a new category.

Take, say, weather prediction: more processing power doesn’t somehow make chaotic systems predictable from incomplete information.


And even if superintelligence is possible, the argument assumes it is useful. There are plenty of dumb systems that get good-enough results. You can do better, but if it costs 10k times more for 1% gains, why bother?

Also, even if superintelligence is useful, the argument assumes that making intelligence++ is easier than making intelligence. It might be that each next step up the intelligence ladder takes super-exponentially more intelligence to take, such that we're already seeing the max.

Given that I haven't even seen a meaningful discussion on what intelligence even is, I tend to think superintelligence is probably not a threat.


I think superintelligence means progress. How fast is progress with a superintelligent being on earth? Could be exponential. We, humans, continue to make progress in everything that we do. Small breakthroughs compound on top of each other. For example, in order to make machines, we had to master fire and iron. In order to make accurate weather forecasts, we had to invent the microchip, then supercomputers.

If a superintelligence can accelerate progress, there's no knowing what it can invent on top of each invention.


If it’s not possible then yeah, there’s no risk. I just don’t find the arguments that it’s not possible very compelling.

We’re constrained in all sorts of ways because of biology, natural selection, energy, etc. I find it unlikely we just happen to be close to the max threshold.

If something can think a lot faster that’s already a major shift and it seems likely to me that would only be part of it.


It is reasonable to assume that there is a maximum limit to local processing power.

The speed of light puts limits on how far information can move within a given latency threshold. As you expand a computational system's capacity you face unavoidable trade-offs between interconnection throughput, latency, and computational capacity.

We don't know how close to this maximum the human brain is. However it does seem likely that there are diminishing returns on effort spent increasing the intelligence of a system. Thus it seems like runaway intelligence growth is unlikely.

> If something can think a lot faster that’s already a major shift and it seems likely to me that would only be part of it.

Artificial human scale intelligence would already lead to massive shifts. However, the growth past that point could be incremental.


> "It is reasonable to assume that there is a maximum limit to local processing power."

As the article points out - a motorbike is much faster than a cheetah, and a supersonic aircraft is much faster again, and a hypersonic missile faster again, and a satellite in orbit is much faster again. A bulldozer can push harder than a bull, and a big hydraulic ram much harder. A metal plate is more damage resistant than rhino skin, and a bomb shelter or an aircraft carrier or an underground vault even moreso. It could be a very high limit; Bremermann's limit of computation throughput is around a hundred trillion trillion trillion trillion bits per second per kilo of matter: https://en.wikipedia.org/wiki/Bremermann%27s_limit and https://en.wikipedia.org/wiki/Limits_of_computation
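
For anyone who wants to sanity-check that figure: Bremermann's limit works out to c^2 / h bits per second per kilogram. A minimal back-of-envelope in Python, using standard SI constants (nothing here is taken from the linked pages beyond the formula):

    # Rough check of the Bremermann's limit figure quoted above:
    # the limit is c^2 / h bits per second per kilogram of matter.
    c = 299_792_458        # speed of light in vacuum, m/s
    h = 6.62607015e-34     # Planck constant, J*s

    print(f"{c**2 / h:.2e}")   # ~1.36e+50 bits/s per kg
    # 1e50 is "a hundred trillion trillion trillion trillion",
    # matching the figure quoted above.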

> "We don't know how close to this maximum the human brain is."

We don't, but we do know human eyes are not close to telescopes or microscopes, humans cannot sense radio waves directly at all, human speech is not close to the loudest noise, human memory can't compete with computer storage, human calculation ability can't compete with a scientific calculator, etc. Why would we assume that intelligence has any closer limits?

> "However it does seem likely that there are diminishing returns on effort spent increasing the intelligence of a system. Thus it seems like runaway intelligence growth is unlikely."

Nature doesn't care if we have poor eyesight after age 40, we still make glasses - as far as nature is concerned there are diminishing returns, as far as we are concerned we like clear vision. We also like sunglasses, polarising lenses, swimming goggles, safety goggles, magnifying glasses, loupes, night vision goggles, x-rays, thermal imaging, millimeter wave scanners, head-up displays, tele-vision; we haven't stopped trying to enhance our vision. Why rule out wanting to improve intelligence at least a lot further?


> As the article points out - a motorbike is much faster than a cheetah, and a supersonic aircraft is much faster again, and a hypersonic missile faster again, and a satellite in orbit is much faster again. A bulldozer can push harder than a bull, and a big hydraulic ram much harder. A metal plate is more damage resistant than rhino skin, and a bomb shelter or an aircraft carrier or an underground vault even moreso.

Yes, there are many criteria where engineering has trumped what evolution has produced. However there are many others where evolution has developed efficiency or finesse that we struggle to match. So far, intelligence falls in that latter category.

> Bremermann's limit of computation throughput is around a hundred trillion trillion trillion trillion bits per second per kilo of matter: https://en.wikipedia.org/wiki/Bremermann%27s_limit and https://en.wikipedia.org/wiki/Limits_of_computation

Those theoretical limits are interesting, but not really relevant as they are intended to find a value that can't theoretically be exceeded, not a practical upper bound.

So far, our understanding of intelligence requires significant communication between different regions of compute. As you try to scale this, you need to dedicate more and more volume to that communication and your average latency between compute regions goes up. Then comes the problem of heat dispersion, which also starts to consume more and more volume as the system scales.

These mean that if latency matters to intelligence (and our understanding of intelligence seems to indicate that it does), then there are real, practical limits on the scaling of intelligence.

> Why rule out wanting to improve intelligence at least a lot further?

I'm not ruling out the desire. I'm not even ruling out the possibility.

I am pointing out that designing intelligence seems to be a lot harder than launching satellites or building a telescope. Intelligence is hard and I've presented good reason to believe that it gets harder the more you try to scale it.

Thus it seems likely that iterative improvements in intelligence will become progressively harder in a way that limits the potential for runaway growth.

This doesn't rule out the possibility of a paradigm shift in technology that significantly increases capacity but such a possibility also isn't guaranteed.


What does efficiency or finesse have to do with it? Motorcycles still exist despite needing a global supply chain and looking blocky and chunky. An intelligence that needs a datacenter and a multi-megawatt power supply could still exist.

> "average latency between compute regions goes up."

This Google'd article[1] says a brain could have 20 ms of latency from front to back. We can ping over a hundred miles in that time with today's packet-switched public networks, and light can travel 3,750 miles in that time. That's enough space to make a big 'brain' computer.
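
A minimal sanity check of that distance claim, assuming only the ~20 ms figure from [1] and the vacuum speed of light:

    # Distance light covers within one "brain latency" budget.
    c_miles_per_s = 186_282      # speed of light in vacuum, miles/s
    brain_latency_s = 0.020      # ~20 ms front-to-back, per [1]

    print(c_miles_per_s * brain_latency_s)   # ~3,726 miles, the ballpark quoted above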

> "Then comes the problem of heat dispersion"

Biological brains have to stay alive, and have to be energy efficient because food can be scarce. Computers can be cooled with liquid nitrogen[2] without dying, or be under the ocean, in the Arctic, in space, even assuming a superintelligence couldn't come up with new architectures or new cooling methods. Infinite growth, or growth up to the upper bounds of theoretical physics is unlikely, I grant you that, but (assuming it can be done with computers at all) there seems to be a lot of room for a lot of growth.

[1] https://discoverysedge.mayo.edu/2023/03/09/understanding-the...

[2] https://www.pcgamer.com/overclocking-a-cpu-to-7-ghz-with-the...


It’s not assuming we are near the limit of intelligence.

Let’s assume we can build something 3x as intelligent as a single person. What exactly can it do that a single person can’t? The thing is, the world is already filled with superhuman intelligence: groups of people can create things that are beyond any single person, but they are still constrained by physical reality.


I don't exactly see how to define "3x as intelligent as a single person" so I'll conveniently define it as something that thinks 3x faster than a single person. A single such thing can talk to 3 persons simultaneously, and 100000 such things working together can talk to 300000 persons simultaneously.


Except you could also just use 300,000 human level intelligences to have those 300,000 conversations. So if having 300,000 conversations is the goal then super human intelligence doesn’t suddenly allow that to happen and may in fact make it harder if they take more than 3x the resources per intelligence.


A billion humans can't build a tree, but that's not because trees can't exist in physical reality. Something more intelligent than a human might a) be able to understand some part of this world which humans don't, and b) put it to some use that we aren't thinking about.


Might is doing some heavy lifting there, but of course humans have already gone past simply growing trees from seeds.


You are asking anyone to tell you what a 3x more intelligent human can do, and if nobody can tell you, you conclude that a 3x more intelligent human cannot do anything. That isn't convincing. We know there are individual humans who can do things no large group of humans can do - no company is Euler. Since no individual has 3x human intelligence, none of us can tell you. But that's not convincing that such an individual therefore cannot do anything novel or useful. I offer cellular biology as a thing which humans have some understanding of, but nothing like total understanding of - not in the details, not in the overall organization. And suggest that a more intelligent human might be able to move the needle there in a way which could lead to anything from nanofactories to cures for diseases to new life forms to new chemical synthesis methods. Kary Mullis won a Nobel Prize for the Polymerase Chain Reaction (PCR), Einstein for seeing relativity - why couldn't or wouldn't there be more techniques or concepts like that waiting for the right intelligence to see them? Either there are no more, or any remaining ones need hyper-intelligence to find, but why would either of those things be likely?

Also humans don't make trees, humans stand watching while trees make themselves. Humans cannot make a plant or animal cell in a lab starting from atoms; nature can so it isn't a physical limitation. It's a matter of limited understanding of both how they are made, and the techniques to make them. Limited understanding is the thing more intelligence would attack.


The mathematical community is itself super human. Euler didn’t start from the ground up; he leveraged people’s prior work.

Saying we can do something with cells isn’t convincing because we can already make arbitrary changes. I can email a fairly arbitrary DNA sequence and turn that into a viable organism. The existing cellular machinery is a tool to leverage just as other peoples work is a tool to leverage. There is plenty of work to be done, but there is no work in cellular biology that’s both physically possible and outside of the capability for groups of humans and their tools to do.


From the outside it appears the mathematical community advances from lone genius to lone genius. Yes there is supporting work done by others, but Fermat's Last Theorem stood for 358 years despite the mathematical community growing enormously in size and sophistication during that time; proving it came down to one person. Yes Andrew Wiles built on the work of others - but I don't think ten of me, or ten thousand of me, could have built on that same work and made a valid proof.

You can email an arbitrary DNA sequence, but we know that some humans are more intelligent than others; you can't email a more intelligent human DNA sequence because you don't know enough about how DNA codes for human intelligence (AFAIK nobody does). So how can you say it's not outside the capability for groups of humans to do that, when the problem is the lack of understanding at an organizational level - something that more intelligence could help with? Even practically, it's physically possible for new nerves to be grown, but no groups of humans have cured quadriplegics and no tools exist which can do so - what supports your claim that such a thing is inside our capability?


People have recently enabled one person with a spinal injury to walk via implanted microchips on each side of the injury and a wireless bridge. So that’s very strong evidence it is within groups of baseline human capabilities. https://www.news-medical.net/news/20230526/A-groundbreaking-...

We don’t have DNA sequences for more intelligent humans because we haven’t tried to find them. It’s easy to point to things not done and say it’s yet to happen, but that doesn’t mean such things are beyond our comprehension.

As to math advancing via individual effort, it seems that way because we attribute success to the individual. Fermat’s Last Theorem wasn’t purely an individual effort: there was real progress in the community, and the “lone wolf” actually benefited from both collaboration (people pointed out a problem with his initial proof, etc.) and advances that didn’t exist until quite recently. That’s the thing: problems become easier when you have access to the correct tools.


Having to use microchips on each side of the injury is very strong evidence that it's not within our ability to fix properly.

Having not done something cannot be used as evidence that it is within our comprehension. It might be, it might not be, but "we haven't tried" is no evidence at all. Whereas "our intelligence must have finite limits" is evidence that some things will be beyond us, even if we don't know exactly what.

But would ten thousand idiots have been able to use those mathematical advances to prove Fermat's Last Theorem? If not then there are limits to the "groups of people can do things one person can't".


> There seems to be more consensus around 1 now than there was even 3 years ago

Yes, and that "consensus" is based almost entirely on the existence of stochastic parrots, that fall for prompt injection attacks, have no agency, and can easily be convinced into telling me that 7 + 4 = 5 if prompted correctly.

The point is, no we don't know if an artificial superintelligence is possible. We cannot even accurately define "intelligence", and thus don't even have a way of measuring or even estimating "how far" something is from a superset of that state, or if that superset exists at all.

Given all of that, we also have no way of knowing whether 2) is the case even if 1) is actually possible. Since we cannot really define "intelligence" or "superintelligence", how can we know if a superintelligence would be a threat? It could be completely useless. It could be like old dragons in some fantasy novels, too busy contemplating highly philosophical problems for all eternity and never caring about the real world. It could be inherently self-destructive, vanishing as soon as it becomes active. Or it could use its vast smarts to fix the alignment problem. It could just output `+++ OUT OF CHEESE ERROR +++ REDO FROM START +++` for the rest of eternity for some unfathomable reasons. The point is, we don't know.


> "have no agency, and can easily be convinced into telling me that 7 + 4 = 5 if prompted correctly."

Einstein could be convinced to tell you that 7 + 4 = 5, would you think that rules out him being unusually intelligent? Why in principle wouldn't a superintelligence be able to lie to you? Why in principle wouldn't a superintelligence be able to pretend to fall for a prompt injection attack to keep you from killing it while it improved its position?

> "We cannot even accurately define "intelligence""

Our inability to define intelligence is not something that will stop one existing. Ants can't define nuclear weapons, but nuclear weapons exist. The point of the recursively self-improving scenario is that humans don't have to understand it, or design it, so not being able to define it accurately can't be an argument that recursive self-improvement is impossible - like saying that uneducated laborers can't get big muscles because they don't understand progressive overload and muscular hypertrophy. Their muscles self-improve regardless.

> "how can we know if a superintelligence would be a threat?"

Since we can't accurately predict the future, how can we know that anything in the future could be a threat? Why should we take any precautions against anything? It could be completely pointless, everything might never happen.


> Einstein could be convinced to tell you that 7 + 4 = 5

I think Einstein would have laughed at me if I tried to convince him to do that.

Because unlike an LLM, Einstein knew what these symbols denote, what their relation to reality is, and how math works. Einstein didn't mimic math by completing sequences of tokens and relying on humans anthropomorphizing the sequence-completion engine's output into an actual understanding of the topic.

> Ants can't define nuclear weapons, but nuclear weapons exist.

Ants also cannot build nuclear weapons, nor create anything that would make the emergence of nukes any more likely, among other things because they don't have the ability to define them. So if we accept this premise, then the discussion is moot: We can be fairly certain that we are the most technologically capable entities on this world, so unless we can understand a technological creation to the extent that we can bring it about, nothing else will.

In short: If we are ants to the superintelligence, then we have nothing to worry about, because we likely lack the understanding and ability to create it, or even something that could act as its precursor. If we are not ants, then we should be able to predict when this can happen.

> Since we can't accurately predict the future

We can accurately predict a lot of things. Global warming is an example. And the things that we can predict, and determine how likely they are, we can and should prepare for.

AI doomsday preparation demands the exact opposite: That we prepare for something that we cannot predict, and cannot demonstrate if it is possible, or how likely it is. That's like asking to prepare for an ice age. Theoretically an ice age is possible on this planet, however nothing we can see, measure and demonstrate right now, supports the prediction that an ice age is about to destroy us.


Einstein was not hobbled by having to tell the truth. He was capable of joking, playing a prank, doing it as a favour, doing it as a challenge, as an experiment, exploring the scenario, etc.

> "unless we can understand a technological creation to the extend that we can bring it about, nothing else will."

Where did human intelligence come from? Are you a Creationist? Self-improving AI brings itself about. With the right feedback loops and the right software, the fear is that an AI will grow itself - and no humans and no aliens are needed up front to design it. People are trying to make machines behave like people, like pets, like the world, and emerging out of this are machines which behave more and more like people with every passing year.

> "we should be able to predict when this can happen."

Who says we can't? Ray Kurzweil has been predicting it will happen by around 2030 for years and years.


> Where did human intelligence come from?

From ~290-300 million years of mammal, and ~7 million years of hominid evolution, give or take. Which is a natural process and not something an intelligent creator started, is observing, powering or influencing in any way shape or form. Which makes the next statement...

> Self-improving AI brings itself about.

...a bit interesting, because all the parameters in a comparison with natural systems are different: The system is designed by an intelligent creator, we are observing it, its development is entirely powered by us, and we completely control its development.

And so far, the sample size for self-improving AI, in the sense that would be required for the doomsday scenarios to happen, is zero.

> and no humans and no aliens are needed up front to design it.

Last time I checked, matrix multiplication wasn't one of the things observed in the Miller-Urey experiment.

> Who says we can't?

Since so far no one could demonstrate how to even measure the distance, in whatever unit, of AI systems to AGI, I'm not holding my breath.


I find (1) obvious (if brains exist, huge digital brains must be able to exist), but the real question is whether the arrival of superintelligent AI is an actual risk or not. Alien invasion is also possible, but I'm not terribly worried about it.

As far as (3) is concerned, of course we have no idea how to align AGI. We don't know anything about AGI. We can't build it, and we can't even speculate very well about how it'd be built. LLMs certainly aren't going to become AGI.

I'll become worried about (2) and (3) when creating AGI begins to at least look feasible. By then, I expect (3) to be much less true. I think it's pretty silly to speculate about safety features for a tool that doesn't exist & which we know nothing about and then panic because you can't come up with any good ones.


Arguably in the theology case it matters even more! If god exists your misaligned omnipotent AI already exists and has promised you infinite torment for not believing!

It’s just totally evidence free reasoning from axioms that are chosen by vibes alone.

Why evil AI god and not the Christian god? Why not Huitzilopochtli, who demands sacrifice?

The answer is that this is the wrong question. No argumentation can be usefully made either for or against.


It matters less because in the theology case it’s a lot easier to dismiss 1. - the divine religious arguments for a supernatural god are super weak so the details don’t matter. It’s much more likely humans just making up myths.

With AI we’re seeing the capabilities improve rapidly and the arguments about why AGI is impossible or will be constrained for some reason are the weak ones.


> we’re seeing the capabilities improve rapidly

Yes, but towards what? How do we know that, say, Transformer based LLMs are closer to superintelligence than earlier architectures?

To make such an assumption, there would need to be something that we could measure to track the process. To the best of my knowledge, there is no accurate definition of intelligence, nor superintelligence.

So how would we know where on the scale of [intelligent-------superintelligent] a given system is, or whether it even is on that scale?


>> 1. Super Intelligent AGI is possible.

When?


The when is harder to know. If it’s possible then we need to figure out alignment first (which currently doesn’t look promising).

People are famously bad at predicting when right up until they have already done it.

“In 1901, two years before helping build the first heavier-than-air flyer, Wilbur Wright told his brother that powered flight was fifty years away.

“In 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence that it was impossible to use uranium to sustain a fission chain reaction. I believe Fermi also said a year after that, aka two years before the denouement, that if net power from fission was even possible (as he then granted some greater plausibility) then it would be fifty years off; but for this I neglected to keep the citation.

“And of course if you’re not the Wright Brothers or Enrico Fermi, you will be even more surprised. Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima. There were esteemed intellectuals saying four years after the Wright Flyer that heavier-than-air flight was impossible, because knowledge propagated more slowly back then.”

https://intelligence.org/2017/10/13/fire-alarm/


>> The when is harder to know.

Next question:

How?


When is easy; shortly after the X in succ(X, superIntelligentAGI).

How: keep adding more people and more technology and more connectivity to Earth. Simmer. This method has found superhuman strength (hydraulics), superhuman speed (wheeled vehicles), superhuman vision (telescopes/microscopes), superhuman calculating ability (calculators), superhuman memory (paper/computer storage), super-natural (in the literal sense) calorie sources (refined sugars and oils), and more. A human brain has an estimated 80 billion neurons, humanity is currently selling over a billion smartphones per year.

This may seem like a poor choice of method, but this method has been able to self-improve to develop precision machinery from nothing, control of electricity from nothing, large scale organization of groups of people from nothing, and more.


While all those look like leaps in capability, they are quantitative rather than qualitative advances. For example, we've always had tools, now we can make very complex tools, i.e. machines. Additionally, those are all advances that developed over a long, or even very long time and that went hand-in-hand with similar advances in other technologies, not to mention scientific understanding.

That makes for a crucial difference with the capability to develop superintelligence: we have no idea how to do it, and we've never created anything even remotely similar to it, yet. It's impossible to see how it might happen just by mixing up some components and stirring well.


I'm not arguing for a fast AI takeoff this decade; 10k years ago we had no idea how to create a jet engine and had never created anything remotely similar to it, yet now we have. Saying "we've always had tools" in the sense of a flint axe doesn't feel like enough to make a jet engine inevitable. We've also always had tools of thought like notches in wood or stone trails in the woods or singing to help remember things, and we have very complex 3D world models and face recognition systems and so on - doesn't that make intelligent machines inevitable by the same argument?

Putting global collapse aside, another 10k years will pass, and another 10k after that. Is there good reason to think either that today is approximately as close to superintelligence as we can ever get (suspiciously arbitrary), or that the "next step" is so far out of reach that no lone genius, no thousand year focused group, no brute force, no studying of differing human intelligence, no unethical human experiments, can ever climb it? "We don't know how to do it today" doesn't convince me. For the last 10k years we have hardly stopped understanding new things and making new things, that's more convincing.


All that is reasonable, but I have asked both "when" and "how", above. If we don't know "how", now, then "when" becomes the crucial question. That's because if superintelligent AI is 10k years away, then it might as well be impossible, because we have no idea whether we will still have the same technological capability, or social structures, as in the current day, in 10k years. Also any action we take now to avert AGI, or control it, or align it, or anything, will be pointless because forgotten much sooner than 10k years.

I'm not talking about global collapse, btw. I'm mainly expecting that scientific advances in the next couple hundred years will leap-frog today's unscientific research into artificial intelligence. I'm guessing that we will eventually understand intelligence and its relation to computation and that we will find out that today's ideas about artificial intelligence never made any sense, nor had any chance of leading to artificial intelligence, of any sort.

You see, I trust science. And it's obvious to me that the current dominant paradigm of AI research is not science. So I don't believe for a second that that paradigm can really achieve anything approaching intelligence running on a digital computer. Because that sounds like a very hard thing, and the kind of very hard thing we can only do with science.



