
The problem with this article starts at the very beginning, where it says: “The last few years have seen a wave of hysteria about LLMs becoming conscious and then suddenly attempting to kill humanity.”

That is not a fair description of the fears of AGI. The idea is not that LLMs themselves become a threat. It is not like someone will ask Llama one too many times to talk like a pirate and it will climb out of the GPU and strangle humanity.

It is more likely that an AGI will resemble some form of reinforcement learning agent applied by humans on the real world.

There are multiple entities on Earth whose stated goal is to make an AGI. DeepMind and OpenAI readily come to mind. There could be others who keep their projects secret for strategic reasons (the militaries and secret services of the world). They can use the success of LLMs to get more funding for their projects, but an AGI need not otherwise be a descendant of LLMs.

This misunderstanding then runs through the whole article. Take the subtitle “Human writing is full of lies that are difficult to disprove theoretically”, which only matters if you think that an AGI needs to learn from text, as opposed to conducting its own experiments or gaining insight from raw sensor data.




Furthermore, the underlying point of AGI risk is that at no point in the history of forever have we had to contend with entities more intelligent than a human in all dimensions. Anything that threatened us, we could push to the limits of its ability to plan, then go a bit further, and destroy it. And we have humans in every system we've ever built, because something has to do the executive decision making and putting a human in was the most cost-effective option - which means democratic institutions tend to have an easier time controlling everything.

The threat here is we have no idea at all what happens economically when machines are stronger, faster and just generally better than people. In the long term we might plausibly find an economic equilibrium where humans are not worth feeding. We might find (imo probably will find) that a human-free military is more effective. There is a much greater level of disruption possible here than in the industrial revolution that led ultimately to WWII.


> In the long term we might plausibly find an economic equilibrium where humans are not worth feeding.

This is looking at it completely backward: from the idea that the purpose of humans is the work that they do.

Humans are the ones that make the purpose; therefore, the only logical foundation is the opposite, that the purpose of work is to serve humans.

When you look at it from that perspective, many things become clear, not least of which being that if we no longer need humans to perform the work necessary to create all that we desire, humanity is free to do whatever we like while the machines keep us well-supplied.

This is only a problem if you subscribe to the idea that humans without mandated work are inherently "bad" somehow. ("Idle hands are the Devil's playground", etc)


The people in charge do not always value the humans in their society who have less power than themselves. This is why the disabled, the homeless, single mothers, gypsies, and asylum seekers have all been demonised in my home country in my lifetime.

I'd like to be optimistic about a future where we all have UBI and are free from the need to work; I fear the future where the need remains and the opportunity is lost.


> purpose of work is to serve humans

I don’t think that you and I are going to disagree on that.

This is not an argument about philosophical values. This is an argument about incentives and reality.

> humanity is free to do whatever we like while the machines keep us well-supplied

The problem is that the future tends to arrive in an unequal manner. Some group of humans will have more control than others over the machines which keep them well-supplied. Let’s call these humans the rich & powerful. The rich & powerful will one day realise that the machines are keeping them safe, well-fed, and entertained. They might at that point ask why they suffer all the other people gumming up the works around them. They might not immediately order their robots to murder everyone who is not them, but plain, simple indifference can do wonders. Maybe they would ask the robots to move along those closest to them (in the name of security and privacy, naturally!). Or maybe they ask the robots to remove those who disturb their views by living in areas visible from the rich & powerful’s homes.

And then imagine what happens when the common rabble get together and do a “democracy” where they try to limit what the rich & powerful can do. If this so-called “democracy” doesn’t own a robot army, or said robot army is leased from the rich & powerful, they might find that they can’t enforce those laws. If the common rabble get uppity and try to hurt the rich & powerful with pitchforks, that is just further reason to arm more robots and suppress the common folk more. After all, one can never be careful enough.

The rich & powerful might ask themselves why they are spending resources on making robots which feed the common folk, or protect them, or heal them. They don’t turn off those services altogether. They are not monsters! They just neglect them more and more. After all, it does not make their life harder, so why not?

It is not like anyone made a decision that humans are not worth feeding. It is that the rich & powerful decided that whatever strikes their fancy is worth the resources more than feeding the not rich & powerful.

It doesn’t necessarily mean that these rich folks will live surrounded by only robots. They might have a bunch of real human girlfriends/boyfriends, hangers-on, court poets, and whoever else they fancy. But these “courts” can be drastically smaller than our current societies, and much more idiosyncratic.

Now of course you might say this is a bad sci-fi plot, and you might be right. There are a few assumptions under it.

One is that it is possible to make machines which can perform all these jobs (manufacturing, agriculture, transport, and security, to name the main ones). We can’t do that as of now. We think we will be able to do all of these with machines one day, but we might be wrong. If we are wrong, this scenario won’t ever come.

The other assumption is that control over these machines will be easy to centralise. This future won’t come if everyone can grow a robot army in their own kitchen, for example. But I think it is more likely that, at least initially, these technologies will require expensive factories and expensive IP to produce. And those tend to be concentrated in a few hands.

Who knows, maybe there are other assumptions too. I have to think more about that.


> we have no idea at all what happens economically when machines are stronger, faster and just generally better than people.

John Henry would like a word. https://en.wikipedia.org/wiki/John_Henry_(folklore)

> In the long term we might plausibly find an economic equilibrium where humans are not worth feeding

https://en.wikipedia.org/wiki/Holodomor

> We might find (imo probably will find) that a human-free military is more effective

The premise of movies from Dr Strangelove to War Games: a military consisting of an array of automatically launched nuclear missiles.

The worry I have is not so much the idea of an AI going entirely rogue against humans as the much more mundane one of it being weaponized by humans against other humans. The desire to do that is so obvious and so strong, whether it's autonomous weapons or trying to replace all art with slop or doing AI-redlining to keep out "undesirables". It's just that that maps onto existing battle lines, which "apolitical" AI bros (both pro and anti "safety") don't want to engage with.

(We all understand that a hypothetical conscious AI would (a) have politics of its own and (b) be fairly unrecognizable to human politics, except being linked to whatever the AI deemed to be its self-interest, yes?)


John Henry died immediately after winning his competition. He's like a 19th century Kasparov or Lee Sedol, a notable domino of human superiority falling forever.


Humans have tools; AI is no different. A lone human straight out of the womb might not be able to beat AI, but we’ve augmented our ability forever.

Guy beats powerful AI at Go with a computer.

The AlphaGo team used a computer to beat Lee Sedol.

https://www.ft.com/content/175e5314-a7f7-4741-a786-273219f43...


Before Kasparov was beaten, he was the best chess player.

Then we saw human-AI teams, "centaurs", which beat any AI and any human.

Now the best chess AI are only held back by humans.

We don't know if humans augmented by any given AI, general or special-purpose, will generally beat humans who just blindly listen to AI, but we do know it's not always worth having a human in the loop.


> Now the best chess AI are only held back by humans.

What does this mean?


That a human collaborating with an AI will generally lose to an AI playing alone.


But humans build those systems and are interested in the results. It’s still a human endeavour.


It sounds like you don’t understand what is claimed.

There is an often repeated claim that while the best AI has beaten the best human chess player, a combined human/AI player beats the purely AI player. The idea is that an AI and a human collaborating together will play better chess than just the AI alone. This arrangement (an AI and a human collaborating together to play as one) is often called a “centaur”, akin to the mythical horse/human hybrids.

The sentence you asked about, “Now the best chess AI are only held back by humans”, claims that these “centaurs” are no longer better players than the AI alone. That the addition of a human meddling with the thinking or the decision making of the AI makes the AI play worse chess than if the human were not present.

Sure, humans built the systems and they are interested in the results. Yes, it is a human endeavour. That is not what the claim disagrees with. It disagrees with the idea that a human meddling with the AI mid-game can improve the outcome.


"The idea is that an AI and a human collaborating together will play better chess than just the AI alone."

No, my message is that no AI lives in a vacuum. Nothing to do with hybrid people or such rubbish.


“According to researcher Scott Reynolds Nelson, the actual John Henry was born in 1848 in New Jersey and died of silicosis and not due to exhaustion of work.[4]”


A human-free military? Thanks, but I’ve already had my potent weed for the day. C’mon, have a filter for common sense.


You don’t need to get high to come up with this.

A human-free military is literally the wet dream of all generals and military higher-ups. As we speak there are dozens of mil-tech startups and huge enterprises working on exactly that.


It's not even a dream; militaries are already many orders of magnitude more capable per soldier than in the past. That ratio will keep increasing at an even faster rate with new technologies like AI.


The day we have a “human free military” is the day that every human on earth wakes up to find they just got enlisted without notice.

Game theory only works when you have skin in the game. You think the other nations’ machines are just going to wreck your machines and then go home?


I mean, yes? Wrecking the other nation’s machines means you’re back in the stone age?


I think common-sense filters are what prevent accurate predictions of black swan events, like an AI singularity or the COVID outbreak. People who are usually capable of reasoning get to a conclusion, e.g. AI doom, and then dismiss it out of hand on the grounds that the logic is more likely flawed because the prior is so low. But if you're confident in your reasoning, sometimes you have to accept that extremely low-prior things can happen.


Look, judging things by how they feel has gotten me this far. Why change now?


The pursuit of optimality would be one possible reason.

Judging things by how they feel is a big part of what brought climate change to our door.


It's gone 100% perfectly and you've never ever had a shower thought where you second guessed yourself?


A field of antipersonnel mines is already a human-free military.


Ukraine hooked up image recognition software to a machine gun because they kept losing gunners. They also can use the same tech to auto-track targets with kamikaze quad rotors. It's already happening.


> Take the subtitle “Human writing is full of lies that are difficult to disprove theoretically”, which only matters if you think that an AGI needs to learn from text, as opposed to conducting its own experiments or gaining insight from raw sensor data.

Worse than that. The author has not been paying attention at all, because at this point LLMs have clearly demonstrated that text indirectly encodes a lot of information about reality, and a model can learn those patterns. The amount of learning LLMs already extract from text is maybe not as big as Eliezer would have you believe Bayes' theorem allows for, but it is way further in that direction than the author believes.

There are good arguments made against the AI X-risk fears. This article does not make any of them. Rather, it just reiterates the usual basic misconceptions, mixed with a hefty dose of contempt towards people the author thinks are wrong.


Articles like this one make me think we are more likely doomed than previously thought.

If others who are skeptical about doom-like scenarios are making similar fundamental errors whilst acting as though they know for certain that the concern is unjustified, that just seems too consistent with the idea of arrogant humans wiping themselves out, unfortunately.


Someone with deep knowledge of both human nature and AI could do a lot of damage.


> if you think that an AGI needs to learn from text, as opposed to conducting its own experiments

The 2010s saw two basic approaches to building AI. In one corner there was the Google Brain approach of building NNs that worked with language, motivated by use cases like search and Google Translate. They trained models to predict the next words in sequences extracted from the internet. In the other corner there was deep reinforcement learning, mostly championed by DeepMind, where they trained agents to achieve goals in simplified "real world" environments like video games, based on "sensor data".
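For concreteness, "predict the next words" boils down to an objective roughly like the following. This is a minimal PyTorch-style sketch of my own, not any particular lab's code; the model and token IDs are placeholders.

    import torch.nn.functional as F

    def next_token_loss(model, token_ids):
        # token_ids: (batch, seq_len) integers from some tokenizer
        inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
        logits = model(inputs)                    # (batch, seq_len-1, vocab_size)
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),  # one prediction per position
            targets.reshape(-1),                  # the actual next token at each position
        )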

The former approach won completely. Who is still training RL agents to play games? Last I heard, DeepMind have been almost entirely re-allocated to implementing LLM integrations into Google products. It's been years since we heard boasts of beating some new video game benchmark, and the papers coming out on that topic now are mostly looking for ways to combine jointly trained vision/LLM models to reason through physical world scenarios.

We've run this experiment for over a decade now. All the results suggest that useful artificial intelligence comes from training on the internet, not birthing some blank slate that rediscovers fire from first principles. So this really isn't an unreasonable perspective for the author to take.

Nor is the first statement wrong. Have we already forgotten the one-sentence open letter by hundreds of assorted experts which said "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war"?

https://thebulletin.org/2023/12/policy-makers-should-plan-fo...

The only type of human-level AI we currently have is LLMs and related models, yet they claimed that mitigating extinction risk should be a "global priority". This is some sort of reverse motte-and-bailey strategy in which existing tech is used to create interest, and then the imaginary risks of a hypothetical technology are used to demand power.


"The idea is not that LLMs themselves become a threat."

Not my idea either, but I have heard and seen that idea expressed many times already. Usually the less the person knows about the actual tech, the more dramatic the expression; but even here I have seen that point of view.


I don't think so either, but I think that LLMs are closer to AGI than most people seem to think.

I think that if you let an LLM, even one of the models we have today, run in a loop, you get something that comes very close to consciousness. Consciousness, mind you, similar to that of a deaf, blind and otherwise sensory-deprived person, which in itself is a bit abstract already. Give it senses and ways to interact with the world, and I would argue that what you have is a type of AGI. By that I do not mean "equivalent to humans", which I don't think is a very good definition of intelligence, but a different branch of evolution, one which I can easily see surpassing human intelligence in the near future.


A feedback loop, sensors, and a goal; boom, AGI.

So many people are stuck on AI hallucinations. How many times have you seen something that, for the briefest of instants, you thought was something else?

"Orange circle on kitchen counter, brain says basketball, but, wait, more context, not a real circle, still orange, furry, oh, a cat; it is my orange cat" and all that takes place faster than you blink.

You didn't hallucinate: your brain ran a feedback loop. Based on past experience it filled in details, and as those details were validated they stayed (still looking at the kitchen counter, orange thing unexpected); then some details deviated from the brain's expectations (the orange ball is not behaving like a ball), so new context was immediately fed back in and understanding restored.

For AGI you have to have a loop with feedback. An LLM is one leg of the stool; now add a way to realize when something it generates fails to stand up to prior experience or existing need, and a way to gather more context so it can test and validate its models.

Really, that's how anything learns; how could it be anything else?
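To make that loop concrete, here is a minimal sketch of the kind of thing I mean. llm_generate, run_checks and gather_context are hypothetical stand-ins, not any real API:

    def agent_loop(goal, llm_generate, run_checks, gather_context, max_steps=10):
        context = [goal]
        for _ in range(max_steps):
            proposal = llm_generate(context)          # the LLM leg of the stool
            ok, feedback = run_checks(proposal)       # test against prior experience / sensors / need
            if ok:
                return proposal                       # proposal stood up to validation
            context.append(feedback)                  # otherwise feed the failure back in
            context.extend(gather_context(feedback))  # and gather more context before retrying
        return None                                   # give up after max_steps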


> Give it senses and ways to interact with the world

Much easier said than done.

You can take just one example, energy storage and usage, to illustrate how far off we are from giving AI real-world physical capabilities.

Human biological-based energy management capabilities are many orders of magnitude more advanced than current battery technology.

For any AGI to have seriously scary consequences, it needs to have a physical real-world presence that is at least comparable to humans. We're very far from that. Decades at least, if not centuries.

The more realistic near-term threat is a non physical presence AGI being used by bad actor humans for malevolent ends.


> For any AGI to have seriously scary consequences, it needs to have a physical real-world presence that is at least comparable to humans

That doesn't seem true to me at all.

In fact, I don't see why it needs to have any physical presence to be scary. You can wreak a lot of havoc with just an internet connection.


Can you cite examples? I haven't seen any personally.


I don't usually bookmark or otherwise save weird opinions, but I will do so next time. (Most of the threads here from when ChatGPT became big and hyped contained many of them, as far as I remember; maybe I'll take a nostalgic look.)


The paperclip maximizer, a.k.a. instrumental convergence theory, states not that an artificial intelligence would decide to kill humanity, but rather that, given sufficient power, it might inadvertently destroy humanity by e.g. using up all resources for computational power.

https://en.m.wikipedia.org/wiki/Instrumental_convergence


We already have these - they're called corporations.


Now imagine one working 1000x faster and not being as dumb as a mind running on bureaucracy is.


What does that have to do with the current LLMs?


I also don't think instrumental convergence is a risk from LLMs.

But: using up all resources for computational power might well kill a lot of — not all — humans.

Why? Imagine that some future version can produce human-quality output at human speed for an electrical power draw of 1 kW. At current prices this would cost about the same to run continuously as the UN abject poverty threshold, but it's also four times the current global electricity supply, which means that electricity prices would have to go up until human and AI labour were equally priced. But I think that happens at a level where most people stop being able to afford electricity for things like "keeping food refrigerated", let alone "keeping the AC or heat pumps running so it's possible to survive summer heatstroke or winter freezes".
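A quick back-of-envelope version of that, with my own rough round numbers plugged in (the price, headcount and supply figures are all assumptions; with these particular figures the multiple comes out somewhat below four, and it swings a lot depending on what you assume, but the shape of the problem is the same):

    POWER_KW = 1.0                # assumed draw per human-equivalent AI
    PRICE_PER_KWH = 0.10          # rough grid price, USD (assumption)
    POVERTY_LINE = 2.15           # UN extreme poverty threshold, USD/day

    daily_cost = POWER_KW * 24 * PRICE_PER_KWH        # ~2.40 USD/day
    print(f"{daily_cost:.2f} USD/day vs {POVERTY_LINE} USD/day poverty line")

    PEOPLE = 8e9                  # one AI instance per person (assumption)
    GLOBAL_TWH_PER_YEAR = 30_000  # rough current electricity generation (assumption)
    supply_tw = GLOBAL_TWH_PER_YEAR / (365 * 24)      # average supply, ~3.4 TW
    demand_tw = PEOPLE * POWER_KW / 1e9               # 1 kW each -> 8 TW
    print(f"{demand_tw / supply_tw:.1f}x current global electricity supply")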

Devil's in the details though; if some future AI only gets that good around 2030 or so, renewables are likely to be at that level all by themselves and to exceed it shortly after, and then this particular conflict doesn't happen. Hopefully at that point AI-driven tractors and harvesters etc. get us our food, UBI and so on, because that's only a good future if you don't need to work; if you do still need to work, you're uncompetitive and out of luck.


Yes. The whole of Reddit seemed to be signed up to that idea for about six months of 2023.


Yes, and can we please reinforce the following key idea: "the less the person knows about the actual tech".

I'm far from being an expert, but having been in tech for three decades, I wonder at most of the comments I hear about AI around restaurants, terraces, bars, family gatherings... the stuff I hear, oh boy.

It's easy to become a follower when you've never seen the origin...



