
Hello from an AI safety ninny. I have posted these two concerns multiple times, and no one has posted any counters to them.

1. There was https://www.youtube.com/watch?v=xoVJKj8lcNQ where they argued that from 2028 onward we will have AI elections, in which the candidate with the most computing power wins.

2. Propaganda produced by humans on a small scale killed 300,000 people in the US alone in this pandemic https://www.npr.org/sections/health-shots/2022/05/13/1098071... Imagine the next pandemic, when it will be produced on an industrial scale by LLMs. Literally millions will die of it.




None of this seems related to LLMs. Propaganda produced by humans is effective because of the massive scale of distribution; being able to produce more variations of the same talking points doesn't change the threat.


Being able to produce more variations of the same talking points sounds really useful for increasing the scale of distribution: you can much more easily maintain legitimate-looking sock-puppet accounts that appear to agree with your talking points organically.


I don't think it moves the needle much at all. At the end of the day, the scaling bottleneck is access to gullible or ideologically motivated eyeballs. The internet is already over-saturated with more propaganda than any individual can consume; adding more shit to the pile isn't going to suddenly convince a reasonable person that vaccines have microchips inside.


Have you seen Fox News?


The fix for neither lies in technology. And it doesn't lie in AI alignment either.

We cannot align AI because WE are not aligned. For 50% of Congress (you can pick your party as the other side, whichever one you are), the "AI creates misinformation" narrative sounds like "Oh great, I get re-elected more easily."

This is a governance and regulation problem - not a technology problem.

Big tech would love you to think that "they can solve AI" if we follow the China model of just forcing everything to go through big tech, which will regulate it pliantly in exchange for market protection. The more pressure there is on their existing growth models, the more excited they are about pushing this angle.

Capitalism requires constant growth, which unfortunately is very challenging given diminishing returns in R&D. You can only optimize the internal combustion engine for so long before the cost of each incremental improvement starts killing your profit, and the same is true of any other technology.

And so now we have Big Knife Company telling governments that it will only sell blunt knives so nobody will ever get hurt, and that this is the only way nobody gets hurt, because if there are dozens of knife stores, who is going to regulate those effectively?

So no, I don't think your concerns are actually related to AI. They are related to society, and you're buying into the narrative that we can fix it with technology if only we hand power over that technology to large, permanent gatekeepers.

The risks you flag are related to:

- Distribution of content at scale.

- Erosion of trust (anyone can buy a safety mark).

- Lack of regulation and enforcement around those risks.

- The dilemma of where the limits of free speech and tolerance lie.

Many of those have existed since Fox News.


What you are saying is neither here nor there.

All we need to do is ban generative AIs. Now. Before it's too late.

Simple.


“All we have to do is ban gunpowder and our castles will protect us”

“All we have to do is prohibit alcohol”

“All we have to do is prevent printing press ownership”

You cannot be that naive.

AI is actually even simpler than those technologies: the math is already out, and the GPUs to run it are powering every video game.

That train left the station somewhere around "Attention Is All You Need".

You are clinging to the illusion that humans can ban a technology that is power-relevant.


You should not worry about AI problems by 2028. Tens of millions worldwide will die from climate-related problems by that time. Literally nobody will care about the topic of AGI anymore.


You should worry about both problems. You're telling me that AI isn't going to improve its video capabilities enough in the next 4 years to make convincing deepfakes?


It already does. And I'm not worried. This is to be mitigated by law enforcement, not by forbidding AI.


How can you effectively enforce anything if the models are open source? How do you draw the line if a deepfake is not defamatory (making someone say something they didn't say) but just makes someone look silly, like https://en.wikipedia.org/wiki/Ed_Miliband_bacon_sandwich_pho...? Or what about using LLMs to scale up what happened with Cambridge Analytica and creating individualized campaigns and bots to influence elections?


You should handle it like any other crime. Why do you ask? It does not matter how good the gun is; what matters is who pulled the trigger.


Yes, but if we had the ability to download a gun from the internet anonymously, with no feasible way to identify the person downloading it, I think we would be right to be concerned. Especially if you could then fire that gun at someone anonymously.


>> Yes, but if we had the ability to download a gun from the internet anonymously, with no feasible way to identify the person downloading it

But you can. There have been blueprints for 3D-printed guns circulating for a decade now ...


And many countries ban the possession or distribution of those blueprints. The United States had a ban on re-publication of those 3D designs from 2018 until Trump reversed it, and even now it requires a license to post blueprints online.

And you failed to respond to the argument that you can anonymously post deepfakes with no way of tracing them back to you, so enforcement becomes impossible. You can't shoot someone with a 3D-printed gun and guarantee that there will be no trace.

Never mind the fact that it's not even clear it should be a crime in some cases. Should AI production of an Ed Miliband sandwich-style photo be banned?

And should using LLMs to reply to a user with personalized responses, based on data collected from their Facebook likes, be illegal? I don't think so, but doing it at mass scale sounds pretty scary.


>> And you failed to respond to the argument that you can anonymously post deepfakes

You can't post them anonymously; even Tor can't give you a 100% guarantee. Not for very long, and not if the law is after you, especially if AGI is on the side of law enforcement. Law enforcement will just become more expensive.

It's just a different scale of warfare. Nothing really changes except the amount, speed, and frequency of the casualties.

And any argument you make applies just as well to every corporation right now. Do you prefer the dystopian dictatorship of the corporations, or a balance of powers?


I don't like where we are headed at all. I acknowledge we face two dystopian options: either concentrate power in the hands of a few corporations you can hopefully regulate, or have open-source models, which ends up delivering significant power to people who cannot be effectively controlled. AGI law enforcement? How dystopian can you get?


How can you believe that regulating them will be enough? Here is the problem: "a few corporations whom you can hopefully regulate." When they have the power of a highly intelligent AGI with access to all available information on their side, there is no scenario in which you control them. They control you.

>> How dystopian can you get?

Oh, I have a very good imagination ... But I'm stupid, and I have hope ...


Open source or not makes no difference. It can run in China, Russia, Vietnam, or any other nation that doesn't ban it because it understands the economic power, and you pay them on Fiverr.

It’s already true for almost anything. If you need a deepfake, you can get it for a dollar on a Vietnamese web forum. Banning it won’t change a thing. Software piracy is “banned”. Sharing MP3s is “banned”. It makes no difference.

The fake news and misinformation on Facebook that influenced the US election were legal, AI or not.

To make it illegal you’d need to change the very power consensus of the US, so it won’t happen. People understand that well enough to scream at technology instead, because that way they retain the illusion that it can save them.

The only way to enforce it would be to force everyone to give up general-purpose compute and submit to constant client-side scanning.

If you are afraid enough of AI to not see how that’s a bad idea, you’re ripe for a fascist takeover.

Imagine you lived through the adoption of gunpowder. That’s where we are. And if you live in the US and see the failure to ban even physical guns, how can you have any illusions about AI?


It seems like you agree with me then?


100%



