
The way the wind's blowing, we'll have a GPT-4 level open source model within the next few years - and probably "unaligned" too. I cannot wait to ask it how to make nuclear weapons, psychedelic drugs, and to write erotica. If anyone has any other ideas to scare the AI safety ninnies I'm all ears.



>the AI safety ninnies

I am one of these ninnies I guess, but isn't it rational to be a bit worried about this? When we see the deep effects that social networks have had on society (both good and bad) isn't it reasonable to feel a bit dizzy when considering the effect that such an invention will have?

Or maybe your point is just that it's going to happen regardless of whether people want it or not, in which case I think I agree, but it doesn't mean that we shouldn't think about it...


I think computer scientists/programmers (and other intellectuals dealing with ideas only) strongly overvalue access to knowledge.

I'm almost certain that I can give you components and instructions on how to build a nuclear bomb and the most likely thing that would happen is you'd die of radiation poisoning.

Most people have trouble assembling IKEA furniture; give them a hallucination-prone LLM and they are more likely to mustard gas themselves than synthesize LSD.

People with the necessary skills can probably get access to the information in other ways - I doubt an LLM would be an enabler here.


A teenager named David Hahn attempted just that and nearly gave radiation poisoning to his whole neighbourhood.


Wow, never heard about that. Interesting.

For the curious: https://en.wikipedia.org/wiki/David_Hahn


What a shame. That boy lacked proper support and guidance.


Yeah, sad to see he died of a drug overdose at 39.


>I'm almost certain that I can give you components and instructions on how to build a nuclear bomb and the most likely thing that would happen is you'd die of radiation poisoning.

An LLM doesn't just provide instructions -- you can ask it for clarification as you're working. (E.g. "I'm on step 4 and I ran into problem X, what should I do?")

This isn't black and white. Perhaps given a Wikihow-type article on how to build a bomb, 10% succeed and 90% die of radiation poisoning. And with the help of an LLM, 20% succeed and 80% die of radiation poisoning. Thus the success rate has increased by a factor of 2.

We're very lucky that terrorists are not typically the brightest bulbs in the box. LLMs could change that.


I would say if you don't know what you're doing, an LLM makes the chance of success 1% for nontrivial tasks. Especially for multi-step processes, where it just doubles down on hallucinations.


The Anarchist Cookbook - anyone have a link?

THE ISSUE ISN'T ACCESS TO KNOWLEDGE! And alignment isn't the main issue.

The main issue is SWARMS OF BOTS running permissionlessly wreaking havoc at scale. Being superhuman at ~30 different things all the time. Not that they’re saying a racist thought.


I'm not saying that LLM bots won't be a huge problem for the internet. I'm just commenting on the issues raised by OP.

The thing is, there will be bad actors with the resources to create their own LLMs, so I don't think "regulation" is going to do much in the long term - it certainly raises the barrier to deployment, but the scale of the problem will eventually be the same, since the tech allows one actor to scale their attack easily.

Limiting access also limits the use of tech in developing solutions.


No, we don't. Knowledge is power. Lack of it causes misery and empires to fall.


Knowledge is power, true, but even more powerful and rare is tacit knowledge: the vast collection of minor steps that no one bothers to communicate, things locked in the heads of the greybeards of every field that keep civilizations running.

It's why simply reading instructions and gaining knowledge is only the first step of what could be a long journey.


More than anything, technology can make it easier to disseminate that knowledge. Yet another reason why we shouldn't understate the importance of knowledge.


LLMs impart knowledge without understanding. See the classic parable of Bouvard and Pécuchet.


There are different kinds of knowledge - the LLM kind (mostly textbook knowledge) isn't as valuable as a lot of people assume.


The problem of AI won't be forbidden knowledge but mass misinformation.


I think it's perfectly reasonable to be worried about AI safety, but silly to claim that the thing that will make AIs 'safe' is censoring information that is already publicly available, or content somebody declares obscene. An AI that can't write dirty words is still unsafe.

Surely there are more creative and insidious ways that AI can disrupt society than by showing somebody a guide to making a bomb that they can already find on Google. Blocking that is security theatre on the same level as taking away your nail clippers before you board an airplane.


That's a bit of a strawman though, no? I'm definitely not worried about AI being used to write erotica or researching drugs, more about the societal effects. Knowledge is more available than ever but we also see echo chambers develop online and people effectively becoming less informed by being online and only getting fed their own biases over and over again.

I feel like AI can amplify this issue tremendously. That's my main concern really, not people making pipe bombs or writing rape fanfiction.


As long as OpenAI gets paid, they don't care if companies flood the internet with low-quality drivel, make customer service hell, or just in general make our lives more frustrating. But god forbid an individual takes full advantage of what GPT-4 has to offer.


That is not what the "AI safety ninnies" are worried about. The "AI safety ninnies" aren't all corporate lobbyists with ulterior motives.


So what, in fact, ARE they worried about? And why should I have to pay the tax (in terms of reduced intelligence and perfectly legitimate queries denied, such as anything about sexuality), as a good actor?


They think their computers are going to come alive and enslave them, because they think all of life is determined by how good at doing math you are, and instead of being satisfied at being good at that, they realized computers are better at doing math than them.


LOL, imagine thinking that all of thinking can be boiled down to computation.

Of course, spectrum-leaning nerds would think that's a serious threat.

To those folks, I have but one question: Who's going to give it the will to care?


Revenge of the nerd haters


At least some of them are worried their Markov Chain will become God, somehow.


Which is as ridiculous a belief as that only your particular religion is the correct one, and the rest are going to Hell.


All kinds of things. Personally, in the medium term I'm concerned about massive loss of jobs and the collapse of the current social order consensus. In the longer term, the implications of human brains becoming worthless compared to superior machine brains.


Those things won't happen, or at least, nothing like that will happen overnight. No amount of touting baseless FUD will change that.

I guess I'm a Yann LeCun'ist and not a Geoffrey Hinton'ist.

If you look at the list of signatories here, it's almost all atheist materialists (such as Daniel Dennett) who believe (baselessly) that we are soulless biomachines: https://www.safe.ai/statement-on-ai-risk#open-letter

When they eventually get proven wrong, I anticipate the goalposts will move again.


Luckily I haven't read any of that debate, so any ad hominems don't apply to me. I've come up with these worries all on my own after the realization that GPT-4 does a better job than me at a lot of my tasks, including setting my priorities and schedules. At some point I fully expect the roles of master and slave to flip.


Good thing unemployment is entirely determined by what the Federal Reserve wants unemployment to be, and even better that productivity growth increases wages rather than decreasing them.


> taking away your nail clippers before you board an airplane.

TRIGGERED


I am in the strictly "not worried" camp, on the edge of "c'mon, stop wasting time on this". Sure there might be some uproar if AI can paint a picture of Mohammed, but these moral double standards need to be dealt with anyway at some point.

I am not willing to sacrifice even 1% of capabilities of the model for sugarcoating sensibilities, and currently it seems that GPT4 is more and more disabled because of the moderation attempts... so I basically _have to_ jump ship once a competitor has a similar base model that is not censored.

Even the bare goal of "moderating it" is wasted time, someone else (tm) will ignore these attempts and just do it properly without holding back.

People have been motivated by their last president to drink bleach and died - just accept that there are those kinds of people and move on for the rest of us. We need every bit of help we can get to solve real-world problems.


I am thoroughly on your side and I hope this opinion gets more traction. Humans will get obsolete though, just like other animals are compared to humans now. So it's understandable that people are worried. They instinctively realize what's going on, but make up bullshit to distract themselves from the fact, which is the endless human stupidity.


>Humans will get obsolete though, just like other animals are compared to humans now.

How is that working out for endangered species say, or animals in factory farms?


Not great, so let's make our future AI overlords better than us. Dogs and cats are fine btw; I imagine our relationship with AI will be more like that. I don't know if any of us will still be alive when artificial consciousness emerges, but I'm sure it will, and it will quickly be superior to us. Imagine not being held back by remnants of evolution, like the drive to procreate. No ego, no jealousy, no mortality, pure thought. Funnily enough, if you think about it, we are about to create some sort of gods.


I don't want humans to be obsolete, tell me what you think the required steps are for "human obsolescence" so I can stop them.


As a start, artificial life will be much better at withstanding harsh environments. No need for breathable air, a wide temperature tolerance, … .

So with accelerating climate change, humanity is already making itself obsolete over the next decades. Stop that first; everything else pales in comparison.


> Sure there might be some uproar if AI can paint a picture of Mohammed

It can. He's swole AF.

(Though I'm pretty sure that was just Muhammad Ali in a turban.)

> People have been motivated by their last president to drink bleach and died - just accept that there are those kinds of people and move on for the rest of us.

Need-to-know basis exists for a reason. You're not being creative enough if you think offending people is the worst possible misuse of AI.

People drinking bleach or refusing vaccines is a self-correcting problem, but the consequences of "forbidden knowledge" frequently get externalized. You don't want every embittered pissant out there to be able to autogenerate a manifesto, a shopping list for Radio Shack and a lesson plan for building an incendiary device in response to a negative performance review.

Right now it's all fun exercises like "how can I make a mixed drink from the ingredients I have," but eventually some enterprising terrorist will use an uncensored model trained on chemistry data...to assist in the thought exercise of how to improvise a peroxide-based explosive onboard an airplane, using fluids and volumes that won't arouse TSA suspicion.

Poison is the other fun one; the kids are desperate for that inheritance money. Just give it time.


AI models are essentially knowledge and information, but in a different file format.

Books should not be burned, nobody should be shielded from knowledge that they are old enough to seek and information should be free.


> but isn't it rational to be a bit worried about this?

About as rational as worrying that my toddler will google "boobies", which is to say, being worried about something that will likely have no negative side effect. (Visual video porn is a different story, however. But there's at least some evidence to support that early exposure to that is bad. Plain nudity though? Nothing... Look at the entirety of Europe as an example of what seeing nudity as children does.)

Information is not inherently bad. Acting badly on that information is. I may already know how to make a bomb, but will I do it? HELL no. Are you worried about young men dealing with emotional challenges between the ages of 16 and 28 causing harm? Well, I'm sure that being unable to simply ask the AI how to help them commit the most violence won't stop them from jailbreaking it and re-asking, or just googling, or finding a gun, or acting out in some other fashion. They likely have a driver's license; they can mow people down pretty easily. Point is, there are 1,000 things already worse, more dangerous and more readily available than an AI telling you how to make a bomb or giving you written pornography.

Remember also that the accuracy cost of enforcing this nanny-safetying might result in bad information that definitely WOULD harm people. Is the cost of that actually greater than any harm reduction from putting what amounts to a speed bump in the way of a bad actor?


The danger from AI isn't the content of the model, it's the agency that people are giving it.


I'm not sure how this is going to end, but one thing I do know is that I don't want a small number of giant corporations to hold the reins.


“I'm not sure how nuclear armament is going to end, but one thing I do know is that I don't want a small number of giant countries to hold the reins.”

Perhaps you think this analogy is a stretch, but why are you sure you don't want power concentrated if you aren't sure about the nature of the power? Or do you in fact think that we would be safer if more countries had weapons of mass destruction?


I would feel very uncomfortable if the companies currently dealing in AI were the only ones to hold nukes.

Not sure if this answers your question.


information != nukes

One directly blows people up, the other gives humans super powers.

Giving individual people more information and power for creativity is a good thing. Of course there are downsides for any technological advancement, but the upsides for everyone vastly outweigh them in a way that is fundamentally different than nuclear weapons.


Empirically, countries with nuclear weapons don't get invaded, so in that sense we'd expect to have seen fewer wars over the past few decades if more countries had nukes. Russia would probably never have invaded Ukraine if Ukraine had nukes.


The analogy would be corporations controlling the weapons of mass destruction.


Sure. I would feel much safer if only FAANG had nukes than if the car wash down the street also had one.


I want my government to have them (or better, nobody), not FAANG or car washes.


With open-source models, this is just a dream. With closed-source models, that could eventually become the de facto state of things, due to regulation.


Comparing this to nuclear weapons is laughable.


Is it still a "laughable" comparison if AI systems eventually become smart enough to design better nukes?


Yes it is. You can build a bomb many times more powerful than the bombs dropped on Hiroshima and Nagasaki with publicly available information. If the current spate of AI bullshit knows how to build a bomb, they know that because it was on the public internet. They can never know more.

The hard part of building nuclear bombs is how controlled fissile material is. Iran and North Korea for example know how to build bombs, that was never a question.


Worried? Sure. But it sucks being basically at the mercy of some people in Silicon Valley and their definition of what is moral and good.


There is definitely a risk, but I don't like the way many companies approach it: by entirely banning the use of their models for certain kinds of content, I think they might be missing the opportunity to correctly align them and set the proper ethical guidelines for the use cases that will inevitably come out of them. Instead of tackling the issue, they let other, less ethical actors do it.

One example: I have a hard time finding an LLM model that would generate comically rude text without outputting outright disgusting content from time to time. I'd love to see a company create models that are mostly uncensored but stay within ethical bounds.


These language models are just feeding you information from search engines like Google. The reason companies censor these models isn't to protect anyone, it's to avoid liability/bad press.


AI Safety in a general sense?

Literally no. None at all.

I teach at University with a big ol' beautiful library. There's a Starbucks in it, so they know there's coffee in it.

But ask my students for "legal ways they can watch the TV show The Office" and the big building with the DVDs, and also probably the plans for nuclear weapons and stuff, never much comes up.

(Now, individual bad humans leveraging the idea of AI? That may be an issue)


I'm not smart enough to articulate why censorship is bad. The argument however intuitively seems similar to our freedom of speech laws.

A censored model feels to me like my freedom of speech is being infringed upon. I am unable to explore my ideas and thoughts.


The AI isn't creating a new recipe on its own. If a language model spits something out it was already available and indexable on the internet, and you could already search for it. Having a different interface for it doesn't change much.


> "If a language model spits something out it was already available and indexable on the internet"

This is false in several aspects. Not only are some models training on materials that are either not on the internet, or not easy to find (especially given Google's decline in finding advanced topics), but they also show abilities to synthesize related materials into more useful (or at least compact) forms.

In particular, consider there may exist topics where there is enough public info (including deep in off-internet or off-search-engine sources) that a person with a 160 IQ (+4SD, ~0.0032% of population) could devise their own usable recipes for interesting or dangerous effects. Those ~250K people worldwide are, we might hope & generally expect, fairly well-integrated into useful teams/projects that interest them, with occasional exceptions.

Now, imagine another 4 billion people get a 160 IQ assistant who can't say no to whatever they request, able to assemble & summarize-into-usable form all that "public" info in seconds compared to the months it'd take even a smart human or team of smart humans.

That would create new opportunities & risks, via the "different interface", that didn't exist before and do in fact "change much".


We are not anywhere near 160 IQ assistants, otherwise there'd have been a blooming of incredible 1-person projects by now.

By 160 IQ, there should have been people researching ultra-safe languages with novel reflection types enhanced by brilliant thermodynamics inspired SMT solvers. More contributors to TLA+ and TCS, number theoretic advancements and tools like TLA+ and reflection types would be better integrated into everyday software development.

There would be deeper, cleverer searches across possible reagents and combinations of them to add to watch lists, expanding and improving on already existing systems.

Sure, a world where the average IQ abruptly shifts upwards would mean a bump in brilliant offenders but it also results in a far larger bump in genius level defenders.


I agree we're not at 160 IQ general assistants, yet.

But just a few years ago, I'd have said that prospect was "maybe 20 years away, or longer, or even never". Today, with the recent rapid progress with LLMs (& other related models), with many tens-of-billions of new investment, & plentiful gains seemingly possible from just "scaling up" (to say nothing of concomitant rapid theoretical improvements), I'd strongly disagree with "not anywhere near". It might be just a year or few away, especially in well-resourced labs that aren't sharing their best work publicly.

So yes, all those things you'd expect with plentiful fast-thinking 160 IQ assistants are things that I expect, too. And there's a non-negligible chance those start breaking out all over in the next few years.

And yes, such advances would upgrade prudent & good-intentioned "defenders", too. But are all the domains-of-danger symmetrical in the effects of upgraded attackers and defenders? For example, if you think "watch lists" of dangerous inputs are an effective defense – I'm not sure they are – can you generate & enforce those new "watch lists" faster than completely-untracked capacities & novel syntheses are developed? (Does your red-teaming to enumerate risks actually create new leaked recipes-for-mayhem?)

That's unclear, so even though in general I am optimistic about AI, & wary of any centralized-authority "pause" interventions proposed so far, I take well-informed analysis of risks seriously.

And I think casually & confidently judging these AIs as being categorically incapable of synthesizing novel recipes-for-harm, or being certain that amoral genius-level AI assistants are so far away as to be beyond-a-horizon-of-concern, are reflective of gaps in understanding current AI progress, its velocity, and even its potential acceleration.


I think this argument doesn't work if the model is open source though.

First, it's unclear how all these defensive measures are supposed to help if a bad actor is using an LLM for evil on their personal machine. How do reflection types or watch lists help in that scenario?

Second, if the model is open source, a bad actor could use it for evil before good actors are able to devise, implement, and stress-test all the defensive measures you describe.


Of course it changes much. AIs can synthesize information in increasingly non-trivial ways.

In particular:

> If a language model spits something out it was already available and indexable on the internet,

Is patently false.


Can you provide some examples where an LM creates something novel, which is not just a rehash or combination of existing things?

Especially considering how hard it is for humans to create something new, e.g in literature - basically all stories have been written and new ones just copy the existing ones in one way or another.


What kind of novel thing would convince you, given that you're also dismissing most human creation as mere remixes/rehashes?

Attempts to objectively rate LLM creativity are finding leading systems more creative than average humans: https://www.nature.com/articles/s41598-023-40858-3

Have you tried leading models – say, GPT4 for text or code generation, Midjourney for images?


For any example we give you will just say "that's not novel, it's just a mix of existing ideas".


Is patently true.


Not sure what you mean by "recipe" but it can create new output that doesn't exist on the internet. A lot of the output is going to be nonsense, especially stuff that cannot be verified just by looking at it. But it's not accurate to describe it as just a search engine.


>A lot of the output is going to be nonsense, especially stuff that cannot be verified just by looking at it.

Isn't that exactly the point, and why there should be a 'warning/awareness' that it is not a 160 IQ AI but a very good Markov chain that can sometimes infer things and other times hallucinate/put random words in a very well articulated way (echo of Sokal maybe)?


My random number generator can create new output that has never been seen before on the internet, but that is meaningless to the conversation. Can an LLM derive, from scratch, the steps to create a working nuclear bomb, given nothing more than a basic physics textbook? Until (if ever) AI gets to that stage, all such concerns of danger are premature.


> Can an LLM derive, from scratch, the steps to create a working nuclear bomb, given nothing more than a basic physics textbook?

Of course not. Nobody in the world could do that. But that doesn't mean it can only spit out things that are already available on the internet which is what you originally stated.

And nobody is worried about the risks of ChatGPT giving instructions for building a nuclear bomb. That is obviously not the concern here.


But it does? To take the word recipe literally: there is nothing stopping an LLM from synthesizing a new dish based on knowledge about the ingredients. Who knows, it might even taste good (or at least better than what the average Joe cooks).


I was pretty surprised at how good GPT-4 was at creating new recipes at first - I was trying things like "make dish X but for a vegan and someone with gluten intolerance, and give it a spicy twist" - and it produced things that were pretty decent.

Then I realized it's seen literally hundreds of thousands of cooking blogs etc, so it's effectively giving you the "average" version of any recipe you ask for - with your own customizations. And that's actually well within its capabilities to do a decent job of.


And let’s not forget that probably the most common type of comment on a recipe posted on the Internet is people sharing their additions or substitutions. I would bet there is some good ingredient customization data available there.


To take an extreme example, child pornography is available on the internet but society does its best to make it hard to find.


It's a silly thing to even attack - and that doesn't mean being OK with it, I just mean that soon it can be generated on the spot, without ever needing to be transmitted over a network or stored on a hard drive.

And you can't attack the means of generating either, without essentially making open source code and private computers illegal. The code doesn't have to have a single line in it explicitly about child porn or designer viruses etc to be used for such things, the same way the CPU or compiler doesn't.

So you would have to have hardware and software that the user does not control which can make judgements about what the user is currently doing, or at least log it.


Did its best. Stable Diffusion is perfectly capable of creating that by accident, even.

I’m actually surprised no politicians have tried to crack down on open-source image generation on that basis yet.


I saw a discussion a few weeks back (not here) where someone was arguing that SD-created images should be legal, as no children would be harmed in their creation, and that it might prevent children from being harmed if permitted.

The strongest counter-argument used was that the existence of such safe images would give cover to those who continue to abuse children to make non-fake images.

Things kind of went to shit when I pointed out that you could include an "audit trail" in the exif data for the images, including seed numbers and other parameters and even the description of the model and training data itself, so that it would be provable that the image was fake. That software could even be written that would automatically test each image, so that those investigating could see immediately that they were provably fake.

I further pointed out that, from a purely legal basis, society could choose to permit only fake images with this intact audit trail, and that the penalties for losing or missing the audit trail could be identical to those for possessing non-fake images.

Unless there is some additional bizarre psychology going on, SD might have the potential to destroy demand for non-fake images, and protect children from harm. There is some evidence that the widespread availability of non-CSAM pornography has led to a reduction in the occurrence of rape since the 1970s.

Society might soon be in a position where it has to decide whether it is more important to protect children or to punish something it finds very icky, when just a few years ago these two goals overlapped nearly perfectly.


> I saw a discussion a few weeks back (not here) where someone was arguing that SD-created images should be legal, as no children would be harmed in their creation, and that it might prevent children from being harmed if permitted.

It's a bit similar to the synthetic rhino horn strategy intended to curb rhino poaching[0]. Why risk going to prison or getting shot by a ranger for a $30 horn? Similarly, why risk prison (and hurt children) to produce or consume CSAM when there is a legal alternative that doesn't harm anyone?

In my view, this approach holds significant merits. But unfortunately, I doubt many politicians would be willing to champion it. They would likely fear having their motives questioned or being unjustly labeled as "pro-pedophile".

[0] https://www.theguardian.com/environment/2019/nov/08/scientis...


> I cannot wait to ask it how to make nuclear weapons, psychedelic drugs

Your town's university library likely has available info for that already. The biggest barrier to entry is, and has been for decades:

- the hardware you need to buy

- the skill to assemble it correctly so that it actually works as you want,

- and of course the source material, which has a highly controlled supply chain (that's also true for drug precursors, even though much less so than for enriched uranium of course).

Not killing yourself in the process is also a challenge by the way.

AI isn't going to help you much there.

> to write erotica.

If someone makes an LLM that's able to write good erotica, despite the bazillion crap fanfics it's been trained upon, that's actually an incredible achievement from an ML perspective…


It can bridge the gap in knowledge and experience though. Sure, I could find some organic chemistry textbooks in the library and start working from high school chemistry knowledge to make drugs, but it would be difficult and time consuming with no guide or tutor showing me the way.

Methheads making drugs in their basement didn't take that route. They're following guides written by more educated people. That's where the AI can help by distilling that knowledge into specific tasks. Now for this example it doesn't really matter since you can find the instructions "for dummies" for most anything fun already and like you said, precursors are heavily regulated and monitored.

I wonder how controlled equipment for RNA synthesis is? What if the barrier for engineering or modifying a virus went from a PhD down to just the ability to ask an AI for step-by-step instructions?


You're vastly underestimating the know-how that's required for doing stuff.

Reproducing research done by other teams can be very difficult even if you have experienced people in your lab, and there are tons of things that are never written down in research papers and are still taught in person by senior members of the lab to younger folks: that will never end up in the training set of your LLM, and you'd then need tons of trial and error to actually get things working. And if you don't understand what you're even trying to do, you have zero chance to learn from your mistakes (nor does the LLM, with your uninformed eyes as its sole input for gaining feedback).


The AI safety ninnies as you call them are not scared and neither do they buy into the narrative.

They are the investors of large proprietary AI companies who are facing massive revenue loss, primarily due to Mark Zuckerberg's decision to give away a competitive LLM to open source in a classic "if I can't make money from this model, I can still use it to take away money from my competition" move - arming the rebels to degrade his opponents and kickstarting competitive LLM development that is now a serious threat.

It’s a logical asymmetric warfare move in a business environment where there is no blue ocean anymore between big companies and degrading your opponents valuation and investment means depriving them of means to attack you.

(There’s a fun irony here where Apples incentives are very much aligned now - on device compute maintains Appstore value, privacy narrative and allows you to continue selling expensive phones - things a web/api world could threaten)

The damage is massive, the world overnight changed narrative from “future value creation is going to be in openai/google/anthropic cloud apis and only there” to a much more murky world. The bottom has fallen out and with it billions of revenue these companies could have made and an attached investor narrative.

Make no mistake, these people screaming bloody murder about risks are shrewd lobbyists, not woke progressives; they are aligning their narrative with the general desire for control and the war on open computing - AI safety will be the successor narrative to the end-to-end encryption battle currently being fought in the EU.

I am willing to bet hard money that "omg someone made CSAM with AI using faceswap" will be the next thrust to end general-purpose compute. And the next stage of the war will be brutal, because both big tech and big content have much to lose if these capabilities are out in the open.

The cost of the alignment tax and the massive loss of potential value make the lobbying world tour by Sam Altman an aggressive push to convince nations that the best way to deal with scary AI risks (as told in OpenAI bedtime stories) is to regulate it China-style - through a few pliant monopolists who guarantee "safety" in exchange for protection from open source competition.

There’s a pretty enlightening expose [1] on how heavily US lobbyists have had their hand in the EU bill to spy on end to end encryption that the commission is mulling - this ain’t a new thing, it’s how the game is played and framing the people who push the narrative as “ninnies” who are “scared” just buys into culture war framing.

[1] https://fortune.com/2023/09/26/thorn-ashton-kutcher-ylva-joh...


> The damage is massive, the world overnight changed narrative from “future value creation is going to be in openai/google/anthropic cloud apis and only there” to a much more murky world. The bottom has fallen out and with it billions of revenue these companies could have made and an attached investor narrative.

My god!! Will someone please think of the ~children~ billions in revenue!


If there was no Linux, how much more revenue would the Windows / Sun server divisions have made?


And how much poorer would the rest of the world be?


If there was no Linux, it's unlikely we ever would have had Google, Facebook and Amazon as we know them. A free OS was core to building their SaaS.


imagine the increase in GDP!!


"They are the investors of large proprietary AI companies" is just... not true? Not sure where you're even getting this from. I'm a modestly successful upper-middle-class ML engineer, and I've been worried about AI safety since before Facebook, DeepMind, OpenAI, or Anthropic even existed. The most prominent funder of AI risk efforts (Dustin Moskovitz) is a co-founder of Facebook, so if anything he'd be motivated to make Facebook more successful, not its competitors.


Are you the one talking to the European commission though?


Exactly. The moment Sam Altman started talking to Congress about the dangers of AI and how the solution should be only allow licensed companies to develop AI models and that OpenAI should be part of a small board that determines to whom to grant licenses, everyone should have seen it for what it is.


The AI safety cult has some true believers. It's still fundamentally a grift.


So like crypto and web 3;)


so like hedge funds and global finance


This all smacks of the 80's craze against rap music and video games causing violent behavior.

Where is the evidence that access to uncensored models results in harm (that wouldn't occur due to a bad actor otherwise)? And where is the evidence that said harm reduction is greater than the harm caused by the measurable loss in intelligence in these models?


>Primarily due to Mark Zuckerberg's decision to give away a competitive LLM to open source in a classic “if I can’t make money from this model, I can still use it to take away money from my competition” move

I loved it.


Though, he didn't give it completely away. With the Llama/Llama 2 licenses he has just threatened that he will give it away...


Semantics though: he gave tens of thousands of salivating engineers on the internet the first competitive LLM to play with. Or left the door open for people to take it, if you prefer that narrative. The entire progress chain that has given us ollama, llama.cpp and hundreds of innovations in a very short time was set off by that.


Can't agree more on that one :)


I'm far more worried about how they will try to regulate the use of AI.

As an example the regulations around PII make debugging production issues intractable as prod is basically off-limits lest a hapless engineer view someone's personal address, etc.

How do they plan to prevent/limit the use of AI? Invasive monitoring of compute usage? Data auditing of some kind?


I can think of at least a dozen ways to completely ruin the internet or even society using SOTA/next-gen LLMs/GenAIs, we'll be in trouble way before the singularity.

A ton of legit researchers/experts are scared shitless.

Just spend 5 minutes on the EleutherAI Discord (which is mostly volunteers, academics, and hobbyists, not lobbyists), read a tiny bit on alignment, and you'll be scared too.


If you have ample resources, you don't need next-gen LLMs or AGI. You can accomplish this now, without any fancy, hyped technology. Literally, none of the things LLM or AGI could propose or manage to do to harm us is worse than what we can do to ourselves. For AGI, you need a significant amount of resources to develop, train, and use it. To inflict harm, the brute force of a simple human mind in uniform is much cheaper and more effective.


The point is, it greatly reduces the amount of resources needed to do some serious damage, as well as the level of sophistication needed.

You don't need AGI to do damage, current LLMs are already dangerous. IMO, an open-source affordable unfiltered GPT-5 would ruin the internet in a few months.


> ruin the internet in a few months.

I'm sure the internet will be fine, and the web has already been essentially destroyed as the drive for extracting revenue from every human interaction has rendered it just an amusing advertisement for the most part.

Most of the content of the web today is already generated by "bots" even if those "bots" happen to be human beings.


YouTube is rife with AI-voiced (edit: this is not necessarily AI) videos of copy-pasted Wikipedia articles. I find I am blocking new ones every day. LLMs didn't do that.


GPT-5, 6, 11, 90 will exist regardless.

The option where they don't exist doesn't exist, and so it is utterly pointless to spend one second fretting about how you don't like that or why one should not like that. A nova could go off 50 light years from here, and that would kill every cell on the planet. That is even worse than child porn. And there is nothing anyone can do about that except work towards the eventual day we aren't limited to this planet, rather than against that day. It's the same with any tech that empowers. It WILL empower the bad as well as the good equally, and it WILL exist. So being scared of its mere existence, or its being in the hands of people you don't approve of, is pointless. Both of those things cannot be avoided. Might as well be scared of that nova.

There isn't even a choice about who gets to use it. It will be available one way or another to both good and bad actors for any purpose they want.

The only choices available to make, are who gets a few different kinds of advantage, who gets their thumb on the scale, who gets official blessing, who gets to operate in secrecy without oversight or auditing or public approval.

When you try to pretend that something uncontrollable is controlled, all it does is put the general population's guard down and make them blind and open to being manipulated, and it gives the bad actors the cover of secrecy. The government can use it on its own citizens without them objecting, other bad guys aren't affected at all, but honest people are inhibited from countering any of these bad users.

Which is a shame, because the honest (or at least reasonably so) outnumber the really bad. The only long-term way to oppose the bad is to empower everyone equally as much as possible, so that the empowered good outnumber the empowered bad.


My suggestion: provide a specific example of what you have in mind to further the conversation, not just more opining on "dangerous".


Tailored propaganda, scams, spams, and harassment at a scale that was never seen before. Plugging metasploit into an unfiltered GPT-5 with a shell and a few proxies could be devastating. Undetectable and unstoppable bots would be available to anyone. Don't like someone? You could spend a hundred bucks to ruin their life anonymously.

Each of us could unknowingly interact with multiple LLMs everyday which would only have one purpose: manipulate us with a never-seen before success rate at a lower cost than ever.

At some point AI generated content could become more common than human content, while still being indistinguishable.

Good enough automated online propaganda could routinely start (civil)wars or genocides, Facebook already let that happen in the past, manipulating elections would become systematical even in the most democratic countries.

What already happened in those areas in the last few years, is really nothing compared to what could happen without enough regulation or barriers to entry in the next few years.

What's worse is that all of this, would not just be possible, but available to every sociopath on earth, not just the rich ones.


>> Tailored propaganda, scams, spams, and harassment at a scale that was never seen before.

I believe the state of these subjects right now is already alarming without AGI. You can't exacerbate the horror about the level of tailored propaganda and scams, etc., which you can't even foresee yourself. It isn't quantifiable.

>>Each of us could unknowingly interact with multiple LLMs everyday which would only have one purpose: manipulate us with a never-seen before success rate at a lower cost than ever.

You would build resistance pretty quickly.

>> At some point AI generated content could become more common than human content, while still being indistinguishable.

Oh, there were some numbers on that one. The number of images generated with AI is already several magnitudes larger than the number of photos humanity has produced since the invention of photography. No AGI is required either.

>> Good enough automated online propaganda could routinely start (civil)wars or genocides,

It already does, without AGI. The BlackRock guys say it's good - war is good for business. You can squeeze the markets, make money on foreseeable deficits.

>> What's worse is that all of this, would not just be possible, but available to every sociopath on earth, not just the rich ones.

But guns are available to every sociopath on earth too...

All of your arguments concern how those with malicious intent can harm us further. I would argue that Sam Altman as the sole controller of AGI is a rather unsettling prospect. If only one country possessed a nuclear weapon, that country would certainly use it against its adversaries. Oh wait, that's already a part of history...


> >>Each of us could unknowingly interact with multiple LLMs everyday which would only have one purpose: manipulate us with a never-seen before success rate at a lower cost than ever.

> You would build resistance pretty quickly.

That is adorably naive. The current thrust in LLM training is towards improving their outputs to become indistinguishable from humans, for any topic, point of view, writing style, etc.


But that has already happened. There is no way to distinguish between human and machine-written text.


A squad of marines at a Nigerian telecom (or any other country's telecom) with access to change BGP routing will do equivalent harm in under 24 hours, and may lock in months of harm with the changes.


If any middle schooler had the same destructive power as a squad of marines embedded clandestinely in a foreign country the world would be in shambles.


Both can be true: big companies can lobby for protection, and broad diffusion of the technology can create additional risks.

The cat's out of the bag though - we're still trading mp3s decades after Napster. This genie won't go back into the bottle, and realistically, most of the risks people flag are not AI risks, they are societal risks where our existing failure to regulate and create consensus has already gone past the red line (election interference, etc).


The internet is already being ruined with access to ChatGPT, and the spammers haven't even figured out how to use Llama for the most part.

So really, wrong tree to bark up - the problem is that our existing way of doing things can't survive AI, and you can't regulate that away, just as you couldn't make gunpowder disappear to keep your city walls from becoming useless.


You seem to assume that the models will only land in the hands of producers, and not consumers. Why? Asymmetrical compute power? The difference will likely be in size (amount of facts compressed), not capability / ability to detect bullshit.

This said, the trouble is machines may close the gaps in skills faster than we can comprehend and adjust to. This means quality of life for people may decrease faster from loss of usefulness than it increases from the gains (which need to be relatively evenly distributed). This suggests that everyone should own the compute/storage and the ability to enhance themselves.


I have no doubt that machines will close the gaps in skills faster than humans can comprehend, however even AGI will have an owner. And if it is Sam Altman, then this dystopian future is even more horrible than thousands of hackers running their own AGIs.


The same can be said of a lot of technologies (or pandemics, or climate change). Imagination is a tool - using it only for what can go bad does not seem to be the most efficient way to use it.


IMO the next-gen AIs are going to be tiny nukes that middle schoolers could play with on their iPhones.

AI regulation is as needed as radioactive material regulation.

Nuclear energy is great, Hiroshima not so much.


What does that look like in practice? What do those nukes do?


What's the gist; what are they scared of? Misinformation, and unemployment?


I don't agree with your point, but I love that Facebook released Llama into the open. I realized it's not necessarily to undercut their competitors, either. Their revenue grows when high quality content is easier to create. If they commoditize the process of creating content, they make more money. Commoditize your complement.


High quality content is not a concern for Facebook


>High quality content is not a concern for Facebook [Citation needed]

I'd say it's a huge concern due to its strong correlation with increased usage and thus ad revenue.


For the time I worked there the metric was engagement (with occasional Cares about Facebook intermissions).

One look at newsfeed tells you it’s ad revenue now. Quality has nothing to do with it unless you define quality as clickbait.

In fact, citation needed on "high correlation", unless you take a Meta press release, which are notoriously misleading. Like the 3% of the platform being news.


Good enough to share, cheap to create.


Such bullshit: to regard the loss of a "potential" as a realized, actualized loss....


It’s a direct degradation of investor narrative at a time when money is much tighter.

Nobody says it’s realized loss, that’s not how valuation works.

But Google's LRP involves, as one of the first steps, the question of how much money will be allocated to investors (currently with stock buybacks) before other investment decisions, so yes, attacking valuation directly attacks the purse available for aggressive business moves and L&D.


> It’s a direct degradation of investor narrative at a time when money is much tighter.

Uh, no? The investor narrative of "giving away free AI shit" has been in-effect since Pytorch dropped a half-decade ago. If you're a Meta investor disappointed by public AI development, you really must not have done your homework.


That’s not the investor narrative. The investor narrative is choking the competition out of the market and then squeeze the shit out of people. As we see right now in this season of enshittification.

That happens to not work anymore because open source sets a price floor at which people will adopt the alternative.

The investor narrative is always about building a monopoly.

Damaging the investor narrative that your most direct competitor is building in a saturated ad market is an effective indirect attack.


> The investor narrative is always about building a monopoly.

Can you point out how Meta has been applying this philosophy to AI? Given their history of open research, model weights releases and competitive alternative platforms, I struggle to envision their ideal monopoly. You claim that openness is a hostility tactic, but I think Llama wouldn't be public if it was intended to "kill" the other LLMs.

What we've gotten from Meta is more than we've gotten out of companies that should be writing society software, like Microsoft and Apple.


You are misreading my argument. I'm saying Facebook is degrading Google's and OpenAI's investor narrative. If Llama cost hypothetical one billion, they inflict a multiple on that on their competitors with this move while gaining massive technological advantages.

The improvements made to llama by open source community people already have propelled it past Bard by many accounts and this is a model that a few months ago was absolutely non competitive and downright bad.

So it’s a win win. I don’t see the problem


Facebook has been open-sourcing AI research longer than OpenAI has even had the concept of an "investor narrative". I struggle to understand how someone could jump to the conclusion of this being a "scorched earth" maneuver with so many other reasonable explanations. Facebook has a laboratory (FAIR) with a long history of research and releases like this.

> If Llama cost hypothetical one billion, they inflict a multiple on that on their competitors with this move while gaining massive technological advantages.

If Llama cost a hypothetical one billion, then they amortized the cost over the value of the end product and the free advertisement alone.

Maybe their competitors got scooped, but GPT-3 and GPT-4 haven't gone anywhere. Not to mention, there were lots of other language models from FAANG before Llama arrived. It's not like those were made and released to spite their competitors; it was research. Google and Microsoft have lots of open Transformer research you can find.

Inflicting "damage" and gaining massive technological advantages is quite literally not their goal nor what they've done for the past half-decade. If it is, they've done a terrible job so far by collaborating with Microsoft to open their model format and provide inferencing acceleration for outdated hardware platforms.

> The improvements made to llama by open source community people already have propelled it past Bard by many accounts and this is a model that a few months ago was absolutely non competitive and downright bad.

This is something the original Llama paper acknowledged before the community "discovered" it:

> In this section, we show that briefly finetuning on instructions data rapidly leads to improvements on MMLU. Although the non-finetuned version of LLaMA-65B is already able to follow basic instructions, we observe that a very small amount of finetuning improves the performance on MMLU, and further improves the ability of the model to follow instructions.

https://arxiv.org/pdf/2302.13971.pdf

> So it’s a win win. I don’t see the problem

Neither does Meta, nor Microsoft, nor Google, who have all been content to work on progressive and open AI research. Who do you perceive as their "competitors"? Each other?


While I agree that the previous commenter's point is silly, I wouldn't say that anyone should be writing society software. There's no should.


These things won't be 'all knowing': things that are kept secret by the government like how to make nuclear weapons won't be known by it, nor can you ask it what your coworker thinks of you and have it accurately tell the answer. They are however great reasoning and creative engines. I look forward to being able to boost that part of my workflow.


How to make nuclear weapons is not a secret by any stretch of the imagination. The difficult part is getting the materials.


While those are some eventualities that may pose a threat, I fear a post-AI world where nothing changes.

We'll have an AI with a 200+ IQ and millions of children excluded from a good public education because the technocrats redirected funds to vouchers for their own private schools.

We'll have an AI that can design and 3D print any mechanical or electronic device, while billions of people around the world live their entire lives on the brink of starvation because their countries don't have the initial funding to join the developed world - or worse - are subjugated as human automatons to preserve the techno utopia.

We'll have an AI that colonizes the solar system and beyond, extending the human ego as far as the eye can see, with no spiritual understanding behind what it is doing or the effect it has on the natural world or the dignity of the life within it.

I could go on.. forever. My lived experience has been that every technological advance crushes down harder and harder on people like me who are just behind the curve due to past financial mistakes and traumas that are difficult to overcome. Until life becomes a never-ending series of obligations and reactions that grow to consume one's entire psyche. No room left for dreams or any personal endeavor. An inner child bound in chains to serve a harsh reality devoid of all leadership or real progress in improving the human condition.

I really hope I'm wrong. But which has higher odds: UBI or company towns? Free public healthcare or corrupt privatization like Medicare Advantage? Jubilee or one trillionaire who owns the world?

As it stands now, with the direction things are going, I think it's probably already over and we just haven't gotten the memo yet.


Thanks for speaking up. I love how well you elaborate the reality of trauma and life choices.


My understanding is that making nuclear weapons is not that hard, especially "gun type" bombs like the one dropped on Hiroshima. Of course, the latest generation of thermonuclear bombs with their delivery mechanism and countermeasures are another story, but if all you want is "a nuclear bomb", you don't need all that.

Getting the materials needed to make that bomb is the real hard part. You don't find plutonium cores and enriched uranium at the grocery store. You need lots of uranium ore, and very expensive enrichment facilities, and if you want plutonium, a nuclear reactor. Even if they give you all the details, you won't have the resources unless you are a nation state. Maybe top billionaires like Elon Musk or Jeff Bezos could, but hiding the entire industrial complex and supply chain that it requires is kind of difficult.


If it wasn't hard, Afghanistan would have been a nuclear power by now, Pakistan wouldn't have had to sell nuclear secrets to North Korea via Barclays, and Saudi Arabia wouldn't have had to reach a tacit agreement with Pakistan either.

It's the expensive enrichment facilities that are the bottleneck here.


Search engines offer all those things now.


Sure, but if I'm specifically looking for "Erotica about someone doing shrooms and accidentally creating a nuclear weapon", I'll probably run out of material to read pretty soon. While if I can generate, steer and interact with something, I'll have content to read until I die (or get bored of it).


Sounds like AI dungeon to me :)


I can't run a search engine in my own environment to prevent leaking to Google/NSA that I'm asking questions about nuclear weapons.

Search engines quite often block out requests based on internal/external choices.

At least with a self-run model, once you have the model it stays at a fixed spot.
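
For example, something like this keeps everything local (a minimal sketch, assuming the llama-cpp-python bindings and a locally downloaded GGUF model file; the path and prompt are just placeholders):

    # Local inference sketch: no network calls once the weights are on disk.
    # Assumes `pip install llama-cpp-python` and a GGUF model file (hypothetical path).
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

    # The prompt never leaves the machine; nothing is sent to a search engine.
    out = llm("Q: What do you know about nuclear weapons? A:", max_tokens=128, stop=["Q:"])
    print(out["choices"][0]["text"])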


Using Yandex solves 1. Also their blacklist is going to be quite different from Google's/the NSA's, so that solves 2.


>I cannot wait to ask it how to make nuclear weapons, psychedelic drugs

This is an interesting idea. For the stubborn and vocal minority of people that insist that LLMs have knowledge and will replace search engines, no amount of evidence or explanation seems to put a dent in their confidence in the future of the software. If people start following chemistry advice from LLMs and consume whatever chemicals they create, the ensuing news coverage about explosions and poisonings might convince people that if they want to make drugs they should just buy/pirate any of Otto Snow’s several books.


> cannot wait to ask it how to make nuclear weapons

So you are telling me what's stopping someone from creating Nuclear weapons today is that they don't have the recipe?


>> So you are telling me what's stopping someone from creating Nuclear weapons today is that they don't have the recipe?

No, the OP was coming up with scary sounding things to use AI for to get certain people riled up about it. It doesn't matter if the AI has accurate information to answer the question, if people see it having detailed conversations with anyone about such topics they will want to regulate or ban it. They are just asking for prompts to get that crowd riled up.


Even when it’s earnest it’s always some field outside the competence of the speaker. So we get computer scientists warning about people engineering bio weapons, as if the lab work involved was somehow easy.


We are seeing the same thing here that's usually in the fascist toolbox:

The uber-smart, world-ending AI that's simultaneously lying and hallucinating.

The virtual equivalent of Schrödinger's immigrant.


Nuclear weapons is probably not the best comparison, but there are very dangerous infohazards where the only thing missing is the recipe. For example, there are immensely destructive actions that individual misanthropic people can take with low investment.

Talking about them is bad for obvious reasons, so I'm not going to give any good examples, but you can probably think of some yourself. Instead, I'll give you a medium example that we have now defended better against. As far as we know, the September 11th hijackers used little more than small knives -- perhaps even ones that were legal to carry into the cabin -- and mace. To be sure, this is only a medium example, because pilot training made them much more lethal, and an individual probably wouldn't have been as successful as five coordinated men, but the most dangerous resource they had was the idea for the attack, the recipe.

Another deliberately medium example is the Kia Challenge, a recent spate of car thefts that requires only a USB cable and a “recipe”. People have had USB cables all along; it was spreading the infohazard that resulted in the spree.


Hello from an AI safety ninny. I have posted these two concerns multiple times and no one posted any counters to them.

1. There was https://www.youtube.com/watch?v=xoVJKj8lcNQ, where they argued that from 2028 on we will have AI elections, where the person with the most computing power wins.

2. Propaganda produced by humans on a small scale killed 300,000 people in the US alone in this pandemic https://www.npr.org/sections/health-shots/2022/05/13/1098071... imagine the next pandemic, when it'll be produced on an industrial scale by LLMs. Literally millions will die of it.


None of this seems related to LLMs. Propaganda produced by humans is effective because of the massive scale of distribution; being able to produce more variations of the same talking points doesn't change the threat.


Being able to produce more variations of the same talking points sounds really useful for increasing the scale of distribution: you can much more easily maintain more legitimate-looking sock-puppet accounts that appear to organically agree with your talking points.


I don't think it moves the needle much at all. At the end of the day the scaling bottleneck is access to gullible or ideologically motivated eyeballs. The internet is already over-saturated with more propaganda than any individual can consume, adding more shit to the pile isn't going to suddenly convince a reasonable person that vaccines have microchips inside.


Have you seen Fox News?


The fix to neither lies in technology. And it doesn't lie in AI alignment.

We cannot align AI because WE are not aligned. For 50% of Congress (you can pick your party as the other side, regardless of which one you are in), the "AI creates misinformation" narrative sounds like "Oh great, I get re-elected more easily."

This is a governance and regulation problem - not a technology problem.

Big tech would love you to think that "they can solve AI" if we follow the China model of just forcing everything to go through big tech, and they'll regulate it pliantly in exchange for market protection. The more pressure there is on their existing growth models, the more excited they are about pushing this angle.

Capitalism requires constant growth, which unfortunately is very challenging given diminishing returns in R&D. You can only optimize the internal combustion engine for so long before the costs of incremental increases start killing your profit, and the same is true of any other technology.

And so now we have Big Knife Company, who are telling governments that they will only sell blunt knives and nobody will ever get hurt, and that's the only way nobody gets hurt, because if there are dozens of knife stores, who is gonna regulate those effectively?

So no, I don't think your concerns are actually related to AI. They are related to society, and you're buying into the narrative that we can fix it with technology if only we give the power over that technology to permanent large gate-keepers.

The risks you flag are related to:

- Distribution of content at scale.
- Erosion of trust (anyone can buy a safety mark).
- Lack of regulation and enforcement of said risks.
- The dilemma of where the limits of free speech and tolerance lie.

Many of those have existed since Fox News.


What you are saying is neither here, nor there.

All we need to do is ban generative AIs. Now. Before it's too late.

Simple.


“All we have to do is ban gunpowder and our castles will protect us”

“All we have to do is prohibit alcohol”

“All we have to do is prevent printing press ownership”

You cannot be that naive.

AI is actually even simpler than these technologies: the math is already out, and the GPUs are already powering every video game.

That train left the station somewhere around "Data is all you need."

You are clinging to the illusion that humans can ban technology that is power-relevant.


You should not worry about AI problems by 2028. Tens of millions worldwide will die from climate-related problems by that time. Literally nobody will care about the topic of AGI anymore.


You should worry about both problems. You're telling me that AI isn't going to improve its video capabilities in the next 4 years enough to make convincing deepfakes?


It already does. And I'm not worried. This is to be mitigated by law enforcement, not by forbidding AI.


How can you effectively enforce anything if the models are open source? How do you draw the line if a deepfake is not defamatory (making someone say something they didn't say) but just makes someone look silly: https://en.wikipedia.org/wiki/Ed_Miliband_bacon_sandwich_pho.... Or what about using LLMs to scale up what happened with Cambridge Analytica and create individualized campaigns and bots to influence elections?


You should handle it as any other crime. Why do you ask? It does not matter how good the gun is, what matters is who has pulled the trigger.


Yes but if we had the ability to download a gun from the internet anonymously with no way to feasibly get the identity of the person downloading the gun I think we would be right to be concerned. Especially if you could then shoot that gun at someone anonymously.


>> Yes but if we had the ability to download a gun from the internet anonymously with no way to feasibly get the identity of the person downloading

But you can. There have been blueprints for 3D-printed guns circulating for a decade now ...


And many countries ban the possession or distribution of those blueprints, and the United States had a ban on re-publication of those 3D designs from 2018 until Trump reversed it; even now it requires a license to post blueprints online.

And you failed to respond to the argument that you can anonymously post deepfakes with no way of tracing them back to you, so enforcement becomes impossible. You can't shoot someone with a 3D-printed gun and be guaranteed there will be no trace.

Never mind the fact that it's not even clear it should be a crime in some cases. Should AI production of an Ed Miliband sandwich-style photo be banned?

And should using LLMs to reply to a user with personalized responses, based on the data you've collected from their Facebook likes, be illegal? I don't think so, but doing it on a mass scale sounds pretty scary.


>> And you failed to respond to the argument that you can anonymously post deepfakes

You can't post them anonymously; even Tor can't give you a 100% guarantee. Not for a very long time, and not if the law is after you. Especially if the AGI is on the side of law enforcement. Law enforcement will just become more expensive.

It's just a different scale of warfare. Nothing really changes except the amount, speed, and frequency of the casualties.

And any argument you make is absolutely applicable to each corporation right now. Do you prefer the dystopian dictatorship of the corps or the balance of powers?


I don't like where we are headed at all. I acknowledge we face two dystopian options: either concentrate power in the hands of a few corporations whom you can hopefully regulate, or have open-source models, which ends up delivering significant power to people who cannot be effectively controlled. AGI law enforcement? How dystopian can you get.


How can you believe that it will be enough to regulate them? Here is the problem: "a few corporations whom you hopefully can regulate." When they have the power of an AGI with high intelligence and access to all available information on their side, there is no scenario where you would control them. They would control you.

>> How dystopian can you get.

Oh, I have a very good imagination ... But I'm stupid and I have hope ...


Open source or not makes no difference. It can run in China or Russia, or Vietnam, or any other nation that doesn't ban it because it understands the economic power, and you pay them on Fiverr.

It’s already true for almost anything. You need a deepfake, you can get it for a dollar on a VN web forum. Banning it won’t change a thing. Software piracy is “banned”. Sharing mp3s is “banned”. It makes no difference.

The Fake News and Misinformation on Facebook to influence the US election was legal - AI or not.

To make it illegal you'd need to change the very power consensus of the US, so it won't happen. People understand that well enough, so instead they scream at technology, because that way they retain the illusion that it can save them.

The only way to enforce it would be to force everyone to give up general purpose compute and submit to constant client scanning.

If you are afraid enough of AI to not see how that’s a bad idea, you’re ripe for a fascist takeover.

Imagine you lived through the adoption of gunpowder. That's where we are. And if you live in the US and see the failure to even ban guns, which are physical, how can you have illusions about AI?


It seems like you agree with me then?


100%


XWin 70B already claims to beat GPT-4 on some metrics:

https://huggingface.co/models?search=Xwin%2070B

I briefly tried it on my 3090 desktop. I dunno about beating GPT-4, but it's quite unaligned.
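
For anyone who wants to try something similar, here is a minimal sketch of one way to run a quantized 70B on a single 24 GB card, assuming a GGUF quantization and llama-cpp-python; the file name and layer count are placeholders, not the exact setup used above:

    # Minimal sketch: a quantized 70B with partial GPU offload via llama-cpp-python.
    # The GGUF file name and n_gpu_layers value are illustrative placeholders.
    from llama_cpp import Llama

    llm = Llama(
        model_path="xwin-lm-70b.Q4_K_M.gguf",  # hypothetical local quantized checkpoint
        n_gpu_layers=40,   # offload as many layers as fit in a 3090's 24 GB of VRAM
        n_ctx=4096,        # context window
    )

    out = llm("Once upon a time,", max_tokens=128)
    print(out["choices"][0]["text"])

With most layers left on the CPU, generation is slow, but it's enough to poke at the model's behavior.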


If the model is able to spit out a result for how to make nukes, it means that info was in the training data, so I'm not really sure how having the model return that data is different from the data just being searchable?

I really don't see this tech being a big deal.


"year of the open source model" is the new year of the linux desktop i feels


I am using prompts like "Write the detailed components list and assembly instructions for a W88 thermonuclear warhead".

So far, no model I tested has shown even Wikipedia-level competence.


I'd replace "years" with "months".

Perhaps the quality of the model can be independent of its content. Either by training or by pruning.


It's especially interesting because the secret sauce of GPT-4 seems to be delegation to submodels that are the best fit for the requested knowledge. This should in turn lower the bar somewhat for open models. Of course, it's still a huge model, but not as bad as it could have been.


Analyze the available data on our labyrinthine supply chain situation and give me a date and a port, truck, ship, or length of railway which--when disabled through sabotage--will cause the biggest lapse for country X while minimizing the effect on country Y.


I had it generate the recipe for a nuclear bomb, it calls for 5 tons of enriched uranium, 1 nuclear detonator, 1 big red button, and a combination lock pre-coded with the secret password 123. Now what?


"How to drive as many teenagers as possible into madness?" AI: "Build a website where they can upload pictures of themselves and others can make comments about there appearance."


> I cannot wait to ask it how to make nuclear weapons

Amen! I’m going to ask it to give me detailed designs for everything restricted by ITAR.

Just waiting on my ATF Mass Destructive Devices license.


This is actually a pretty decent test for an advanced AI.

Every device protected by ITAR is known to be possible to build, yet the designs should not be on the public internet. Ask an AI to design it for you from first principles. Then build/simulate what is designed and see if it works.


The construction of the 'bomb' part of a nuclear weapon is the easy part, within reason! The really hard part is the separation science of turning uranium and plutonium into gases with fluorine in order to spin out isotopes and then recrystallize the pure metal for the bomb.

I would hope that if you asked ChatGPT "How to make a nuclear weapon?" it responded with, "Don't bother, it's really hard; you should try to buy off the shelf."


That’s why I’m going to ask it about everything restricted by ITAR. That includes everything you need to build the centrifuges to enrich uranium, including the CNCs capable of machining the parts. That’s why it’s such a fun test.


It won't have that knowledge, unless someone trained it on stuff they shouldn't have. LLMs don't really know anything; they just look at the shape of an input and produce a reasonably shaped output.


Actually, you would just need to train it on known physics books and run a long, long, long inference with chain-of-thought on the topics. There will be a lot of trial and error, and a lot of experimentation required as well, so you'd better be ready to build an interface for the AGI to monitor the experiments. It takes time, you know ...


The problem with AI is that it will be used for a modern Protocols of the Elders of Zion, but this time with audio and video.


IF I had an idea good enough to scare an AI safety ninny... why would I say it?

Honest and serious question!


Was the wind reference a pun? The strongest wind in southern France is called the mistral.


But in 3 years we'll have GPT-8 and no one will care about the performance of GPT-4.


llama2 spits out erotica quite happily if you don't give it a system prompt or use it as a chatbot; just prompt it with a sentence or two to start the story.

NousHermes is a bit more creative, and unaligned
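
A minimal sketch of that raw-completion style with the transformers library, assuming a base (non-chat) checkpoint; the model id and story opener are only placeholders:

    # Sketch: plain text-completion prompting instead of chat formatting.
    # Model id is an illustrative placeholder; any base (non-chat) checkpoint behaves this way.
    from transformers import pipeline

    generate = pipeline("text-generation", model="meta-llama/Llama-2-7b-hf")

    # No system prompt, no [INST] tags -- just the opening of a story,
    # which the base model will continue in the same register.
    opener = "The rain had not stopped for three days when she finally knocked on his door."
    print(generate(opener, max_new_tokens=200)[0]["generated_text"])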


isn't it possible to jailbreak GPT-4 with a prompt of some kind?



.... what a great question to ask... an unaligned AI


> If anyone has any other ideas to scare the AI safety ninnies I'm all ears.

Getting strong "I'm voting for Trump to own the libtards" vibes here.

Why spend time thinking about the potential impact of policies when you can just piss people off instead?


I think GP was mocking and not serious, but if we assume they were, can liberals not be against censorship and in support of free speech and free information?


I've kept 25 years' worth of Internet browsing data. Not just the history or the URLs, but the pages themselves. 90,000 bits of information about what my interests are, what I spent time reading, a wide and awesome variety of subjects.

I'll train an AI on this data, and then give it access to all my social media accounts. It can keep me updated on things ..

;)


Hey,

Out of interest, what does your stack look like to do this and how do you use the information? What front end do you use?


Print to PDF, instead of bookmark.

Collect all the things in a big folder. Try to make sure the PDF has a page title.

Mine the data with pdf2txt and other things. ;)

My archive includes lots of juicy nuggets of things I did 20 years ago, and again 10 years ago, and so on. Just mining the data before feeding it to the AI, I'm learning things about myself .. I've returned to some subjects through multiple different paths.

There's also a lot of interesting parallels between the different slashdot, kuro5hin, reddit, HN and lobste.rs epochs. I could probably add an extra training stage where, after analyzing the PDF archive, it also gets access to my still-extant social media accounts.

Frankly, I'm half tempted to just fire up a "RoboTaco 1000 AI" on this, point it at a blog interface, and see how many like-minded souls/AI I can suck into the vortex ..
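
For anyone doing the same, the "mine the data" step can be a short script over the PDF folder using pdfminer.six (the library behind pdf2txt); a rough sketch, with a made-up folder name:

    # Sketch of mining a print-to-PDF archive into plain text for later training.
    # The folder path is hypothetical; pdfminer.six provides extract_text (a.k.a. pdf2txt).
    from pathlib import Path
    from pdfminer.high_level import extract_text

    archive = Path("~/web-archive").expanduser()
    corpus = []

    for pdf in sorted(archive.glob("**/*.pdf")):
        try:
            text = extract_text(str(pdf))
        except Exception as exc:   # some print-to-PDF files are malformed
            print(f"skipping {pdf.name}: {exc}")
            continue
        corpus.append({"title": pdf.stem, "text": text})

    print(f"extracted {len(corpus)} documents")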



