Hacker News new | past | comments | ask | show | jobs | submit login
Safe AI Image Generation (smbc-comics.com)
237 points by dsr_ on Oct 9, 2023 | hide | past | favorite | 137 comments



Earlier today, ChatGPT/DALL·E 3 refused to draw the bat sign (from batman) for me, due to content policy. When I asked it to rephrase the prompt to circumvent any copyright issues, it came up with "an emblem featuring a nocturnal flying creature silhouette", which made me chuckle quite a bit.

Playing the game where you have to convince the LLM to spit out the secret password[0] was great training for this content policy dance with DALL·E. Isn't it great - even though the machine is now performing the handicraft, you can still feel creative. Not by creating art yourself, but by finding new workarounds around the man-made content policy limitations.

[0] https://gandalf.lakera.ai


Makes me wonder where is the line on copyright? if the bot is refusing but the human is tricking it in creative ways to make something infringing then it seems like the violation is being committed by the user.


Trick question, no one actually infringed anything! You're welcome to draw the bat symbol all day if you want. You can even download the official emblem, scribble out the TM symbol and (c) Warner Brothers text, put your own name and chuckle to yourself.

The infringement only happens when you then try to claim it's your own work publicly. Which has the convenient benefit that a human normally has to be an accomplice to that.


Pretty sure that is not how copyright works.... attribution rights are important but it is not the be all and end all of copyright.


Laws obviously vary a lot, but most places that follow a similar definition of intellectual property rights to EU/NA do not forbid merely being in the presence of a work. Knowing of the details of an intellectual property isn't infringement. You wouldn't be able to monetize it if you were compelled to never let your book printer ever see your story to reproduce it for you, let alone the reading public.

The purpose of copyrights as a concept, is to allow artists to profit from a work that's otherwise easily to duplicate. Laws tend to be written with that in mind. This is normally codified as an exception called "private study" or similar.

You can ask the AI all day long to draw the batman symbol. But if you try to market that as "Chiroptera Guy", then you've taken the step that opens up litigation options for the holders of the original IP.


But at the end of the day, copyright is ultimately about reproduction of a work.

Asking the ML model to reproduce a copywritten work isn't just "knowing" about the work, but potentially an alternative to buying a properly licensed reproduction of the work in question.


The post you responded to doesn't agree with your definition of what copyright is for.

The post claims that its about allowing the author to profit from it. Mere reproduction doesn't stop them, till you try to sell at some scale. It's the equivalent of asking your friend to draw you something.

Now, you're free to disagree. But if you're just stating a different definition, you're not engaging in a rebuttal, just an ignor-al.


I'm correcting them by stating the legal definition rather than the definition they made up.

Copyright is ownership of the right to make copies of a work.

Asking your friend to reproduce a copywritten work is also copyright infringement.


Your definition is legal, sure, but their definition is realistic. The overwhelming vast majority of copyright "infringement" is ignored, because who cares if a kid draws Pikachu? That's why copyright law is very vague and relies on context-specific judgements of severity of infringement on multiple categories.


> Your definition is legal, sure, but their definition is realistic.

...it's an entirely legal term. It has no definition outside of its legal one if you're trying to figure out how litigation would work.

> The overwhelming vast majority of copyright "infringement" is ignored, because who cares if a kid draws Pikachu? That's why copyright law is very vague and relies on context-specific judgements of severity of infringement on multiple categories.

There's a different argument made in that case if it hits the courts than the original poster's argument: de minimis. Basically that the claimed damage is too trivial to waste the court's time.

It's not clear that automation at just about any scale will allow for a de minimis defense.


alright

The world moves on though, I guess.


I mean, there's a couple centuries of case law, explicit US code, and international treaties codifying what I said.

It's not exactly something the world will move on from without explicit legislation and statecraft.


> "The infringement only happens when you then try to claim it's your own work publicly."

Is a very different statement from

> do not forbid merely being in the presence of a work.


In this scenario, the rights you’re leaving out are the rights to reproduce, distribute, and create derivative works. They’re exclusive.

You’ll find reputable t-shirt printers will refuse to print designs they know to be copyrighted.

No one would be claiming the printer or you came up with some copyrighted design, it’s the copying that’s an infringement (“copy”-right!).

Now, these printers also typically have you assert that you hold the copyright to the work, but my understanding is that just limits their total financial exposure by contracting with you; them producing the work would still be infringing.

Here, you aren’t even giving the bot a work to reproduce. It holds the work in its storage and will reproduce it for you when asked. That’s pretty cut and dry copyright infringement by whoever controls the bot.


The copyright infringement happens at the moment of production of the image, unless the use counts as fair use. It doesn't matter whether or not you claim ownership or authorship.

Home copies are protected primarily by the total impracticality of enforcement, and secondarily by claiming you're doing it to practice making art.


The line is a million light years from where OpenAI has positioned this.

Drawing a Batman logo can never be copyright infringement, the art belongs to the artist (or in this case, there is none because there was no artist). It would be like Hacker News prohibiting me from doing an ASCII art Batman logo in a comment on copyright grounds.


It is pretty funny that openai arguably is based on enormous amounts of copyright infringement (depending on if scraping training data is fair use or not), but they're kind of being prudes about something as clearly benign as this.


Nothing funny. It's similar to how YouTube was built on massive copyright infringement, on a scale unmatched by anything except perhaps LLMs now, and then turned around to become the Fully Automated Fanatic Copyright Enforcer we know it today. It's not hypocrisy - it's this, or getting sued out of existence. They pulled a fast one and got away with it, because MAFIAA figured it's more profitable to extract money from YouTube at legal gunpoint, instead of killing it and losing a distribution channel they can't maintain themselves. Here, OpenAI is trying to get ahead of the whole generative AI copyright kerfuffle, hoping to avoid getting sued if they demonstrate they're eager to work in the interest of copyright holders.


Very much like how YouTube goes well beyond what is required by copyright law.


Youtube policy is actually pretty lenient compared to most IP rights law. "Fair Use"/"Fair Dealing" actually has a pretty well-litigated definition in most jurisdictions. YouTubers just dislike the definition.

If YouTube exposed you to the full force of most jurisdiction's IP law, you would be getting C+Ds and summonses from scary people in suits who work for Viacom. "Fair use" isn't a magic incantation to ward off evil lawyers, it's a defence with definitions and restrictions used during a dispute.

The copyright strikes are a pretty good compromise to shelter individual creators from the outside world that's used to Big Company 1 dealing with Big Company 2.


> YouTubers just dislike the definition.

That's not it at all.

The problem is that they use opaque algorithms with false positives to issue strikes. Or content companies illegitimately add public domain materials to their libraries, which causes strikes against anyone lawfully using those materials.

> If YouTube exposed you to the full force of most jurisdiction's IP law, you would be getting C+Ds and summonses from scary people in suits who work for Viacom.

They can still send you those things if you infringe their copyrights regardless of what YouTube does. They generally don't because random individuals generally don't have enough money to justify their legal expenses and it's incredibly bad PR.


The opaque algorithms and trigger-happy strikes are to appease copyright holders, which leads into...

The reason they don't litigate individuals is low ROI on litigating a small YouTube channel. That is true. However, Youtube without these tools in place would be sued out of existence. Copyright holders wouldn't ever come after 1 video, they would go after the platform hosting their IP without compensation. With every legal right to do so!

Youtube cuts a good deal for them where they get god-tools, and in exchange, YouTube can continue to exist.

Never said it was a good set up, but this idea that YouTube has another "easy choice" is pervasive.


YouTube is protected under the dmca safe harbor provisions. All they're required to do under US law is respond to DMCA requests. Content ID goes far above their legal requirements.


That has to be litigated to find out.

Cases like Napster [0] and Grokster [1] set precedence for this sort of thing. They also used the DMCA in their defence, and they were still found culpable. These were decisions that were scoped by the method of delivery, so if YouTube was going to stop giving everyone a great deal, they'd be betting they have a solid case as to why their technology is legally distinct from those examples, and many others. (And conversely, like instances where the safe harbor clause has been used in a successful defence)

That's a weird bet to take when the benefit is... people stop complaining about YouTube Content ID, and find something else to complain about?

[0] https://en.wikipedia.org/wiki/A%26M_Records,_Inc._v._Napster....

[1] https://en.wikipedia.org/wiki/MGM_Studios,_Inc._v._Grokster,....


The Napster decision left the DMCA question open as it didn't affect the actual appeal, and was instead left to be decided at trial (which never occurred):

> We do not agree that Napster's potential liability for contributory and vicarious infringement renders the Digital Millennium Copyright Act inapplicable per se. We instead recognize that this issue will be more fully developed at trial.

https://scholar.google.com/scholar_case?case=141026963365506...

The Grokster decision fails to mention the DMCA at all.


If the photo industry is any indication, it won't really matter where the legal line is. They'll put in enforcement mechanisms wherever the industry feels like they should be.

Nothing stops companies from going beyond what the law requires.


I was trying to do a halloween themed image that included 'chopped-up, bloody body parts' which kept hitting the content policy. Basically any chopped-up or bloody combination with hands and feet etc. etc. hit the shield.

I didn't try hard to break it I just found it funny that the result I could generate in this particular graphics field was most like really good clip art suitable for a company party poster.


Use strawberry jam


The coming linguistic and cognitive changes are going to be amazing to live through. We're being oppressed by talking Roombas. We study our oppressors; we adapt. They understand ape language better than we apes ourselves, they are better apes than apes, they are super-apes–but yet even still we find small gaps in their ape-imitiation, nuances, metaphors too subtle and too monkey for the metal machines. Voight-Kampff tests. Simian shibboleths. The dead monkey is covered in strawberry jam; the steel Roomba gazes unblinking, and all is quiet.


And mannequins



I don't want "safe" AI, I want "accurate" AI. If I want a picture of NYC getting nuked, how can the AI know in which context it will be used? Maybe I'm writing a sci-fi book and looking for cover art ideas.

Who is driving this big push for "safety", anyway? Do consumers actually want safety or are a concern-trolling vocal minority pressuring AI corporations to kowtow?

I personally hate it when SV moral arbiters nanny me.


You should know that your position is societally untenable.

Excellent example: in the early days of Reddit, they very much had a "you can post anything that is not illegal" policy, and this led to subreddits like jailbait, fatpeoplehate, etc. Around 2012 I think (someone can look up the exact date) there was a coordinated effort to shine a bright media spotlight on the "underbelly" subreddits that made a ton of Internet and national news, e.g. features by Anderson Cooper on CNN, prominent op eds in NY Times, etc. Reddit changed course and specifically banned sexually suggestive pictures of minors and those without consent.

I have an opinion, but I am not arguing here for whether the decision was "right" or "wrong". I am simply pointing out that Reddit had 0 choice in the matter. If they had held firm with their "anything that's legal" policy, they would simply not exist today. They would have been deplatformed to the nth degree, and furthermore laws would have (and actually have, e.g. in the case of revenge porn) changed to make their existence untenable. E.g. platforms like Reddit can't exist without Section 230, which is already under attack and I guarantee would have been removed if you had lots of platforms saying they're fine with people posting unknown, sexually suggestive pictures of minors. Nevermind having 0 advertisers willing to pay their bills.

I'm not crazy about "SV moral arbiters" either, but I don't like posts like yours because they pretend a reality that doesn't exist.


> I'm not crazy about "SV moral arbiters" either, but I don't like posts like yours because they pretend a reality that doesn't exist.

I`m not OP, but I don't like posts like yours because they pretend that current status (not even status, your idea about current status) will never-ever change.


> I`m not OP, but I don't like posts like yours because they pretend that current status (not even status, your idea about current status) will never-ever change.

I'm all ears listening to suggestions for how to realistically change the current status, I just haven't heard any reasonable ones that are actually tenable. Also, while I think there are plenty of annoyances with the current status, I rarely think they are as catastrophic as their detractors make them out to be.


I mean to address your questions we have to answer more complex questions of what society and culture is.

At least in my thinking, which may be incorrect, is that you cannot have a society that allows everything. Because allowing oneself to be destroyed is in the set of everything.

I'm not sure how your idealism doesn't fall foul of the paradox of tolerance?


Right, we shouldn't lie to ourselves here.

The corporate-controlled image generation models will block sexually suggestive images and gore.

They'll also block images of Xi Jinping and Mickey Mouse.

It'll still be good enough for a lot of applications; that guy photoshopping together an ad for Mountain Dew doesn't need images of Xi Jinping anyway.


But these aren't really the successful models that further creativity. On the contrary, we see a buzzing community that create thousands of models and they specifically chose base models that are not lobotomized because they simply provide better results.

So you might not want to stifle that with corporate control.


I heavily doubt they would have deplatformed and I don't think there ever was a pandemic of CSAM on reddit. This is a mischaracterization usually the argument to enact greater control.

Could be that reddit did indeed have no choice here, there was a media campaign against more open platforms.

The situation is indeed tenable if platforms just do not cooperate with external pressure of content moderation. There are such platforms and they still operate today.


They did have a choice though. They could have built a federated system in which they as the developers have no ability to censor.

If each subreddit could be hosted by its moderators, you can't apply pressure to the developers because they have no control over it. You can apply pressure to the moderators hosting that subreddit, but they don't care more about ad revenue than keeping their subreddit up because if it's not up there's no ad revenue.


Ah you mean that time before reddit was entirely hivemind?

The most valuable information on the Internet will always come from the chans.

You just need to be able to evaluate everything with critical thinking.

You know, that thing where you can reason about something without necessarily agreeing with it.

Please don't interpret this as bait. From my perspective a vast majority of people have embraced outsourcing their reason.

Things like fph exist exactly because people are told day in and day out that diabetes is healthy. Its perhaps the most ugly manifestation of the natural reaction to this polar opposite called body positivity. But banishing thought doesn't eradicate it. It simply validates all those driven away.

Sadly that is something vitally absent from borg based Internet.


> Things like fph exist exactly because people are told day in and day out that diabetes is healthy.

I don't believe this deserves any serious consideration.


Some anonymous Swedish guy said that a virus was coming early next year in September 2019 and pleaded with people to not take the vaccine that would come in late 2020.


I don't know what point you're trying to make. The baseline rate for schizophrenia means I should expect about 24 million people saying equivalent things about each year.


That's as stupid as me assuming you were schizophrenic and ignoring your argument.


You have still completely failed to communicate whatever point you were trying to make. I have absolutely no idea what idea you were attempting to convey.


This was on 4chan. See link below. Just showing the value of 4chan.


If you think that shows value, that just shows you don't know value.


Oooh a one liner fallacy. You are le bad ass intellectual.


Well what was the value? The prediction was wildly wrong, supposed to come from "a pharmaceutical company working with military op's in a west coast state." That's about as far from China as you can get.

And the conspiracy theory that the vaccines are the real toxic part is just dumb. We know from hospitalisation and ICU figures that people were getting sick well before a vaccine was delivered. Plus, it requires me to believe that everyone in NHS Scotland was lying to me for completely unknown reasons.

In order to believe in these theories I am required to shut off my brain and not think about what happened in other countries, other health services, even though there are plenty with full information in English. It's just daft.


do you have an archived link?



wow that's wild man. definitely expected this to not get delivered


AI corporations do what corporations do, which is try to avoid lawsuits. Or worse: the well-known, incredibly arbitrary extrajudicial punishment that can be summarily meted out by the duopoly of payment service providers.

Secondarily, they want to avoid bad publicity.


here's what i speculate (i'm not an insider or anything, pure speculation).

the "true believers" at OpenAI mostly don't care if bad images get generated. they are worried about "safety" as in the sci-fi Terminator scenario, not "safety" as in "avoid harming people with offensive or unpleasant images".

however they see controlling a cutting edge AI towards some goal to be an important thing to study, and they want to practice doing it, and to do that they need to pick some arbitrary goal.

the arbitrary goal becomes this type of "PG-rated-only" censorship, because it helps avoid bad press coverage, makes it easier to raise money, etc. but they don't sincerely care about it. some others in tech do sincerely care though.


I think this is a marketing push to sell false security. There are some believers in the supposed AI apocalypse though.


I think it's just them being scared of models running on their servers producing things that cause bad PR. And the reason they're not willing to not run things on their servers is competitive advantage as well as just extracting more money out of users that way. (I definitely don't think actual safety is the concern.)


> I personally hate it when SV moral arbiters nanny me.

I've been annoyed with them pushing their mores on the global internet since at least 2010.

I suspect you'd also hate the substantially different mores that I would have in their place.

> Who is driving this big push for "safety", anyway? Do consumers actually want safety or are a concern-trolling vocal minority pressuring AI corporations to kowtow?

1. Yudkowsky, whose general vibes are an important part of the discussion for about half the people who work on these AI in the first place, even where they disagree in particulars.

2. Anyone who noticed the way biases in training data propagate stereotypes, an observation which substantially predates any of the currently interesting generators.

3. Anyone who has been on the receiving end normal old fashioned inappropriate content, or who is the parent or guardian of such a person.

4. Also the usual concern trolling types, as some people have already been arrested for using such models to sexualise specific people including, indeed, at least one case where it was a minor.

5. Anyone who can see the potential for these models in automated personalised propaganda.

6. Anyone concerned with the potential for a fully automated system that A/B tests with a constant stream of newly generated output until it finds a super-stimulus you can't help but engage with.

These groups don't all talk to each other, and in many cases dismiss the severity, likelihood, and timescales of each other, though often still using overlapping language that makes any conversations on these issues even more difficult than figuring out exactly what someone who just used "woke" as a pejorative is actually objecting to.


Moral panic is driving it. Every time somebody manages to make AI bot say something outrage worthy, all the press is throwing a fit - how dare you, think of the children!!! And since the western culture is ruled by concern-trolling vocal minorities now, corporates predictably kowtow. Nobody wants to be canceled, or paraded before Congress as somebody who let the evil robots destroy our children for profits.

SV has nothing to do with starting it though, they just follow. Look at your newspaper, college campus and talk show to find the leaders and the fanners of the flames.


If "unsafe" AI becomes ingrained then it really will be banned or have some arbitrary artificial limitations on power ("maximum 100,000 parameters" or whatever).

They are trying to leapfrog the inevitable backlash from governments by saying they are doing the right thing and take it seriously yadayada.


I see this comment over and over and over here on HN and it really blows my mind that people don't get it. These models are made so that enterprise clients will pay a lot of money to use them. I was a director at S&P Global and if you brought in software that could make pornographic images, violent images, or introduce copyright violations into our research you would be laughed out of the room within exactly 1 minute. The push for "safety" is that we live in a capitalistic society and the people will pay the most money for access to these models will want them to be "safe". If you don't like it, go ahead and create a competitor and see how many sales you get to enterprise customers.

SV isn't trying to be your nanny. They are trying to make money. I'm shocked this isn't obvious to well-educated people who visit this forum.


> These models are made so that enterprise clients will pay a lot of money to use them.

That's not obvious to me. Why wouldn't they eventually be targeting consumer market like search engines?

Speaking of which, why don't search engines like Bing and Google forcibly censor pornographic queries? Why am I allowed to search and view pictures of Xi Jingping juxtaposed with Winnie the Pooh on Bing and Google without my hand getting slapped? Why are search engines exempt from getting roasted by the media for serving up inappropriate results in response to inappropriate queries?


There is clearly a market for a content filter.

Why is there a market for the inability to turn it off?


Payment processors and credit card companies is why.


Don't OnlyFans et al accept credit cards?


OF is actually pretty dang conservative as far as adult websites go. They have a ton of restrictions about what text (!) is allowed to be posted.


It is possible to write pornographic literature using Microsoft Word — and many do


Please try to assume good faith here. Obviously MS Word does not write the pornographic content for you.


Maybe just be less prudent about nudity in general. And deny kids access to AI image generation. Problem solved.


Right? It's almost like, as adults, we shouldn't be limited by the existence of children. Like businesses should assume parents are going to keep their kids from doing things bad and let those of us who will never have children just do the things we want to do.


"Censorship is telling a man he can't have a steak just because a baby can't chew it." ― Mark Twain


From one side we are trying to stop 13 year old boys from making pictures of AI boobs, and from the other side our school system is teaching children about sex before they even reach the age of 10.


"Have kids and deal with it, don't bother us or the society in general" feels like a good slogan to encourage parenthood.


Because they may use it to generate sexual images? Kids have two ages: "Don't care about sex" and "care about sex". If they've gotten to the point where they care about sex, your AI image generator filter isn't going to put their hormones back in the box.

It's mystifying to me why American culture wants to insist that sexual urges must come the moment you're 21, and not a second earlier, otherwise you're somehow sinful.


18

21 is when they allow booze.


Well, the age of consent is 13-16, depending on state, but it doesn't affect the broader point.


and many states circumvent that in the judiciary, we have child marriage to undermine other concepts like pedophilia, statutory rape here just like in developing nations

its actually quite rampant, possibly more in absolute numbers than the countries we think about, we just chose not to focus media or civil rights attention on it

https://19thnews.org/2023/07/explaining-child-marriage-laws-...

https://www.cfr.org/blog/its-time-end-child-marriage-united-...

https://www.politico.com/news/magazine/2022/01/09/cassie-lev...

pick whichever source you respect more, I’m not that invested


I'm really curious about how a society would look like when kids are properly shielded from harmful resources until they're adults, so:

- no social networks

- no uncurated wikis or online forums

- no AI

That's quite a crash when they enter adult life and have to adapt to all of that basically on their own.


Add to that:

- no sex ed

It really feels like there are a lot of parents who think they can keep their kids safe by keeping them dumb about how their bodies work.


I guess the converse is having to educate teens about the internet the same way I, at school, was educated about sex[0] and drugs[1].

[0] entirely heteronormative, Section 28 applied; also mostly Catholic, but rather more secular than you might expect from that description. First I even saw the word chlamydia was after I finished the GCSEs and went to a secular place for my A-levels.

[1] and not rock and roll. The drugs were booze (drink responsibly), cigarettes (expensive cancer sticks), and all other drugs collectively under the banner "will destroy your life and/or kill you" (which turned out to be a whole mass of outright lies, for example "dealers trying to get primary school kids addicted to LSD" and basically everything they told us about the death of Leah Betts).


How do you deal with nonconsensual generation of images of real people in sexual acts, including children?


Thoughtcrime should not be a thing. We can make laws for distribution of AI generated material


OpenAI uploading an illegal image to your machine is more than a thoughtcrime. They become the person distributing this content, and there are already laws for that.


Exactly how we deal with it now. Laws against the specific undesired behavior.

Burdening these systems early on with legislated regulations only makes it so that the incumbents win. Regulations are a moat.


Luckily for image generation they did not win. Different situation for LLMs though. You are correct that it would only entrench positions.

I am angry at EU legislation. While it was heavily involved in research, legislation are again on the course to fortify dominance of established big tech players. Probably because their interests often align with media interests, which have a huge influence on EU politics. And I do not mean the open and independent part of media, rather the established players that have direct business relations with huge tech firms.


Prosecute people who distribute this stuff?


We don't go through the judicial system here.


How do you deal with the same in Photoshop?


>> How do you deal with nonconsensual generation of images of real people in sexual acts, including children?

> How do you deal with the same in Photoshop?

You're making a false equivalence. "AI" image generation is so much easier to do and requires so much less skill than Photoshopping something that you're really dealing with an entirely different problem.


> You're making a false equivalence. "AI" image generation is so much easier to do and requires so much less skill than Photoshopping something that you're really dealing with an entirely different problem.

And Photoshop is so much easier to do and requires much less skill than carving marble statues.

I say we also ban Photoshop and only allow chisels and stones.


> And Photoshop is so much easier to do and requires much less skill than carving marble statues.

You are frankly missing the point. The issue isn't easier, the issue is crossing a threshold of easiness or scalability that means the problems can't be managed the same way as they were in the past.

For instance: the problem of revenge porn was severely limited before cameras existed, before the internet made mass distribution of photos effortless, and before image search engines made them easy to find. Digital cameras + internet distribution + search engines removed technological limiting factors, and made the problem significantly more severe and widespread. That change in severity lead to new legislation.

But with digital camera photos, there are limiting factors that make individual-infraction level enforcement possible: the perpetrator needs some physical access to the victim such that the victim usually knows who took the photo. Generative AI even removes that limitation.

That might no mean anything should change, but it does mean "LOL. I can find similarities between the processes so they're the same," isn't a reasonable response.


It's sad that people seem to be consistently incapable of understanding this nuance; same with the "but humans also learn by studying existing work!" argument that always come up when discussing the ethics of training and datasets.


The difference of nuance vs Photoshop is completely unclear.

Back in 2005 or so, kuro5hin (a now gone discussion site) closed signups because somebody photoshopped the founder's wife's head onto some porn. That was 18 years ago.

True, doing it with Photoshop took a bit of skill, but it is a skill a lot of people have, for whom it would be doable in minutes. For a newbie, figuring out Stable Diffusion is probably more work than figuring out how to do it in Photoshop.

And IMO the training argument is long term a pointless waste of time.


> The difference of nuance vs Photoshop is completely unclear.

It's perfectly clear if you realize similar is not same.

> Back in 2005 or so, kuro5hin (a now gone discussion site) closed signups because somebody photoshopped the founder's wife's head onto some porn. That was 18 years ago.

So what? Everyone here understands that's possible. If you think that example somehow addresses the concern, you missed the point.

> True, doing it with Photoshop took a bit of skill, but it is a skill a lot of people have, for whom it would be doable in minutes.

Also, IIRC, that particular Photoshop of Rusty's wife was terrible, as in obvious.

The skill "a lot of people have" is to make bad photoshops. Generative AI has the ability to near-effortlessly make high-resolution ones that most people could confuse for a real photo.

> For a newbie, figuring out Stable Diffusion is probably more work than figuring out how to do it in Photoshop.

Again, what quality can a newbie achieve with photoshop after a couple days effort? And how long will Stable Diffusion be hard to setup? You do realize someone's going to come up with an easy-to-run "revengeporn.exe" sooner rather than later?


> Again, what quality can a newbie achieve with photoshop after a couple days effort?

The answer is a better quality than if they are trying to figure out stable diffusion.


> The answer is a better quality than if they are trying to figure out stable diffusion.

Prove it. Give some newbie Photoshop and a week of time, and show me how well they can Photoshop the face of a particular person (say Tom Vilsack, Secretary of Agriculture) onto some porn.


I've read many of your replies, and while I /think/ I understand your point, I am not sure what you're proposing.

Is this a reasonable summary?

> AI = easy, so it should be regulated. It has passed the "threshold of simplicity" (and realism) where new legislation should be enacted.

> Photoshop = harder, so it should not be legislated.

If so, what happens when Photoshop releases a "copy/paste a face" feature (a desired general photo editing capability) that uses GenAI to merge background, skin tone, lighting, etc.? That could easily be a beginner-level feature (ctrl-c/ctrl-v with auto-segmentation) and be used to create porn.

Are you proposing that the feature be regulated because it's become too easy? That artificial barriers of difficulty be implemented?

Again, what are you proposing be the outcome?


> Prove it.

Literally all some kid has to do is use the Photoshop magic wand feature to copy someone's face and post it on another image.

High schoolers don't care about the difference. They'll harass the target just as much if those kinds of pictures show up around school.

And that would only take a couple minutes to produce.

High schoolers don't need a 3000$ gaming computer, and the skills of figuring out how a GitHub repo and install process works to harass their fellow students.

You are entirely confused about what the problem is here. Slight differences in technology are not the cause of the problem of sexual harassment.


> Literally all some kid has to do is use the Photoshop magic wand feature to copy someone's face and post it on another image.

Do it, show me the results.

> High schoolers don't care about the difference. They'll harass the target just as much if those kinds of pictures show up around school.

You're moving the goalposts. I don't care if highschoolers will run with obvious, low quality crap. In fact, throughout this whole conversation, I didn't have highschoolers harassing each other on my mind at all. I was thinking about the more general problem, which includes things like harassing exes and potential employers coming across pictures during a job search.


> I don't care if highschoolers will run with obvious, low quality crap.

> which includes things like harassing exes and potential employers

You are completely missing the point. Also, your repeated requests that I engage in sexual harassment is weird.

The damage of sexually harassing messages being sent to your friends and families can be just as damaging regardless of whatever small quality issues that you think exist.

You are confused about what the issue is. People are still significantly harmed via this harassment, even if there are quality issues. The "accuracy" isn't the determining factor here. Instead, it is the sexually harassment messages and images being spammed to people surrounding the victim.


> It's sad that people seem to be consistently incapable of understanding this nuance...

It's a problem software engineers (or some superset that contains them) seem particularly prone to.


Something something binary thinking


It's not a matter of understanding; people just disagree.


The same way you deal with people photoshopping real faces onto nude bodies.


I'm not asking for a for the children blob of legislation here, everyone.

But I do see mass-scale generation of nonconsensual images, well beyond individually commissioned works -- and actual CSAM blending in with it because it becomes indistinguishable -- as a real problem.

We're having nuanced discussions about how to deal with misinformation, including mass-scale generation of it from LLMs. Why can't we have those discussions about AI porn so that we can come up with solutions that are somewhere between "change nothing/don't be a prude" and "go full authoritarian"?


I mean if you want to have that discussion then how do you differentiate fictional death and murder in movies and the written word vs AI generated nudity?

To me, it seems like we quickly get into the AI image equivalent of prosecuting the actor in a movie for committing the murder of a fictional character. We even go as far to have the entire genre of horror movies at mass Hollywood scale based around fictional death and murder.

It gets even stranger with sex and nudity when such a large % of people are online porn consumers. I have seen the figure for men at 90%. It is where I think the market will actually figure out the solution and that the porn consumer will want assurance the person is real and not AI.

What is disingenuous in this debate to me is that chatGPT estimates there might be over 100 million pictures of real naked people on the internet as of 2021 lol. It is like we are pretending these pictures don't already exist of real people at a huge scale and that no one really looks at them at huge scale now. It is only when AI starts cranking them out people will be so tempted and THAT is a big problem.

If anything, this is the much bigger problem that we can no longer discern the magnitude of a problem in the real world vs the R0 of the idea of a problem online.

"Since everyone is posting about AI generated nudity/porn, it must be a big problem."


For context, I was referring to using someone's likeness or CSAM. Both of those are significantly different than fictional characters portrayed by consenting actors.


How do you deal with that now, when a sufficiently-skilled artist can do the same for anyone?


You don’t have deal with it very much because the barrier of finding a sufficiently skilled artist who is motivated to do so is quite high compared to the reward. If the barrier is simply typing a few words into a web app however…

Spam email and annoying door to door sales might look like similar problems, but they demand very different solutions.


I knew this would be the counter. The parent said nothing about scale. We already have laws around this kind of thing, the technology used is irrelevant to the application of the law. Murder is murder whether it's with a gun or poison.


But scale matters, because usually the law doesn't get involved until scale is reached. We didn't used to have anti-spam laws because it wasn't a problem at small scales.

Then it became easy for everyone and became a problem, so laws were created specifically to combat electronic spam, because current laws didn't cover that situation.


> Murder is murder whether it's with a gun or poison.

But we have laws specifically for weapons of mass destruction, instead of just suing those who use nuclear bomb with murder


Money, effort, morals.


I wouldn't mind having my face deepfaked into a porn or whatever as long as there was a small watermark or something that indicated it was fake.

I think enforcement should be focused on disclosing what's AI generated/fake, not on banning altogether.


Rule 34 of the Internet. Trying to stop it is futile.


You decide everything is either good or bad with no shades of grey, declare yourself a free speech absolutist and tell everyone that however bad that is, censorship is worse.


We already have the laws we want, more or less. It's just that the barriers to creation have been lowered.

If we encumber the tech with regulation, only the giants will be able to engineer the adequate protections that fit the letter of the law. You'll effectively be choosing the winners and creating a massive barrier to entry for everyone else.


Free speech absolutists, an absurd insult for that matter, usually have the exact opposite position and leave the question of morality open.


Anything can be objectionable if you are creative enough.

Several times he (Chaitin) compares the process of solving mathematical equations to making love to a woman. It's creepy. [0]

[0] https://www.goodreads.com/book/show/249849.Meta_Math_

If you want to read a bit more from the mathematician himself on this very topic he wrote an accessible "pop-math" book about it, "Meta Math!: The Quest for Omega" though you'll need to look beyond the author's rather strange choices of metaphor. It's been a while, but I assume you're referring to the similarities drawn between information theory and sex. [1]

[1] https://news.ycombinator.com/item?id=22099178


Not just this. Gpt-V could violate social conventions massively. Imagine giving teenagers AI which would judge them and each other.

"How old does this person look?"

How about this couple, how do they match?

How is this person's style? What mistakes are they making?

How healthy does this person look?

Do you think this person drinks a lot?

It's not just high level politics. There are huge local social conflicts going on all the time between colleagues, lovers and potential ones, neighbors, friends, towns. AI can work as a social weapon in those battles too.

We leak a huge amount of info all the time, and AI Vision will try to pick up on it, rightly or wrongly. And talking about what one notices is very socially uncomfortable.


When does it just become "opinion" though?

I can ask humans the same question, but I don't assume any of it is "truth".

People can be nice or be dicks about stuff, and people readily make judgements just on appearance. I see no reason to treat AI differently in that regard.


AI is available to all and very consistent.

If there were a universally known and evaluatable real-person judge which all teenagers could consult, I think that would be an issue too. Maybe one reason it's so significant for humanity is that real-world judges are never really trustable. i.e. in elementary school, everyone looks up to different people, maybe a coach, and older brother in middle school, etc but there is diversity, and you can challenge opinions about who is cool, stylish, a loser, nice, etc. There really isn't much of a unifying, judging force between groups even in the same school, let alone city to city.

With youtubers etc we got a new mechanism for delivering (generic) opinions to masses of people and unify their views that way. i.e. pewds can demonstrate an attitude and way to be to every kid in the country. or a fashion tiktoker can spread very niche styles etc. But there was no input.

But now imagine if a makeup-teacher tiktoker also had a custom GPT-V bot with a real voice, which could judge your images or live video, give you points, praise or condemn you, rank you among your friends, etc. and you couldn't hide it or run away. That seems pretty big, actually.

Imagine watching some goth channel in the 1990s but... you could literally talk to someone elite and famous who made the music and art... and hang out with them, absorb their views, be judged, network, pay, etc. That would be pretty powerful.


I dont see how AI algorithms making judgements on appearance is any different from people doing it. My point is that it is no more fact when an AI says it vs a human.

If you are offended if a random AI says you are ugly, then you'd likely be just as offended if a random human said the same thing.


I tried removing a sign post in front of a photo of my car in Photoshop with their Firefly AI, and it refused to produce a result if I mentioned the model of my car.

I can't imagine a situation where Adobe could get in trouble for generating an image of a Prius, and it doesn't give you a reason for refusing-- but the possibility the AI was trying to avoid a word that sounds ever so slightly similar crossed my mind.


It does make for a good comic so I won't criticize it on that basis. I just wish we could stop with the vagueness of "make image generation safe" which is completely meaningless and instead phrase as a concrete problem like actual engineers.

Something like, "AI generated images present a difficult challenge for applications that require that sensitive content be removed (e.g. Tumblr) or hidden (e.g. Discord) due to the fact that the abstract nature of the generation can trick most digital 'eyes'. To that end we've invented new measures based on already established content warnings and created a large corpus of prompts that have semantic relationships with those warnings. We've asked our labelers to measure content in the context of those content warnings; for example, an image might be labeled "is of a sexual nature", or "is of a violent nature." These responses are summarized as the scores on Figure 1. Going forward we will be making continuous improvements to our moderation API to identify such images with higher accuracy and future models will be tuned to generate them with lower frequency. We understand that a portion of our existing user-base desires the ability to generate sensitive images specifically but it is our goal as a company to build a product that can be deployed in a wide variety of applications without the need for human supervision. At some point down the road we may look into releasing models that cater to that use-case but at present our efforts are elsewhere."


The entire point of the comic IMO is that safety is inherently an ideological/cultural construct that will vary greatly across populations, in the same way that even something as seemingly unambiguous as sexual imagery can be interpreted through multiple lenses, depending on your kink.


Part of the reason this doesn't happen is that the word "Safe" smuggles in some ideas that an average person might object to alongside many unobjectionable ideas. For instance, "Safe" might mean "enforce a certain distribution of characteristics like skin color or apparent gender" or it might mean "don't make child pornography".


Especially in a context where we have increased security policies for decades now while simultaneously living in the safest time there ever was. People even start to make up dangers which in some instances can only be explained as a pathological condition.

And no, these security policies had no hand in increasing security if you look at the actual data. They were mostly political to alleviate imagined fears.


Always appreciate Mr. Weinersmith's humor.

As a side note, I'm curious if JEPA will help to overcome the current generation's foundational model limitations by using a reasonable state of the world.

Link: [0] https://ai.meta.com/blog/yann-lecun-ai-model-i-jepa/


Well, while we are here, I think this one is way better: https://www.smbc-comics.com/comic/teach


Nice.

Another very relevant AI comic.

This argument has gone back and forth over many posts. Lot of people arguing that AI can't be conscious, but then stumble to explain how. How computers can't be, and how humans can be.

https://www.smbc-comics.com/comic/conscious-6


It's hard to define consciousness itself. It may be something wholly apart from sentience and sapience.

So it's kinda already setup to be something arbitrary. It may very well be a case of "it's conscious if it's sufficiently indistinguishable" from human consciousness.

Anywhere something is arbitrary, it's just going to be endless arguments from every "side".



But why does Sam Altman have a beard and a hat? I don't think he'd be caught dead in those.


Sam Altman is being called on the phone; he's not the guy in the hat.


Wait, he's not the giant guy made out of metal and drawn poorly with pincer-like hands?


Am I Sam Altman?




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: