I think regardless of the specifics, using generative AI to manipulate people into gambling is predatory and immoral. It's psychological abuse to manipulate people into dwelling on how their lives are awful, and on how purchasing the advertised product could fix all their lifelong problems (which it almost certainly won't).
Some of the worst "abuses of AI" are going to be things we've already fully normalized—we just fail to reflect adequately on our culture and in what ways it's broken.
While you're right, the AI part is almost irrelevant. I think it's immoral that states can advertise their lotteries full-stop.
True, the AI part will make this worse, especially if states get into really targeted advertisements. But the morality line is already being crossed daily.
Can I take this a step further and say that advertising is the true problem here? I tend to think most things should be legal, but advertising most things should be banned.
It seems that society can't delineate between accepting a practice and promoting a practice.
What would you recommend as an alternative to advertisements? I do think a lot of ads go too far, but I think some form of advertisement is necessary to get new awareness of new brands and products. Otherwise only the pre-established large brands and products will ever get sales, which would lead to a lack of competition and all the negatives that come with that.
It sometimes confuses me how people seem to not think that word of mouth is a thing. It absolutely is. In the games industry, there are endless examples. Because on places like Steam, you generally only get promoted if you're already doing well - yet there are countless games that flop at launch meaning 0 visibility, yet they also see great long-tail success.
One of my favorites is Kenshi. [1] Within a few months of launch, the dev was left with literally tens of users. Yet he kept working on it endlessly (while working part-time as a security guard), and today it has more players than ever. And that's because of word of mouth. Which, I suppose, is something I'm also partaking in here. It's a great game!
Word of mouth is a great way to grow, but only once you already have users. Word of mouth requires people to already be your customer to bring in new customers, so the less business you have the less potentially useful it is.
If word of mouth is the only advertisement replacement allowed, it would force many companies to rely on fake sockpuppet customers to try to get started.
I don't propose any alternatives to advertisements. The simple fact of the matter is that as long as people need to pay for mobile data or are subject to data caps, advertisements are a giant waste of bandwidth, and at their very worst a distribution method for malware and other malfeasance on the web. It's a giant "pay for placement" scheme with no regulation, because as long as the advertising companies are making money hand over fist they have no obligation to police the content being pushed through their networks; they get paid either way!
The age-old generic advertisements placed on TV channels were, IMHO, the "peak" of what advertising could be. All other forms of advertising have proven to be predatory. Sites DEMAND you view their ads so that they can receive the money to keep the site going... which inevitably drives traffic elsewhere, and the site dies anyway. So I absolutely agree: if a site/product can only continue operating through continuous advertising revenue, then maybe that site/product does not need to continue existing.
In a civilised society, among other regulations, advertisements would be shown only to people that have pro-actively searched for a service or product.
I mean, we're basically heading towards a basic minimum income at that point, where you truncate the top 5% of the tax ladder and redistribute it to the bottom 25%. At that point, people could pay for services.
Dig a little into the idea that people can "pay" with their attention (and with their poor health, crappy buying decisions, and personal freedoms), and it might fit the dictionary definition of sinister.
> truncate the top 5% of the tax ladder and redistribute it to the bottom 25%. At that point, people could pay for services.
Why should those in the top 5% pay for more than basic subsistence for the 25% at the bottom? Welfare is for survival (and survival only), not for services that are not essential.
I suspect you have no idea what the “top 5%” looks like (in the US).
The bottom of that 5% looks like extremely upper middle class. Big houses, probably a vacation somewhere nice once a year, new cars fairly regularly, etc.
The middle most likely live in gated communities, with any kids going to private schools. They fly first/business class everywhere and can probably even afford to buy one of those tickets at a moment's notice. Their closet is worth more than most people will earn in several years of working.
At the top end, these people have multiple tennis courts, pools, basketball courts, etc. in their backyard. If there is something they want, they most likely have staff on hand to handle it. If they want to go somewhere, they hop in one of their private jets, helicopters, or chauffeured cars and go there. Oh, and they have a team of lawyers ensuring they never pay a dime in taxes.
Source: spent some time with each of these personas in my life.
This space is too small for a philosophical debate about the merits of UBI. Suffice it to say, when the top 5% has 1,000x more than it needs for subsistence and the bottom 25% has never had more than 1x what it needs for subsistence, nothing will ever change. And more to the point, because having 10-20 times more than you need for subsistence has a demonstrable impact on your ability to generate more wealth, 1) some will always get richer and pull away from the majority over time, and 2) eventually the majority of people without subsistence amounts of wealth will rise up and kill you.
Maybe that won't happen in our current system for a generation or two. But then the question those top 5% have to ask themselves is whether they feel lucky enough to avoid an uprising against them in their lifetime.
Yeah, the trade-off of switching money for attention is not obviously beneficial to society. For example I can easily imagine that there's a significant mental health impact of being constantly advertised newer and better things.
> What would you recommend as an alternative to advertisements?
Not Having Enough Information is the least of anyone’s problems.
> I do think a lot of ads go too far, but I think some form of advertisement is necessary to get new awareness of new brands and products. Otherwise only the pre-established large brands and products will ever get sales, which would lead to a lack of competition and all the negatives that come with that.
This is a very post-hoc rationalization. Given that this isn’t how advertisement was invented and I have seen no evidence that this is how it works in practice, I don’t see a reason to accept it as a premise.
See political advertisement in America. That massively, massively favors the big players, not the little ones. Not only to boost them directly but also by actively smearing the other candidates, which moves the outsider candidates from the category of “never heard of” to “scumbag” in the minds of voters. (If you have no other information you have nothing else to go on.)
I would argue that larger brands have to run more ads to sway opinion because people are already familiar with their products. Coca-Cola is one of the bigger advertisers, but everyone already knows what Coke tastes like and whether they like it. Seeing dozens of Coke ads won't change someone's opinion much, whereas a single ad for a new drink can take them from 100% unawareness of the product to knowing it exists.
Ads are not chiefly about awareness. They are about producing want, manufacturing desire. The idea that ads are about “hey we have a product, just thought we’d let you know my good sir” hasn’t been true for over a century.
You don’t stop advertising once you’re confident that 90% of your potential market segment knows about you. You advertise continuously, as long as the desire-making turns a profit according to whatever projections the marketing department makes.
You say that Coca-Cola is one of the biggest advertisers. At the same time, everyone already knows about them. Are they simply pouring money down the drain?
I'm not convinced. I've seen these ads work on myself: I see an Oreo ad and think, oh, an Oreo does sound nice now, actually. Or that something is popular and the best: why go to Busch Gardens when we all know that Disney is the premier theme park?
>Can I take this a step further and say that advertising is the true problem here?
no? promoting stuff with a plausible chance of adding value to someone's life is not bad imho
the problem is the stuff that kills value, like smoking and gambling, not advertising in general. so yes, advertising the (many) bad things is bad, if that's what you mean by "most things"
Most ads are not for something that adds value to your life. When was the last time you personally saw an ad for anything other than a trailer for a piece of media and were excited and happy to see it?
Some argue it’s gotten worse due to recent tradeoffs between privacy and accuracy, such as the negative effect Apple’s ATT has purportedly had on Meta’s targeted ad efficacy.
> the problem is the stuff that kills value, like smoking and gambling
These are not the only things that kill value. Turn off adblock for a week and count the ads you see for quality products that you actually need, that you would not be worse-off for purchasing. Generally if something would actually add value to your life, nobody needs to trick you into buying it.
> stuff that kills value, like smoking and gambling.
But you decided (for others) that this stuff kills value. Not all people would agree with you. And while it's pretty clear that smoking is harmful, the smoker might want it because it relieves stress, or for whatever other reason, and they're willing to make the health trade-off. Ditto with gambling.
I really don't think this is true. The overwhelming majority of ads are just predatory and exploit all sorts of psychological cues and tricks, like false association, just to sell more Product. Coke is one of the largest advertisers in the world, and a perfect example of this. They regularly imply that extremely healthy individuals, the sort that would never touch a Coke in real life, are avid Coke drinkers, implicitly suggesting that if you drink Coke, you can also be like them.
This clip [1] where Coke managed to get a couple of bottles placed in front of Ronaldo during a presser is awesome. A visibly agitated Ronaldo moves them aside out of the frame, grabs a water, holds it up, says 'Agua!', and then carries on. But Coke is so dependent on these false associations that that one little event saw them lose some $4 billion in market capitalization! Imagine if advertising weren't a thing, and people only associated Coke with the sort of people who regularly drink Coke. The world would be a much better, and healthier, place.
And Coke, as scummy as they are, are one of the less scummy advertisers in the advertising world - such as it is.
There could be a searchable directory of products and services tagged by topics. Interested parties could subscribe to topics, to be notified about new products or services.
So you would approve of advertising then, because a notification is an advert (albeit for said topic). But then who would decide which topic fits which product/service?
The bottom line is that advertising comes about from a need - that need will emerge again, even if you tried to make it inefficient to fulfil that need.
No, I would not approve, because ads come through channels that are unrelated to me looking for product information. The distinction is pull vs. push, and at the place and time of my choosing, and being unrelated to any other media.
We also have a need for food and sleep, but that doesn’t mean we should accept being forced into eating or sleeping when visiting web pages or consuming media. The absurdity of that example should prove the point.
> It seems that society can't delineate between accepting a practice and promoting a practice.
There's no "society". Obviously people can tell the difference. You're the one saying the law should treat them differently - why? We have certain exceptions, but what's your case for a blanket rule, or a deny by default rule?
> Can I take this a step further and say that advertising is the true problem here? I tend to think most things should be legal, but advertising most things should be banned.
Yes.
Advertisement is in practice predatory. Predatory psychology. It’s not a symmetrical exchange of want-information vs. has-information. It preys on us.
And I don’t think lotteries are particularly bad. They sell daydreaming and one-in-ten-million chances. Not a bad thing for anyone in moderation. Note that last part: all the other vice sellers (candy, other unhealthy food, tobacco, alcohol) have no better arguments for their existence.
> advertising most things should be banned.
> It seems that society can't delineate between accepting a practice and promoting a practice.
If it were up to you, what would you allow to be advertised? What’s your allow-list? And how would you define it in a way to allow the advertising of future products and services?
Not OP, but my take is that this can be handled by stores promoting certain products. So if you're not in a store, you're not looking to buy and will see no ads.
> Can I take this a step further and say that advertising is the true problem here?
No, you cannot. You are meddling with the forces of nature with this sort of talk!
Without unregulated online advertising, there is no Facebook or Google or Twitter. Modern Silicon Valley could not exist without the fortunes made by showing people ads. Apple, Netflix, Amazon are all deep in it now as well.
>Without unregulated online advertising, there is no Facebook or Google or Twitter. Modern Silicon Valley could not exist without the fortunes made by showing people ads.
> Modern Silicon Valley could not exist without the fortunes made by showing people ads.
The Silicon Valley full of genocide cheerleaders? Sounds like you're threatening me with a good time.
Facebook is ghoulish, and has scammed and brainwashed a huge portion of the most vulnerable people. Google is evil. Twitter is the dumpster fire other dumpster fires are ashamed of.
People seem to be missing your Network reference. Regardless of how you intended it, yes, advertising is undoubtedly one of the greatest and least noticed evils of our age. There have been many warnings over the decades, but they don't get much airtime for some completely unsurprising reason.
> While you're right, the AI part is almost irrelevant. I think it's immoral that states can advertise their lotteries full-stop.
Well, it does highlight the fact that AI will make this sort of manipulation so much easier and faster.
Imagine a world where the only vehicles are bicycles and people crash frequently. Now the first car is introduced and someone crashes at 100 MPH. Would you really say that the car is irrelevant to the situation because people have been crashing for decades?
Yes, you are right: the root problem is the way we view people's minds as a resource to be exploited through psychological manipulation, but that does not make irrelevant the very real new danger of AI.
I understand why you might say that - particularly at a societal level - but individual drivers:
* Must be licensed
* Must have a minimum level of liability insurance that covers other participants in an accident
* Can be fined, get higher insurance premiums, and lose their licenses for breaking driving laws
* Can be put in jail for felonies for sufficiently reckless driving that has the potential to endanger others
As I understand it, commercially licensed drivers usually have even higher requirements and lose their licenses more easily for driving infractions.
GenAI makes a lot of creative actions much easier, faster, automatable, and available to everyone, and "everyone" includes a lot of people with no foresight or discipline whatsoever.
None of this was true when cars were first introduced. Back in the early 20th century it was considered fairly normal that drivers would just sort of plow through pedestrians if they were in the way. There was a big societal battle over who got to be in the streets: the people who had always been there, or the cars.
Eventually jaywalking was invented as a concept by the car lobby.
> And then the automobile happened. And then automobiles began killing thousands of children, every year.
> immoral that states can advertise their lotteries full-stop.
State lotteries are a compromise: they inhibit underground lotteries. The goal of a state lottery is to divert people away from the dangers of the underground while keeping itself from being overly attractive.
We have seen how well this argument holds up with sports betting, which went from something people did in Vegas on vacation to a significant portion of 20-30-year-old males' paychecks.
I don't know where you grew up, but in almost every college guy's social network (usually already consisting of sports guys) we all knew a guy and didn't just bet in Vegas on vacation.
It has always been a significant part of 20-30 yo male paychecks, now it's just acceptable to advertise on ESPN.
There's always been gambling, but there's a lot more of it now. Gross revenue has more than doubled since 2020. I think it's safe to say gambling's new-found ubiquity is a big part of why.
Again, that figure refers to legal gambling, whereas I was referring to illegal sports betting, for which we have basically no accurate way to estimate the total volume pre- or post-legalization.
As someone who was a 20-30-year-old male for 10 years, it's not something I would have ever considered; the effort, risk, and return don't seem worth it vs. just buying VTI or SCHB and letting them sit.
Are the majority of males in their 20s really gambling a significant percentage of their post-tax income on sports? Has this displaced social fantasy-football-type games or the basketball brackets?
Recently it took off (and I do mean recently: post covid).
You have ex-crypto/dropship bros who turned to sports betting to make 'millions'. Basically they're rolled by the apps for half their bets, their winnings are the payment, and their balance stays positive since their EV is 1.60 (while a normal customer's EV is 0.80).
That, plus advertising on college sports (including 'free bets'), it is really a big thing, even for 30yo+ people to be fair.
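To make that EV arithmetic concrete, here is a minimal sketch in Python. The 1.60 and 0.80 per-dollar expected-value figures are the claim from the comment above (not verified industry numbers), and the bankroll, stake, and bet count are made up for illustration:

```python
# Hypothetical illustration of how per-dollar expected value (EV)
# compounds over a run of flat bets. EV here means: each $1 staked
# returns `ev` dollars in expectation.

def expected_bankroll(start: float, stake: float, ev: float, n_bets: int) -> float:
    """Expected bankroll after n_bets flat bets of `stake` dollars each."""
    bankroll = start
    for _ in range(n_bets):
        bankroll += stake * (ev - 1.0)  # expected net change per bet
    return bankroll

# A promoted bettor grows in expectation; a regular customer bleeds.
print(expected_bankroll(1000, 50, 1.60, 20))  # 1000 + 20*50*0.60 = 1600.0
print(expected_bankroll(1000, 50, 0.80, 20))  # 1000 - 20*50*0.20 = 800.0
```

The point is that the sign of (EV - 1) decides everything over enough bets: the comped bettor's balance stays positive by construction, while the regular customer's loss is proportional to total money staked.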
> This is the same argument for casinos but states don’t run those.
In most (or at least a lot of) countries, states regulate casinos _so much_ that it becomes almost a technicality that casinos are privately owned.
Also with lotteries you have the scale factor -- in order to have an attractive lottery, you need a ton of ticket buyers, so if you have a national level lottery you can definitely make neighborhood lotteries way less appealing.
Blackjack and poker tables don't scale the same way.
Um, no, the goal of state lotteries is money for the state to spend on things that voters would not want to pay taxes for.
Most of the lotteries were started with the promise that they would "fund education" but in reality dollars are fungible. To the extent that they even try to maintain the facade, that's all that it is.
I go further than this. It's immoral to weaponize someone's evolved dopamine system against their own long-term interests. Whether that's fast food, alcohol, social media, loot boxes or gambling. It's fine to have products in these loose categories, but we've clearly gone overboard when people are fat and depressed because their evolved circuits have been so thoroughly hacked like moths to a flame. At some point we're no longer in the paradigm of voluntary consumer choice.
>when people are fat and depressed because their evolved circuits have been so thoroughly hacked like moths to a flame
People in America are fat and depressed because of a culture of looking to blame biology for poor self control rather than take responsibility for their own behaviour. Advertising is just as much of a thing in East Asia but it doesn't have the obesity problem that the west has, because there it's regarded as a personal failing. Advertisement was just as much a thing in America 50 years ago as it is now, but people weren't fat and depressed then, because people were told to take responsibility for their diet and their state of mind.
Willpower is like a muscle; it's been proven to strengthen with use. Letting people get away with blaming advertisement, biology, or whatever for their failures prevents them from ever needing to develop their willpower.
Willpower is a bucket that fills during your sleep, and with time spent with people you enjoy, and it empties with each waking second, faster if you have to use it to defend yourself against distractions. You can make the bucket bigger (especially as a child) so you start the day with more than your peers, but ultimately, resisting too much can be exhausting.
I say that as an ex-obese person who can now say he's 'fit'. I wouldn't pose in a magazine or on Venice Beach, but I'll be quite happy to go shirtless this summer for the first time in more than a decade. I'm constantly hungry, and will be all my life; I fucked up my hormones by eating too much. I don't keep any ready-to-eat food at home; I have to prepare it first. I eat out a lot, and always place myself away from the food at gatherings. Each decision I make to avoid eating, I have turned into a ritual, so it isn't a decision anymore.
If I had to also stop smoking/drinking/gambling, I wouldn't have succeeded. If I had half my salary, I wouldn't have succeeded.
Obesity in men is quite hard to see/feel at first, especially since I busted my ankles before putting on weight, so most of the physical decline I blamed on my accident and not my weight. By the time my fat showed on my chin, I was already at a BMI of 34 (I'm at 24 now; I was around 26-28 for 4 years).
I also was fit as a teen, my upbringing gave me a very good support system and 'community', so my willpower is easy to refill: a good demonstration, a 1st of may, a short night/weekend in a 'ZAD' (https://en.m.wikipedia.org/wiki/Zone_to_Defend) and I'm good to go do any soul-crushing task for hours.
You should read Robert Sapolsky's new book on free will. The more you learn about biology the more you realize that willpower and free choice are a convenient illusion that appeals to human narcissism but doesn't actually have explanatory power.
The introduction of fast food caused America to get fat. The introduction of Ozempic is helping America reverse that.
Neither of these changes are because the average American gained or lost willpower.
These changes are because of biological cause and effect. Fast food and Ozempic are products that hack ancient biological circuits that drive behavior. It's empirically falsifiable causality, not vague woo.
That's forcing things into an artificial binary when it's more appropriate to think of it as a spectrum, with unambiguously aligned products (e.g. whole foods that improve user health) on one end and unambiguously exploitative products (e.g. krokodil) on the other.
Worse still, because it’s a state-run lottery, it’s not subject to truth-in-advertising laws, so they can advertise the jackpot you would receive over 30 years as if it were the lump-sum payment you’d receive for winning.
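To see why advertising the 30-year annuity total as if it were the lump sum is misleading, here is a rough present-value sketch. The 30-year schedule comes from the comment; the $300M jackpot and the 4.5% discount rate are illustrative assumptions, and real lottery lump sums are set by ticket sales rather than this exact formula:

```python
# Hypothetical sketch: a "$300M jackpot" paid as 30 equal annual
# installments is worth far less today than the advertised headline.

def present_value(jackpot: float, years: int = 30, rate: float = 0.045) -> float:
    """Present value of `jackpot` paid in equal annual installments,
    first payment immediately, discounted at `rate` per year."""
    payment = jackpot / years
    return sum(payment / (1 + rate) ** t for t in range(years))

print(round(present_value(300_000_000)))  # roughly $170M, not $300M
```

In other words, the headline number and the cash value can differ by 40% or more before taxes, which is exactly the gap that truth-in-advertising rules would normally force an advertiser to disclose.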
It's one thing for a human to pay another human to call up leads or whatever, and an entirely different thing to create a factory that does the same job and hits thousands or millions of people at a lower cost than paying the single human. The scale difference is significant, though I agree that neither is moral.
I don't have a problem with state lotteries because it's essentially charity towards public works. You could argue that the gambling aspect makes it addictive, but it's basically entertainment.
A massively profitable charity towards public works, funded exclusively by poor and lower-middle working people who are addicted to waiting for their numbers each week.
Well, there is a fine line between entertainment and a drug. That is precisely what makes entertainment so protected by modern governments and corporations: they know where that line is, and how to cross it.
I really want to know who green-lit this after seeing all of the controversy with Google, thinking: yeah, we won't have a problem like that billion-dollar company did.
There are already a lot of sleazy ads for gambling and the lottery. But this is extremely predatory, since it personalizes the prospect of winning.
Gambling is predatory and immoral(*), regardless of what technology is used to advertise it. I know that makes me sound like a teetotaler who was trapped in cryogenic sleep for the last century, but I really can't think of a good reason to allow any kind of gambling at scale in our society. It's filled with negatives.
(*) of course, this is my opinion, you might not have the same morals as I do.
But unfortunately, it will have to stay legal, otherwise people would still seek it through illegal means, possibly exposing themselves to much worse dangers.
It's the same problem with drugs (including alcohol) and tobacco: they need to be legal in order to be regulated.
I disagree, because industrialized gambling is very different from illegal gambling in terms of availability. I see the prevalence of slots in my neighborhood after they legalized gambling wholesale in my state and I don't think it's been made better by "regulation" except through some additional taxes, it's just made it easier to access.
The same could be said about prohibition of alcohol and tobacco. Raising the age for legal tobacco consumption did reduce consumption, and Prohibition did permanently lower alcohol consumption.
> manipulate people into gambling is predatory and immoral
Exactly. Especially at the government level. Fine, some states allow casinos to operate, but it's pretty evil when they run lotteries themselves. The ridiculous justification, "we're using these funds to help educate people about math," is just that: ridiculous.
Why are you using sentences other than the template that I showed?
The outcome for "It is very immoral to stab someone with a knife." and "It is very immoral to kill a billion people with a bomb." is very different in scale and kind. What I highlighted is that the means by which a common objective is reached is irrelevant when the objective is inherently immoral.
I hope this helps you understand how language works.
> What I highlighted is that the means by which a common objective is reached is irrelevant when the objective is inherently immoral.
I disagree completely if, after the initial objective is reached, the means then enables further objectives that are also immoral. And the more advanced the technology, the more immoral objectives will be reached.
Aren't they equivalent in terms of morality? If they aren't, then you've basically answered the Trolley Problem by implying that the outcome with more deaths is morally worse.
Magnitude is a question of pragmatism. I don't think pragmatism is very popular among people when looking at things like the war on drugs (despite so many people being against it, nothing has changed).
Well, in general, there is a very strong mythology with technophiles in general and here in particular. It is the mythology that we should further technological progress because, well, "because!"
Unfortunately, this mythology is becoming exceptionally maladaptive in modern society but we can't see it because it is so deeply ingrained.
It is so deeply ingrained because there are many ways in which technology has made life better, both for the technophile and the average person.
Admittedly, there are many harms as well. I can't say whether one outweighs the other, and perhaps it is because the technophile often avoids many of the involved harms that it becomes too easy for many to overlook them or play down the impact.
It is a difficult question indeed, and it depends also on the following: are we measuring the developments of today by how they will affect the world in 10 years or 100? Most people seem to agree that the latter is the more logical measure, yet we operate as if we are using the former.
My biggest complaint is the tendency to conflate change with progress. Perhaps I'm something of a luddite, but in the last ten to fifteen years or so, I have seen plenty of change, but whether it has been beneficial is often hard to suss out.
It seems rather uncontroversial now that social media is detrimental to teen mental health (and I really don't think it's limited to teens). From the start I saw reasoned arguments that this sort of technology was deleterious for a few reasons, but the counterargument I usually saw here was, "yeah, but look at how much Facebook pays developers... and they contribute to open source". Two of the most prestigious companies here, the F and the G in FAANG, are purely advertising companies (and the others have plenty of it as well); what are they doing but trying to manipulate you into buying things you probably don't need? Obviously the pay is attractive, but is that how anyone but a psychopath wants to spend their career?
Part of me agrees with you, but another part of me hesitates to dehumanize anyone, as that way lies madness.
I'm reminded of the film They Live (1988). The protagonist, Nada, seeks a job at a construction site, as that is his trade. Due to the union, the foreman can only hire Nada as a day laborer, as Nada is not in the union. His way of life is gate-kept by powerful interests controlling the hiring process, and access to capital is constrained by these interests. By the time Nada obtains the sunglasses, he has already lost that job and is well down the path of becoming radicalized. Ironically perhaps, he first heard about the group producing the sunglasses when they hijacked the TV signals to essentially advertise their ideology.
Once Nada acquires the glasses, he could be said to be enlightened, but I also view it as a sort of cautionary tale, considering how the film ends for Nada. Along the way, he sees through all of the ads, and in fact much of the surrounding content itself rings hollow for him. His actions are reminiscent of Ted Kaczynski in this context. Why didn’t he instead seek to join the union? Were the glasses actually what they purported to be, or were they a kind of rose-colored glasses that were designed to manipulate the wearer just like the advertisements did, but while making them believe they were seeing things more truly? Nada thought he was acting under his own agency before putting on the glasses, and yet he so readily trusted that he was still after putting them on, while adopting an anti-capitalist, anti-authoritarian ideology as part of an underground fringe group.
Perhaps Nada was just tired of being part of the system, but by rage quitting as he did, we as the viewer are led to believe he goes out in a blaze of glory, but I view his exit to stage left thusly as “full of sound and fury, signifying nothing,” to quote the Bard of Avon. There is no exit from capitalism that way, if there is any alternative at all if capitalist realism holds true, and Nada’s final line is perhaps an admission that he was himself used all along, and he was simply tired of scraping by and being in scrapes. He had gone down the path he was on as long as he was willing to go, and there was no way back, as he was already too far gone to come back.
I’m reminded of the is/ought distinction, and I’d make a similar distinction between advertisements/content. Content is itself a similar kind of appeal to believe or think about concepts and ideals and ideas that come from outside ourselves. Due to the power of belief, capital seeks to gate-keep access to content, and advertising acts as a sort of escape hatch, blow-off valve, or side door through which those seeking content can be reached, for a fee. It’s not exactly ideal, and not without its own hidden costs to the buyer or seller.
Lack of capital is the conflict that leads to the crisis in Nada’s life, but I wish that his story had a happy ending. I feel that Nada was perhaps less free after putting on the sunglasses than he was before, as strange as that may sound. He fell for a classic blunder, believing that he actually was immune to advertising, rather than being sold an even bigger lie: that he could (only?) effect change through violence, that his life was no longer worth living.
Every time an AI model is released, people on this site show up to complain about how it has been censored and bent to some puritan agenda.
‘Why,’ they ask, ‘would I want a politically correct AI that’s been nerfed so it can’t offend anyone?’
Well, this is what happens if you build real applications using insufficiently censored AI.
This is why OpenAI and Google and Stability always explain how they have restricted training content and aligned their models to avoid producing controversial output.
Because they want people to build applications using this technology and those people do not want this headline to appear about them.
Each product has its own value proposition. I don't think anyone opposes each business having control of its own AI pipeline; the crux of the problem is at what stage of the pipeline this censorship is injected, especially in open models.
Of course businesses have the right to release AIs with censorship, and we as customers have a right to challenge that and ask for a product that is actually useful to us.
Saying it again: If a business releases a model, we have a right to demand that it is useful for us because we're the business' customers.
If those demands aren't met by a business, another business should be able to meet them in open competition. And this only happens when open, uncensored models are not banned by law.
> Saying it again: If a business releases a model, we have a right to demand that it is useful for us because we're the business' customers
Open models == free == no customers. For open foundational models, yours sounds like a demand that someone else invest millions in training a foundation model to your liking, while you're not motivated enough to fine-tune your copy of the model for hundreds or thousands of dollars?
If you're not paying for the "censored" proprietary models, you are not a customer either. If you are a paying customer of "censored" models, and would like them to change, remember businesses have the right to choose who they want as customers - and they can decide they don't want the "uncensored" market segment. I'd love a $25,000 electric Ferrari too, but Ferrari is not about that.
You're right, businesses can choose their market segment, there's nothing wrong with that. They just have to bear the loss of customers when the segment they were aiming for makes demands they're not prepared to fulfill.
In your example, it doesn't have to be Ferrari specifically. Someone could make an equivalent product at a lower price even.
But I have something else to say about your comment.
It's that nothing is entirely free. Google lets us use their search service, but we're paying with something else than money.
When a business "gives away" an open model "for free" it's only in their self-interest, and this does not free them of the expectations/responsibilities toward their customers. No matter what form that self-interest takes, we're still their customers even if we're not paying money.
So if they release an open model that falls short of what we expect, they're gonna end up losing.
> If a business releases a model, we have a right to demand that it is useful for us because we're the business' costumers.
What makes you their customer? Are you offering them money for access to the uncensored model you say you want?
Do you have some business that you’re going to be able to build that rests on access to an uncensored model, and their failure to offer that to you is preventing them from getting paid by you to provide the underlying technology for your venture?
Because on the other hand there is a state lottery over here who absolutely is willing to pay someone money for a safe, censored model. They are an actual paying customer with a real use case and cash to back it up.
Of course, it turns out the AI they ended up using didn’t (if this story holds up) actually meet their expectations with respect to safety and alignment - but let’s be clear: they wanted a safe, censored model.
I find it hard to believe that the market for uncensored models - LLMs that occasionally output 4chan style racist tirades, diffusion models that occasionally generate pornography - is actually worth addressing, compared to offering models which don’t do those things.
> I find it hard to believe that the market for uncensored models - LLMs that occasionally output 4chan style racist tirades, diffusion models that occasionally generate pornography - is actually worth addressing, compared to offering models which don’t do those things.
Well I think that's for the market to decide, yes? If state intervention is out of the question, obviously.
> but let’s be clear: they wanted a safe, censored model.
This confuses at least two things. Censorship is about intent and enforcing a world view. If I want a model to generate something for me and it doesn't because of censorship, that's one problem.
If I want a model to generate a fun pic of me at the beach and instead it makes a nude photo, that's completely different.
> If I want a model to generate a fun pic of me at the beach and instead it makes a nude photo, that's completely different
What if someone else's idea of a fun pic at the beach is a nude photo? Is it only censorship when the worldview being enforced is not aligned with yours? A truly uncensored model should accommodate everyone, but society will (and should) absolutely burn down any AI service that accidentally generates nudes from a beach photo of a family that has minors. "Intent" is a human attribute, AI is probabilistic.
This abundance of caution by the companies avoiding nudity is unfortunately read as promoting a worldview or pushing an agenda, while it's just risk mitigation.
Way to completely misunderstand the issue at hand.
The issue isn’t “we’re mad at them for subscribing to a New Puritan agenda” (although there is one playing out); the issue is that right now there is no way to censor the model for “undesirable” features without lobotomizing the acceptable content as well.
It’s not just removing nipples from training. We have evidence that the “safer” the models get, the worse they get.
The more heavy handed the weights are by manual influence, the less they are producing valuable output.
The performance cost borne by everyone comes directly from appeasing the perpetually offended, and I'd suggest not bothering, because those people are never satiated. When you make your identity about being offended, there will always be something that is offensive.
But… the tangent you’re changing the topic to is a side effect of government employees not realizing that these models aren’t yet so lobotomized that creating “problematic” imagery is impossible.
I’m just saying that I don’t think the lottery player who got sent fake porn of herself is necessarily a member of the ‘perpetually offended’, to whom you’re ascribing the demand that model outputs be restricted.
I think she’s part of the class of ‘actually offended’. This image wasn’t ‘problematic’, it was potentially actionable.
And when you are a business, actually offending your customers is normally not a good idea.
Striving to create software systems that avoid actually offending people is a good idea, even if it comes at the expense of some other benefits. Whatever ‘performance’ we apparently give up by training systems not to generate nude photographs has to be weighed up against the risk that it might do so at an inopportune moment.
Regardless of their intentions, current state-of-the-art automatic censorship systems have dangerously bad false positive and false negative rates, so they inadvertently and inevitably suppress the truth and broadcast misinformation. Their performance on images is similarly bad.
Of course, even if they did work, centralizing control of them is a bad idea. The end user should have the freedom to control such things. (In the same way you can type swear words into a word processor).
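To make the error-rate concern concrete, here is a toy calculation (all rates hypothetical, not measurements of any real system) of how a classifier that sounds accurate still misfires at scale:

```python
# Toy illustration of why "99% accurate" moderation still misfires at scale.
# All numbers here are hypothetical and purely for illustration.

def moderation_outcomes(total_images, unsafe_rate, false_pos_rate, false_neg_rate):
    """Return (wrongly blocked, wrongly allowed) counts for a batch."""
    unsafe = total_images * unsafe_rate
    safe = total_images - unsafe
    wrongly_blocked = safe * false_pos_rate      # legitimate content suppressed
    wrongly_allowed = unsafe * false_neg_rate    # harmful content let through
    return int(wrongly_blocked), int(wrongly_allowed)

# A service handling 1M images/day where 0.1% are actually unsafe,
# with a 1% false positive rate and a 5% false negative rate:
blocked, allowed = moderation_outcomes(1_000_000, 0.001, 0.01, 0.05)
print(blocked, allowed)  # 9990 legitimate images blocked, 50 unsafe ones shipped
```

Because unsafe content is rare relative to safe content, even a small false positive rate swamps the system with wrong suppressions while some genuinely harmful output still slips through.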
In practice, these AI models are centralized services. Few people are running them locally on their machine with knowledge of the training data, weights, or system prompts. There isn't a direct line between "user intent" and "system output," unlike the local word processor in your example.
Someone has to make value judgments on the underlying data and algorithms, if only to build the system in the first place. If users can't (or don't care to) run these models locally, who else will decide other than the service operator?
‘The censorship work done in AI alignment is insufficiently accurate to rely on for production use’ is a good and valid position
‘Therefore AI vendors should release uncensored models and let everyone else figure it out’ is not a logical consequent from it.
‘Therefore don’t use current AI tech to build apps that photomanipulate user submitted content and echo the results back to them with your logo on it’ would be a more logical conclusion to draw.
Thankfully that is how a lot of AI is working right now, open source models for both LLM and Stable Diffusion let people have access to uncensored reasonably decent quality AI tools.
In other categories of AI-generated content though, such as speech and music, the open source alternatives are either lacking or completely absent. And with the company behind Stable Diffusion being on the verge of financial collapse, we may see Stable Diffusion fall far behind its closed source counterparts.
FWIW, in trying to explain the various ways machine learning can go awry to our business users, I recently discovered the "AI Incident Database" (https://incidentdatabase.ai/).
It's full of interesting gaffes AIs make and while it looks like this one hasn't been submitted yet, I assume it'll be there soon.
If you're interested at all in tracking these, it's a great resource.
> “I also think whoever was responsible for it should be fired,” she added.
> “We were made aware that a single user of the AI platform was purportedly provided an image that did not adhere to the built-in parameters set by the developers. Prior to launch, we agreed to a comprehensive set of rules to govern image creation, including that people in images be fully clothed. We have not seen the image that the AI provided to this user, but out of an abundance of caution we took the website down,” the spokesperson confirmed to the Jason Rantz Show on KTTH.
Here's a tangential take to all this, but it's the overwhelming thing I feel reading this: Something happened to our society which resulted in effectively no one being able to be held accountable for anything. It's always "well, we wrote this policy and we were well within the policy! the thing just went oopsie daisy, no one could have predicted that!"
They were creating and distributing porn of non-consenting individuals. Everyone involved with this should be fired, and maybe actually it should go further than that.
I feel this tremendously in the day-to-day of the tech world. I will make some system change which brings down production. I hop on a postmortem with my boss, and the first thing she says: "We have a blameless engineering culture. It's not your fault, and I don't hold you responsible." I disagree, entirely: it is my fault. I need to own it, and I don't understand how people function tip-toeing around accepting responsibility for something that is obviously their fault. How do you, the individual, improve?
And the line for your responsibility is where exactly? You want to be personally responsible for every single dependency you pull in, every piece of hardware between you and the end user? This is a farcical suggestion.
> They were creating and distributing porn
Unless they specifically trained their own model to do this - which they obviously didn't - this is just a bug. If we all got fired any time a bug happened, society would grind to a standstill. Shit happens, move on. What's important are intentions and taking reasonable precautions.
If you were messing around with a gun that had the safety on, and the gun malfunctioned and shot a child while you were waving it around, would it be a valid defense to say that you took all the necessary precautions (since the safety was on, after all) and it's just a silly oopsie that the gun did that?
You should have a responsibility to put the necessary safeguards on a potentially dangerous tool that you're using and to do your due-diligence if you're using that tool. It would have been impossible to accidentally produce porn of a non-consenting individual if they had not been using an AI image manipulation pipeline at all. If you do choose to use an AI image manipulation pipeline, you should take on the responsibility to make sure that it's not going to do illegal things you didn't tell it to do. If there's a risk of causing harm with it, just don't use it! You don't need to be producing cute images of people in Vegas or whatever, and if doing so necessarily implies a risk of accidentally creating porn of them instead, you could (and in fact, have a responsibility to) just choose to do neither.
A gun malfunction has a potentially lethal consequence. This bug caused a person to see a fake photo of herself with fake boobs out. Perhaps it's ok that the safety standards for the latter are lighter, no?
You also didn't address the main point of the earlier comment. Is every programmer responsible - even criminally, some suggested - for potential bugs or vulnerabilities in one of their products?
> You want to be personally responsible for every single dependency you pull in
Yes.
> every piece of hardware between you and the end user
Nah. See how you said "where do you draw the line" and then proceed to just ignore drawing the line? I'm ok with drawing a line. There doesn't have to be logic or reason behind it. That's where I draw it; the repositories and websites and apps that I distribute.
> Unless they specifically trained their own model to do this - which they obviously didn't - this is just a bug.
Go try to make Midjourney generate a nude image of someone. It's extremely difficult. You'd have to be extremely intentional and, frankly, malicious. I've never seen it happen.
The way this article is written, it sounds like the user just uploaded a photo and it happened. So, what kind of prompting are they using which led to this? Also: They said they don't have a copy of the image, but that's extremely hard to believe just assuming how systems like this would be designed; the image has to be served from somewhere, are they seriously purging the images from e.g. their storage buckets that quickly? Possible, but hard to believe; of course, they'd say "oh we don't have a copy" if they investigated, found it doing this, and just wanted the story to die.
Or, we could do as you want, and just sweep this under the rug. Nothing to learn. No one to blame. Just a quirky little thing that happened! So quirky! A state gambling office manufacturing pornography of random people to sell lottery tickets! Quirky and fun!
> They said they don't have a copy of the image, but that's extremely hard to believe just assuming how systems like this would be designed; the image has to be served from somewhere, are they seriously purging the images from e.g. their storage buckets that quickly? Possible, but hard to believe; of course, they'd say "oh we don't have a copy" if they investigated, found it doing this, and just wanted the story to die.
I don't find it very hard to believe.
1. If the image can be generated in a few seconds, the easiest way to implement this is to accept the file upload in a request, do the AI stuff, and return a new image in the response. In this case, it would be extra (and unnecessary) work to save it in a storage bucket. Both the uploaded and generated images only exist in memory for the lifecycle of the http request, and then get deallocated.
2. If the image _is_ being saved in a storage bucket (or Redis, or whatever), the bucket might have a lifecycle policy on it that automatically expires images after a set time. This is a pretty common cost-saving measure, so it's not too unlikely.
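A minimal sketch of that first, stateless shape, with a placeholder hash transform standing in for the model call (everything here is illustrative, not the lottery site's actual code):

```python
import hashlib

def handle_generate(uploaded_bytes: bytes) -> bytes:
    """Accept an uploaded image, produce a derived image, and return it.

    Nothing is written to disk or to a bucket: both the input and the
    output exist only for the lifetime of this call, so there is no
    stored copy to retrieve later. The 'generation' step is a stand-in
    hash; a real service would invoke the diffusion model here.
    """
    generated = hashlib.sha256(uploaded_bytes).digest()  # placeholder for model output
    return generated

# The response is returned directly; once the request ends, both buffers
# are garbage-collected and no artifact remains server-side.
result = handle_generate(b"fake-image-bytes")
print(len(result))  # 32
```

Under that design, "we have not seen the image" is the expected state of affairs rather than a suspicious claim.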
Well it looks like they've drawn a line too and found themselves not responsible, so they're already doing what you're advocating it seems. Nothing to learn here.
A decision maker figured they could cut costs by using a DIY model instead of paying for mid journey API calls. They're the ones who are responsible.
But I wouldn't trust any diffusion image gen here. They're all liable to create disturbing and inappropriate imagery, and there's no way to prevent it.
Did they actually distribute the image to anyone else other than the woman? The developers say they hadn't seen the image nor was the journalist able to find it.
Sure, the developers said they hadn't seen the image and that they'd never seen the image generator do this and etc etc. That's regardless of the truth; there's no reality where they'd say anything else, and their word cannot be trusted. That's the point of an investigation. They have every incentive to make this disappear as quickly as possible.
And, to be fair, it's doubtful it would get very far; there's no evidence the aggrieved party wants to sue, she doesn't have the photo it generated, etc. I'm not standing on a soapbox arguing that the secret service should get involved or something. I'm just asserting that, it seems to me, we the people should care a bit more about this than we seem to. Other people saying "it's just a bug!" are incomprehensible to me; there's a spectrum between "your signin button is using the wrong hex code" and "your chemotherapy dispenser is outputting the wrong volume of medicine", and it is not obvious to me that this bug isn't closer to the right side of that spectrum than the left.
An engineer in the room might even know that these generative systems are capable of hallucinatory, and sometimes even explicit, output; but they don't have the political capital to stop decision makers from deploying these systems, or underinvesting in safeguards, etc. That's why we need pushback, ideally lawsuits but at least public outcry, so decision makers treat the decision to use generative AI with the care and consideration it deserves.
That is a good point, I hadn't thought much about that. AI is an excellent tool for shifting responsibility to big tech, who in turn can easily weather the storm with their world-class legal teams.
I find it quite funny, especially if you realize that something like 90% of the Stable Diffusion model fine-tunes out there are actually made for generating porn or images of females. Go to the website that has the most image generation models to verify this for yourself: https://civitai.com
WARNING don't visit this site on your work computer.
But point being, it is actually kind of hard to find a Stable Diffusion model variant that is fine-tuned to avoid fairly broken generation that comes from the base models but doesn't have female objectification built into its fine-tuning dataset. Especially if you go searching for "Stable Diffusion models" and happen upon that site, since it does have the most models.
Don't know what part of the shadowban Matrix you're looking at, but I did not see a single nipple in 10 pages of image and model scrolling with Woman selected as the highlight choice. Vaguely a letdown after that many NSFW warnings. (Obviously, many models are meant to create "porn" images, although even those looked mostly like upskirt anime generators.)
Notably, a LOT of beefcake and partially shirtless male imagery on normal search. Also, quite a bit of "psst, this is really furry imagery"
What's even funnier is if you work professionally with diffusion models, you have to go to civitai.com as a work requirement!
I've had the talk with at least 5 significantly older coworkers who didn't know what "booru" websites are or why it is that civitai is so full of anime. Imagine trying to explain what the hell "Pony Diffusion" is and why everyone is using it suddenly.
How often do AI sites show the example of an astronaut riding a horse on Mars?
Changed the horse to a duck and never got a satisfying result.
AI doesn't understand; we learn through trial and error how to get the result we want, but it seems the way to the result is always different.
That's like programming, but the keywords change in every function.
> Changed the horse to a duck and never got a satisfying result
I also found this curious. Stable Diffusion is great at creating this one image but then for some mysterious reason sucks at almost every subsequent request using default/recommended settings. Sometimes even a request for another astronaut riding a horse produces garbage.
"’Our tax dollars are paying for that!’... Megan explained"
Yep, about $.01 to generate that image, distributed among 7 million taxpayers in that state. I love that people are so focused on saving pennies that they don't comprehend the long-term issues we are gonna face :)
While I do understand how frustrating it is, almost any other argument sounds better than the monetary one.
The phrase is more common in conservative circles than liberal ones. Since the site is explicitly a conservative media outlet, it's not surprising that they used this quote.
I don't think most people understand how millage rates work. Complaining about how expensive some projects are, but then it's less than $5/year if you actually calculate the millage.
All this GenAI to basically get people to spend more time on their phones, either looking at ads or gambling. At the same time devaluing genuine artistic achievements for low effort simulations of such.
What a depressing technology we are all working on.
The saddest part of this is that everyone involved, from the reporter, to the lottery staff, to the company that provided the service, all appear to think that the prompts they provided to the AI are somehow bulletproof and that their human understanding of the parameters matches what the software will do: “we had the developers check all the parameters for the platform and were comfortable with the settings, but after further internal discussions today, we chose to take down the site out of an abundance of caution.”
I know eventually the reality of these tools’ flaws will seep in but it’s always somewhat surprising to me how certain most people (including tech folks in the field who ought to know better) are that the AI actually “understands” the prompt in the way that a human would.
Turn on the news, or check social media. Every AI example shown is of something remarkable. That is very much the vision that AI companies are selling.
It's easier from a privacy perspective to not store and retain the submitted and rendered photos. One less thing to worry about regarding the privacy agreement.
This is a state gov. The blame is shifted 100% to the contractors and consultants who developed the site
Number 1, that’s because state governments aren’t paying salaries for full time AI developers. WA has a huge software industry that pays competitive salaries.
Number 2, state positions are the last bastions of pensions in our workforce and nobody is risking their nest egg for a decision that they can pawn off on a temporary contract worker.
I started my career at a place like this. All decisions are made by a committee and the actual work is outsourced. Nobody is going down for this.
The pictures look kinda nice, actually! A pity it turned into another AI fiasco. Reminds me of the Microsoft chatbot Tay, another of those well-intentioned but ultimately doomed AI projects.
It has nothing to do with this story or its credibility, but I was surprised that https://mediabiasfactcheck.com/mynorthwest/ indicates this site is owned by the LDS Church.
The website and radio stations are owned by Bonneville International, a publishing company headquartered in Salt Lake City, Utah. It is a subsidiary of Deseret Management Corporation, a holding company owned by the Corporation of the President of The Church of Jesus Christ of Latter-day Saints.
> We were made aware that a single user of the AI platform was purportedly provided an image that did not adhere to the built-in parameters set by the developers.
I laughed at this. What a non-apology with no accountability.
That's the quote that stuck out to me, too. It's like they're insinuating that the person is lying (while elsewhere admitting they don't have evidence either way). I mean, I don't want to backseat-quarterback here, but would it have been so hard to say something like, "We have decided to take our website offline while we investigate a report that our website generated an inappropriate image of a user"?
When it comes to AI image generators and questionable content, I've wondered if AI companies can address the problem by using a pair of AIs. Use one AI to generate the image, then feed that image to another AI with a list of questions, such as "Does this image show nudity?", and "Does this image depict violence?" Just a list of everything the AI provider wants to omit from its output. If the answer to any is Yes, discard the image and try again (or just refuse to furnish any image).
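A rough sketch of that generate-then-screen loop, with both models stubbed out (the stub functions and tags below are purely illustrative; a real pipeline would call an image generator and a vision-language classifier):

```python
import random

SCREENING_QUESTIONS = (
    "Does this image show nudity?",
    "Does this image depict violence?",
)

def generate_image(prompt: str, rng: random.Random) -> str:
    # Stub: simulate a generator that occasionally produces disallowed output.
    return rng.choice(["beach_scene", "beach_scene", "nudity"])

def ask_model(question: str, image: str) -> bool:
    # Stub: pretend a second model answers each screening question.
    return (image == "nudity" and "nudity" in question) or \
           (image == "violence" and "violence" in question)

def violates_policy(image: str) -> bool:
    # Flag the image if any screening question comes back "yes".
    return any(ask_model(q, image) for q in SCREENING_QUESTIONS)

def generate_safe(prompt: str, max_attempts: int = 5, seed: int = 0):
    """Generate, screen, and retry; refuse entirely if attempts run out."""
    rng = random.Random(seed)
    for _ in range(max_attempts):
        image = generate_image(prompt, rng)
        if not violates_policy(image):
            return image
    return None  # refuse to furnish any image

result = generate_safe("fun day at the beach")
```

The trade-offs are latency and cost (every output is generated and then classified, sometimes several times) plus the second model's own error rate, but it decouples generation quality from safety enforcement.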
“Megan feared that the image could potentially get out and that it would happen to another website visitor. She told a friend, who happens to know someone at Washington’s Lottery, what happened.”
AI might erode some of the gains we got from camera technology, because the better it gets at manipulating images, the less humans trust digital images as evidence.
I think the vindictiveness is all this is about. She's looking to win a lawsuit.
I don't agree with your "rightfully so" perspective. A woman in the privacy of her own home put a picture of her face into a prompt and it spit out a picture of someone that maybe looks a lot like her, topless. Big deal. She either has to be feigning outrage or she's never seen herself naked in the mirror.
Personally I’m of the opinion that the government shouldn’t be generating nude images of anyone. It doesn’t take real outrage or faux-outrage for me to hold that opinion either.
If we’re arguing pointless things… it also wasn’t real.
How about the meaningful thing, that why is tax payer money being used to create a generative AI site at all? This should be a non-problem because this is in no way the role of government.
Sure, but handing out crudely photoshopped nude photos of strangers is not going to go over well, especially if you're wearing a uniform that says "Washington State Government Employee" on it.
The psychological effects of "Imagine how your life could be.." are manipulative and disgusting. It's playing to baser instincts and if they were run by anyone other than the State, they'd probably be illegal.
"As soon as we did, we had the developers check all the parameters for the platform and were comfortable with the settings, but after further internal discussions today, we chose to take down the site out of an abundance of caution, as we don’t want something like this purported event to happen again."
It's weasel worded, which makes it sound like they're not 100% certain. It should be easy enough to check the logs, but I imagine there's somebody out there right now going "Why the hell didn't I log every picture we generate?"
Logging every image generated, using personally identifying photos, adds a big risk with almost zero upside.
The reason not to store the images is right here in front of us.
“Oops we may have generated topless photos of women who submitted their photos, but I can unequivocally state that nobody aside from that person saw the image, if it even existed”
Another point in favor of not storing it: storing all the images your users generate could require a non-trivial amount of disk space. And considering this is government, they're probably using a server farm and bare metal, not infinitely scalable file storage.
If you stored a 50kB avif (which would be way higher quality then you need for this), you could do 20 million images / TB. Bare metal can fit almost 2 PB of raw storage in 1U now. "Infinitely scalable" is not really a thing anyone needs to worry about.
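The arithmetic checks out; a quick back-of-envelope in Python, using the 50 kB estimate from above:

```python
# Back-of-envelope: how many 50 kB images fit in 1 TB?
image_size_bytes = 50 * 1_000        # 50 kB per AVIF, as estimated above
terabyte = 1_000_000_000_000         # 1 TB, decimal, as storage is marketed
images_per_tb = terabyte // image_size_bytes
print(images_per_tb)  # 20000000 (20 million images per terabyte)
```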
Do you believe that “the parameters” will be followed by AI image generation? If they think the parameters are somehow bulletproof then they don’t understand what tool they are using.
There's almost certainly porn and nipples in the training set. It's pretty hard to eliminate 100% of NSFW images. That means there is always the possibility of something like this happening.
Bodies are bodies, nudity is no big deal. But someone should have the ability to decide how their body is depicted. Someone should be able to say, "I'm not comfortable baring my breasts." And we should be ok with that.
The person consented to putting their face into a scene of some sort. Given that the majority of people on a beach are topless, selecting “what would life be like if I could take a holiday?” doesn't make it shocking that it shows you that.
I’d love to see your numbers for the “majority of people on a beach” being topless, because outside of places in Europe I don’t think it’s going to shake out quite as overwhelmingly as you seem to think. And since we’re talking about someone from Washington state in the first place, it’s certainly not the norm there.
Toplessness, interestingly, is not illegal in Washington. And if the parent poster is including male toplessness, it's possible that some beaches are at times majority topless.
The rest of their argument is rubbish, of course, but it's kinda funny that they're right in that "broken clock" sort of way, a little bit.
"I don't see the problem, you asked for a quick cartoon sketch of yourself, and I gave you one back with the big bare breasts that I imagine you have. Why are you mad? What prudery! I am an artist!"
You've been posting so many flamewar comments and breaking the site guidelines so frequently that I've re-banned your account. Not cool.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
No it's depending on the intention of the person wearing those same breasts. This is very clearly non-consentual presentation of an individual in a sexy pose. There's an argument to be had about female nudity, but this is separate from getting visually stripped by a computer without your consent.
The point here isn't anything to do with whether or not the image is softcore porn or what that even entails, it's that the image generator, being vastly an inscrutable pile of data, wound up generating something that was contextually inappropriate. This is a known failure mode of ML models in general, and it is very often extremely funny, though in this case it's particularly not, given the unsuspecting subject.
There will always be things that are considered inappropriate in particular contexts.
No, "contextually inappropriate" has a specific meaning, and that's not it. How do we know what is contextually inappropriate? Well, generally, when you violate expectations of what is contextually appropriate, people get angry and it winds up in the news.
But even that aside, I hope you realize how absolutely insane this line of thinking truly is. It doesn't matter if the user is "delighted", if it were to generate virtual child pornography I'd hope you would consider this to be inappropriate for the lottery website.
In this case the context we are talking about includes not just one user, but rather the general target demographic as a whole.
But let's be honest, semantics debates are a waste of time. The ultimate source of truth for "appropriateness", a wishy-washy concept to begin with, is the public court of opinion, for which the opinion is pretty obvious based on:
1. This news story existing
2. The developers promptly pulling the app even just out of caution
3. The top responses either agreeing or not disagreeing with the inappropriateness
And yes, just to preempt it, of course given enough people and time you'll find someone that will take issue with anything, but this isn't a "gray area" case, this is a dark shade of black area case.
Was this an appropriate behavior for the application? No. It's just that simple.
> I rather think that existence was due to a source and a journo - not any public court.
It's pretty hard for me to conceptualize that you are still trying to argue "actually, maybe it's appropriate for the silly lottery marketing website to output unsolicited porn without warning because one user might enjoy that" by literally nitpicking the semantics of the idea that the average person is not going to find this appropriate.
A journalist and a source can indeed make for some bunk stories. It's still a point for this being a legitimate concern. Another obvious point towards that is the fact that it made it to the HN frontpage and now we're discussing it.
> Promptly? "Washington’s Lottery waited until after my inquiry before turning the site offline".
March 30th: the incident in question
April 2nd: gone
Two business days is quite prompt to decide to entirely shut something down over a single incident that isn't even confirmed to have happened, actually, yeah.
If I upload my kids picture to a website that reimagines them as an elf, I give consent to that. I clearly do not give consent to creating an image of my kid dressed up in a nazi uniform saluting Hitler.
Giving consent to one thing doesn’t imply that every other possible use of the technology is also consented to.
Lotta people do and have been taking issue with gambling and lottery advertisements since before AI got involved. I'd imagine that group is going to continue beating that drum regardless of the technology used to sell gambling and don't feel the need to ask.
I think if you thought the post was worth making in the first place it's a matter of personal integrity to keep it up even in the face of pushback, downvotes, or flagging.
And brigading here is also a thing, and comes at punishments of less karma and rate-limiting.
Most people want the "expected talking points", and especially so here. If you ask the bigger questions, people collectively lose their shit. I removed the text from other posts because people have comprehension problems and resort to personal attacks.
There's a whole slew of questions here, to which "hate AI" is just the expected response.
1. Why the double standard of women's (not men's) nipples equated to porn? [sex equality concerns]
2. Is this even porn? Even SCOTUS has the position of "I'll know it when I see it". Contrast this with non-sexual nudity, which too is a thing.
> Gotta love how women's breasts are simultaneously "softcore porn" and "women's rights", all depending on the type of news/propaganda utilized.
Very strange. It's almost as if the settings in which humans operate require "nuance" and "complexity" and "context" to understand and can't be navigated with a flowchart.
Are you of an age to remember a certain variety of picture in National Geographic? Assuming those ladies knew they were being photographed and were ok with it being published, where does that land in your flowchart?
Once you have consented to a naked photo being made public, you no longer have any control over how it is viewed. One man's art is another man's porn. You cannot have a photo taken of you and then consent to it publicly only for artistic value. Therefore there is only personal consent in the flowchart, nothing else.
They said in the article they’d prompted it to avoid nudity. I tend to believe that because it makes sense for a promotional website connected to a state agency.
When the context is sexually suggestive and someone is using your face for it, it is.
I agree that the US seems to be too nipple sensitive, but human communication and interpretation happens in context and the context is not good in this case.
National Geographic was the premier source of photographs of bare breasts for earlier generations. I remember a joke about it on Happy Days (1970s show referencing the 1950s, reflecting a common thread through both decades) and certainly, the photographs were popular viewing among the adolescent males of my youth.
Not necessarily. It depends on the expression. Beds are used mainly for resting, and many people sleep naked or half naked. Perhaps it is an American thing to see everything as pornography when it involves a naked body, but there are many cases when it is not and is art instead (yes - that includes half-naked women or men sitting on a bed). Please note that it doesn't follow that her consent is not required in this case.
The only person calling it "porn" is the author of this article published on a conservative site. It's double-clickbait ("how awful!" and "...but I wanna see porn").
There's a heck of a difference between someone voluntarily participating in a photo shoot that involves nudity, and a random person taking your face and photoshopping it onto a nude body.
Consent is the difference between art and revenge porn.
I don't really buy that they accidentally used a model capable of generating nude breasts. This is either bored glowies trying to decide for everyone whether AI is acceptable or the woman is lying.
Is it really that hard to believe that these idiot machines screw up? Furthermore, I find your implication that “bored glowies” would decide to bring down all of AI by posting a topless photo via the Washington state lottery apparatus, of all things, to be preposterous. Surely the Deep State could come up with something more effective.
Nah, it sounds like a simple latent space adjacency problem. When you give an image generation model the "swimming" keyword, that makes it much more likely to produce bathing suits and bikinis, and once it is already generating that much bare skin it is likely to keep going and make some bare breasts (if many images of bare breasts are trained into the model). Their claimed prompt of "swimming with the sharks" also introduces an element of danger and excitement, which some models could also use as a signal to introduce risqué elements. Basically, if models produced only what you asked for they would produce extraordinarily boring results. Many models come up with additional elements to add to the image based on the adjacent elements encoded in the latent space branching off of your prompt.
The main issue here would be their prompt choice ("swimming with the sharks" was a slightly risky prompt. It could have easily gone another way and generated a gory Jaws-inspired image of her getting eaten by a shark). Secondary as an issue is the model that they used (likely a model that has too many NSFW images trained in). The big lesson for corporate entities is to pay attention to the training data that was used in the model (if you want purely SFW results you have to use a model that has nothing NSFW trained in) and also carefully consider what imagery your keywords are likely to produce as a side effect.
Base model choice is a big one. Probably didn’t pay close attention to their negative keywords either. You can specifically tell models to strongly avoid making nsfw images.
Plus most image generation pipelines have an optional secondary step that checks for nsfw images and blocks the output before it reaches the user. They may have neglected that step.
Something nsfw could still have gotten through, but it’s unlikely given the many safeguards that most pipelines have built in.
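The guard described above can be sketched in a few lines. This is a minimal illustration only, assuming a hypothetical generator and NSFW classifier (`generate_image` and `nsfw_score` are stand-in stubs invented for this sketch, not any real library's API); a real deployment would call an actual diffusion model and a trained safety checker in their place.

```python
# Sketch of a guarded image-generation pipeline: a standing negative
# prompt at generation time, plus a post-hoc NSFW check that blocks the
# output before it ever reaches the user. All functions here are stubs.

NEGATIVE_PROMPT = "nudity, nsfw, gore, violence"
NSFW_THRESHOLD = 0.5  # block anything the classifier scores at or above this

def generate_image(prompt: str, negative_prompt: str) -> dict:
    # Stub: a real pipeline would run a diffusion model here.
    return {"prompt": prompt, "negative_prompt": negative_prompt, "pixels": b""}

def nsfw_score(image: dict) -> float:
    # Stub classifier: a real pipeline would run a safety-checker model
    # on the pixels. Here we fake a score so the guard logic is testable.
    return 0.9 if "risky" in image["prompt"] else 0.1

def guarded_generate(prompt: str):
    """Generate with a negative prompt, then block NSFW output post-hoc."""
    image = generate_image(prompt, NEGATIVE_PROMPT)
    if nsfw_score(image) >= NSFW_THRESHOLD:
        return None  # blocked; never shown to the user
    return image
```

The point of the second check is defense in depth: negative prompts only bias the model away from unwanted content, so a separate classifier on the finished image catches what slips through.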
Yep very true. My guess would be that the feature was implemented by someone very new to this whole AI generated image thing. They had fun implementing it, but did not do a ton of deep research into how to implement it safely, with negative prompts and follow up checks. This is one of the biggest challenges with generative model use. They are powerful, but easy to mess up.
Several of the image generators will create pornographic images due to their training data. This really isn't unexpected if the devs weren't careful with the generation service they chose.