I don't want "safe" AI, I want "accurate" AI. If I want a picture of NYC getting nuked, how can the AI know in which context it will be used? Maybe I'm writing a sci-fi book and looking for cover art ideas.
Who is driving this big push for "safety", anyway? Do consumers actually want safety, or is a concern-trolling vocal minority pressuring AI corporations to kowtow?
I personally hate it when SV moral arbiters nanny me.
You should know that your position is societally untenable.
Excellent example: in the early days of Reddit, they very much had a "you can post anything that is not illegal" policy, and this led to subreddits like jailbait, fatpeoplehate, etc. Around 2012, I think (someone can look up the exact date), there was a coordinated effort to shine a bright media spotlight on the "underbelly" subreddits, which made a ton of Internet and national news, e.g. features by Anderson Cooper on CNN, prominent op-eds in the NY Times, etc. Reddit changed course and specifically banned sexually suggestive pictures of minors and pictures shared without consent.
I have an opinion, but I am not arguing here for whether the decision was "right" or "wrong". I am simply pointing out that Reddit had zero choice in the matter. If they had held firm with their "anything that's legal" policy, they would simply not exist today. They would have been deplatformed to the nth degree, and furthermore laws would have changed (and actually have, e.g. in the case of revenge porn) to make their existence untenable. E.g. platforms like Reddit can't exist without Section 230, which is already under attack and which I guarantee would have been removed if you had lots of platforms saying they're fine with people posting unknown, sexually suggestive pictures of minors. Never mind having zero advertisers willing to pay their bills.
I'm not crazy about "SV moral arbiters" either, but I don't like posts like yours because they pretend a reality that doesn't exist.
> I'm not crazy about "SV moral arbiters" either, but I don't like posts like yours because they pretend a reality that doesn't exist.
I'm not OP, but I don't like posts like yours because they pretend that the current status quo (not even the status quo, your idea of the current status quo) will never, ever change.
> I'm not OP, but I don't like posts like yours because they pretend that the current status quo (not even the status quo, your idea of the current status quo) will never, ever change.
I'm all ears for suggestions on how to realistically change the status quo; I just haven't heard any reasonable ones that are actually tenable. Also, while I think there are plenty of annoyances with the current state of things, I rarely find them as catastrophic as their detractors make them out to be.
I mean, to address your questions we have to answer more complex questions about what society and culture are.
My thinking, which may be incorrect, is that you cannot have a society that allows everything, because allowing oneself to be destroyed is in the set of everything.
I'm not sure how your idealism doesn't fall foul of the paradox of tolerance?
But these aren't really the successful models that further creativity. On the contrary, we see a buzzing community that creates thousands of models, and it specifically chooses base models that are not lobotomized because they simply provide better results.
So you might not want to stifle that with corporate control.
I heavily doubt they would have been deplatformed, and I don't think there was ever a pandemic of CSAM on Reddit. This is a mischaracterization, usually deployed as an argument to enact greater control.
It could be that Reddit did indeed have no choice here; there was a media campaign against more open platforms.
The situation is indeed tenable if platforms simply do not cooperate with external pressure for content moderation. There are such platforms, and they still operate today.
They did have a choice though. They could have built a federated system in which they as the developers have no ability to censor.
If each subreddit could be hosted by its moderators, you couldn't apply pressure to the developers, because they would have no control over it. You could apply pressure to the moderators hosting that subreddit, but they don't care more about ad revenue than about keeping their subreddit up: if it's not up, there's no ad revenue.
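Purely as an illustration of what I mean (every name, URL, and the resolution scheme here are made up, not any real system's design): the client resolves a community name to whatever host its moderators run, so there is no central party to pressure.

    # Hypothetical sketch: each community resolves to a host run by its
    # own moderators, so no central developer can take it down.
    from urllib.parse import urljoin
    import urllib.request

    # In a real federated design this mapping would live in DNS, a DHT,
    # or a signed registry, not a hard-coded dict.
    COMMUNITY_HOSTS = {
        "knitting": "https://knitting.example.org",
        "retrocomputing": "https://retro.example.net",
    }

    def fetch_frontpage(community: str) -> bytes:
        """Fetch a community's front page straight from its moderators' server."""
        host = COMMUNITY_HOSTS[community]  # resolution happens client-side
        with urllib.request.urlopen(urljoin(host, "/frontpage.json")) as resp:
            return resp.read()

Pressuring the people who wrote fetch_frontpage accomplishes nothing; the only lever left is each individual host.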
Ah, you mean that time before Reddit was entirely a hivemind?
The most valuable information on the Internet will always come from the chans.
You just need to be able to evaluate everything with critical thinking.
You know, that thing where you can reason about something without necessarily agreeing with it.
Please don't interpret this as bait. From my perspective a vast majority of people have embraced outsourcing their reason.
Things like fph exist exactly because people are told day in and day out that diabetes is healthy. It's perhaps the ugliest manifestation of the natural reaction to this polar opposite called body positivity. But banishing thought doesn't eradicate it. It simply validates all those driven away.
Sadly, that is something vitally absent from the borg-based Internet.
In September 2019, some anonymous Swedish guy said that a virus was coming early the next year and pleaded with people not to take the vaccine that would come in late 2020.
I don't know what point you're trying to make. Given the baseline rate of schizophrenia, I should expect about 24 million people to be saying equivalent things in any given year.
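(Back-of-the-envelope, assuming the commonly cited worldwide prevalence of roughly 1 in 300 people: 8,000,000,000 / 300 ≈ 27,000,000, so ~24 million is the right order of magnitude.)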
You have still completely failed to communicate whatever point you were trying to make. I have absolutely no idea what idea you were attempting to convey.
Well, what was the value? The prediction was wildly wrong; it was supposed to come from "a pharmaceutical company working with military op's in a west coast state." That's about as far from China as you can get.
And the conspiracy theory that the vaccines are the real toxic part is just dumb. We know from hospitalisation and ICU figures that people were getting sick well before a vaccine was delivered. Plus, it requires me to believe that everyone in NHS Scotland was lying to me for completely unknown reasons.
In order to believe in these theories I am required to shut off my brain and not think about what happened in other countries, other health services, even though there are plenty with full information in English. It's just daft.
AI corporations do what corporations do, which is try to avoid lawsuits. Or worse: the well-known, incredibly arbitrary extrajudicial punishment that can be summarily meted out by the duopoly of payment service providers.
Here's what I speculate (I'm not an insider or anything; this is pure speculation).
The "true believers" at OpenAI mostly don't care if bad images get generated. They are worried about "safety" as in the sci-fi Terminator scenario, not "safety" as in "avoid harming people with offensive or unpleasant images".
However, they see steering a cutting-edge AI toward some goal as an important thing to study, and they want to practice doing it; to do that, they need to pick some arbitrary goal.
The arbitrary goal becomes this type of "PG-rated-only" censorship because it helps avoid bad press coverage, makes it easier to raise money, etc. But they don't sincerely care about it. Some others in tech do sincerely care, though.
I think it's just them being scared of models running on their servers producing things that cause bad PR. And the reason they insist on running things on their servers is competitive advantage, as well as just extracting more money out of users that way. (I definitely don't think actual safety is the concern.)
> I personally hate it when SV moral arbiters nanny me.
I've been annoyed with them pushing their mores on the global internet since at least 2010.
I suspect you'd also hate the substantially different mores that I would have in their place.
> Who is driving this big push for "safety", anyway? Do consumers actually want safety, or is a concern-trolling vocal minority pressuring AI corporations to kowtow?
1. Yudkowsky, whose general vibes are an important part of the discussion for about half the people who work on these AIs in the first place, even where they disagree on particulars.
2. Anyone who noticed the way biases in training data propagate stereotypes, an observation which substantially predates any of the currently interesting generators.
3. Anyone who has been on the receiving end of normal, old-fashioned inappropriate content, or who is the parent or guardian of such a person.
4. Also the usual concern-trolling types, as some people have already been arrested for using such models to sexualise specific people, including, indeed, at least one case involving a minor.
5. Anyone who can see the potential for these models in automated personalised propaganda.
6. Anyone concerned with the potential for a fully automated system that A/B tests a constant stream of newly generated output until it finds a super-stimulus you can't help but engage with (a crude sketch of that loop follows this list).
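To make point 6 concrete, a crude, hypothetical sketch of such a loop; generate_variant and measure_engagement are stand-ins for a generative model and a live engagement metric, not real APIs:

    # Bandit-style loop: generate endlessly, keep whatever users
    # engage with most. Both functions below are placeholders.
    import random

    def generate_variant(seed: int) -> str:
        return f"content-variant-{seed}"       # stand-in for model output

    def measure_engagement(variant: str) -> float:
        return random.random()                 # stand-in for A/B test results

    best, best_score = None, float("-inf")
    for step in range(10_000):                 # constant stream of fresh output
        candidate = generate_variant(step)
        score = measure_engagement(candidate)  # tested against live users
        if score > best_score:                 # ratchet toward whatever
            best, best_score = candidate, score  # people engage with most
    print(best, best_score)

Nothing in the loop knows or cares what the winning content is; it only knows the score keeps going up.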
These groups don't all talk to each other, and in many cases they dismiss each other's claims about severity, likelihood, and timescale, often while still using overlapping language, which makes any conversation on these issues even more difficult than figuring out exactly what someone who just used "woke" as a pejorative is actually objecting to.
Moral panic is driving it. Every time somebody manages to make an AI bot say something outrage-worthy, the press throws a fit - how dare you, think of the children!!! And since western culture is now ruled by concern-trolling vocal minorities, corporations predictably kowtow. Nobody wants to be canceled, or paraded before Congress as somebody who let the evil robots destroy our children for profit.
SV has nothing to do with starting it, though; they just follow. Look at your newspaper, college campus, and talk shows to find the leaders and the fanners of the flames.
If "unsafe" AI becomes ingrained then it really will be banned or have some arbitrary artificial limitations on power ("maximum 100,000 parameters" or whatever).
They are trying to leapfrog the inevitable backlash from governments by saying they are doing the right thing and taking it seriously, yada yada.
I see this comment over and over and over here on HN, and it really blows my mind that people don't get it. These models are made so that enterprise clients will pay a lot of money to use them. I was a director at S&P Global, and if you brought in software that could put pornographic images, violent images, or copyright violations into our research, you would be laughed out of the room within exactly one minute. The push for "safety" exists because we live in a capitalistic society, and the people who will pay the most money for access to these models want them to be "safe". If you don't like it, go ahead and create a competitor and see how many sales you get from enterprise customers.
SV isn't trying to be your nanny. They are trying to make money. I'm shocked this isn't obvious to well-educated people who visit this forum.
> These models are made so that enterprise clients will pay a lot of money to use them.
That's not obvious to me. Why wouldn't they eventually target the consumer market, like search engines do?
Speaking of which, why don't search engines like Bing and Google forcibly censor pornographic queries? Why am I allowed to search for and view pictures of Xi Jinping juxtaposed with Winnie the Pooh on Bing and Google without my hand getting slapped? Why are search engines exempt from getting roasted by the media for serving up inappropriate results in response to inappropriate queries?