This presentation briefly mentions but then seems to mostly forget about "Elsagate" which they call the "Peppa Pig scandal".
James Bridle argues convincingly that the genre of bizarre YouTube videos which appeals to the toddler reptilian brain ( https://medium.com/@jamesbridle/something-is-wrong-on-the-in... ) is not created by hostile or evil actors but instead has evolved organically based on what stuff toddlers want to click on. Kids' click patterns reward more video themes like "Elsa tied up on train tracks kissing Spiderman", so the content industry crams more of that stuff into its new content.
The result, after a few iterations, would not have passed editorial controls at 1990s Nickelodeon (!), which would normally have halted the feedback loop, but with no one at the helm -- to "censor" or otherwise exert editorial control -- YouTube's kid-targeted videos are just a whole forest of weird.
Does YouTube want to allow their platform to become a laboratory for rapidly discovering local maxima in very young children's fantasy worlds? Do they have any choice? Should they step in and publish rules for what children's content is allowed? Should they hire some kind of human curator or editor to enforce those rules for child-focused videos? Should Web platforms act in loco parentis?
In this case, the "Peppa Pig scandal" style of situation, the producers are machine-generating content that gets clicks and the consumers are children.
When the issue is the viral proliferation of "fake news" and hate speech, the content producers are people or state propaganda apparatuses, and the consumers & re-sharers are grown adults.
It seems like it's a different topic with maybe different guiding principles to decide how & whether to censor these different groups of consumers & producers.
Kids' videos on YouTube Kids and censoring the internet are very different things.
It's like saying, we don't wanna sell alcohol, because if we did, we would have to sell alcohol to kids.
On a platform that caters to everyone and does not have age restrictions, you would have to sell alcohol to everyone -- and a subset of everyone is indeed children.
By the way, is there any evidence of this? Or are such claims perhaps similar to how old people never understood new things and claimed that "punk rock and heavy metal could seriously tamper with kids' mental health"?
This document is showing the hole in that position -- an outright attack (Elsagate) aimed at children. A cursory inspection by the parent sees the child watching a harmless Peppa the Pig video, while in fact she's watching snuff.
This is a problem that Google has to address somehow (because that is what is demanded of them), while not censoring things aimed at adults. That's why the conclusion is a call for consistency and openness.
That isn't a hole, it's an iterative process. Parents didn't know there was snuff on YouTube Kids. As soon as they find out, their kids are not allowed to use it anymore and with declining viewership, whatever garbage feedback loop allowed that to happen is destroyed.
As a parent, I don't have time for this shit, so I did not ban such movies. I banned YouTube, all of it.
People put way too much trust in free-market competition. You know, if consumers were actually conscious of their choices and free-market competition actually worked for pruning the weeds, we wouldn't have diabetes or obesity or pollution or global warming.
> Kids' click patterns reward more video themes like "Elsa tied up on train tracks kissing Spiderman", so the content industry crams more of that stuff into its new content.
To be fair, many of the Tex Avery and Tom and Jerry cartoons with which almost everyone grew up were a lot wilder than that; thankfully they weren't censored back when we were kids.
> Did you watch them? Some of them are literally snuff, with tons of gore. The stuff of nightmares
I don't have kids, so I only watched what I could quickly find in a simple YT search, and I remember watching that Spiderman scene the OP mentions (hence my comment), which I didn't find that scary (even though it was quite tasteless). The gore stuff (probably meaning visible blood and similar) should probably be restricted; on that I agree.
I’m on the phone and too lazy to copy-paste video references, but as I remember there was lots and lots of violence that I don’t think would pass many of today’s censors. I also remember a couple of episodes involving a “lady cat” (Tom’s love interest) presented as a “femme fatale” which would push some hot buttons in today’s world (sexism, kids being subjected to watch sexual innuendo scenes etc). Ah, and there were also those early 1940s episodes where Tom’s owner is this black servant lady whose face is really never shown, that would generate some really heated conversations were it to be released in today’s political and social context.
The problem is that the whole YouTube UI is machine learned to maximize engagement, so that they can show a lot of ads. The algorithm will do whatever it can to get people to watch more YouTube. We notice the weird results when it comes to videos for toddlers, but the same thing is happening to adults, we just don't see it in quite the same way -- it's always easier to see self-destructive behavior and make attributions from the outside.
Ultimately, interacting with software that has been machine learned for a metric that doesn't serve you or your kids' interests amounts to deliberately swallowing a parasite.
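The feedback loop being described can be sketched in a few lines. This is a toy model with invented video names and numbers, not YouTube's actual system: rank purely by accumulated watch time, and whatever wins gets watched even more.

```python
# A toy model of an engagement-maximizing feedback loop. All names and
# numbers here are invented for illustration; this is not YouTube's system.

def recommend(videos, watch_time, k=1):
    """Rank candidates purely by accumulated watch time; surface the top k."""
    return sorted(videos, key=lambda v: watch_time[v], reverse=True)[:k]

def feedback_loop(videos, watch_time, rounds=5):
    """Each round, whatever gets recommended gets watched more, which in
    turn makes it more likely to be recommended: a self-reinforcing loop."""
    for _ in range(rounds):
        for v in recommend(videos, watch_time):
            watch_time[v] += 1  # engagement begets more engagement
    return watch_time

videos = ["nursery rhyme", "elsa spiderman mashup", "science demo"]
watch = feedback_loop(videos, {"nursery rhyme": 2,
                               "elsa spiderman mashup": 3,
                               "science demo": 1})
# The initially most-watched video pulls further ahead every round.
```

Note that nothing in the loop checks whether the winning video is any good; "more watch time" is the only signal, which is the whole problem.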
I realize it's an often misused excuse for passing various bogus regulations. But your dismissive one-liner is disparaging the comment while completely ignoring the massive context and reasoning provided; this is not the kind of conversation we expect here.
"Think of the children" as a justification for censorship is still "think of the children" as a justification for censorship, regardless of the context. Dressing it up in an attempt to make it more palatable and reasonable-sounding doesn't change what is at its core. And that's a nice touch, painting me as an outsider by pointing at the sign with the rules.
The document is a showcase of trends, as well as pressures being applied to Google from various groups. Like it or not, Elsagate is a well-documented phenomenon, and this document is treating it like the problem that it is.
This is not a generic "think of the children" reasoning to ban things that adults enjoy -- Elsagate videos are targeted at children, they exploit various mechanisms to make children watch them (some children psychology, but mostly YouTube's recommendations algorithm.)
But the actions of the tech giants speak a different language. Alex Jones, crazy as he may be and scamming people with his "man-pills", didn't address children. There were others who were excluded that were less crazy; I'm just pointing out a prominent example.
Consequently, I doubt the intentions are as clear-cut and restricted to these cases.
Children are easily distracted and are an easy target for clickbait, that is true. They are also more inclined to seek out information their parents want to restrict. I think that is true even for people here. You probably did that too.
Alex Jones was not banned for targeting children, and "Think of the Children"-style arguments were not used in his case.
We're talking about Elsagate here, right? "Peppa the Pig" snuff videos? They're aimed at toddlers. They game the algorithm because toddlers select videos from YouTube's suggested videos basically at random, so all someone who wants to monetize a video has to do is make sure to hit as many categories as possible. And they make it snuff because... well, I'm not sure why, but they do.
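"Hit as many categories as possible" can be sketched as a trivial title generator: cross every popular character with every popular theme and upload the lot. The keyword lists below are invented for illustration; this mirrors the pattern described, not any real producer's pipeline.

```python
import itertools

# Hypothetical sketch of category-stuffing: mass-producing video titles by
# crossing popular kids' keywords. Keyword lists are invented examples.

CHARACTERS = ["Elsa", "Spiderman", "Peppa Pig"]
THEMES = ["Learn Colors", "Finger Family", "Surprise Eggs"]

def generate_titles(characters, themes):
    """Every character/theme combination becomes another 'new' video."""
    return [f"{c} {t} for Kids"
            for c, t in itertools.product(characters, themes)]

titles = generate_titles(CHARACTERS, THEMES)
# 3 characters x 3 themes = 9 machine-generated titles
```

Each extra keyword list multiplies the output, which is why a handful of trending terms can flood the suggestion sidebar with near-identical content.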
This isn't about teenagers or even pre-teens going behind their parents' backs, this is about toddlers vegging out in front of YouTube on a tablet. Basically this generation's TV babysitter.
Do you think adults are watching these kinds of videos? This isn't content for everyone being censored because of the needs of children; this is content explicitly for children being censored because of the needs of children.
I'm actually glad to see the conclusion. Everybody wants to censor what they don't like. There are people who believe free speech is not the ultimate value, but rather to protect people from 'harm'. In this case 'harm' being ideas they don't agree with.
The best stance is not to take a side, but to make sure both sides are civil in the expression of their beliefs.
> "Everybody wants to censor what they don't like."
This should be the top comment on all HN discussions involving content moderation, so people can read it before they respond and think about whether what they are advocating makes sense. People these days are so effortlessly able to make that huge leap from "I don't personally like this thing" to "This thing must be suppressed/illegal for everyone."
You can say that, but from the beginning, governments have put limits on speech based on the "harm" it causes. Slander, libel, and threats are all illegal due to the contention that they cause harm. Taking no side is also similar to this: taking no side in a conflict as long as it is civil requires one to decide what is "civil" in the first place. Are comments about race considered "threats" if one person is of the race targeted? How about coarse comments like telling someone to "kill themselves"? It isn't so simple, and even being neutral is a stance of a kind.
BTW, I don't think being neutral in a conflict like censoring or not is the wrong stance, but it IS a stance with its own value judgements.
The difference between censorship and all of those things is that they aren't prior restraints. Nobody filters what you say to make sure you aren't saying something defamatory. You say it first, then someone has to take you to court, where you have the opportunity to defend yourself.
And somebody has to care enough to shut you up to go through the effort of taking you to court, and then they have to win.
Some kind of private sector prior restraint apparatus making automated censorship decisions with no due process... is not that.
I feel like you potentially have a point, but the moderation the slides talk about is also posterior to the speech. For example: tell someone to kill themselves, get reported, get shadowbanned. Deciding what is "wrong" or "uncivil" or "out of bounds" is the value judgement, like deciding what defamation is or what threats are.
> I feel like you potentially have a point, but the moderation the slides talk about is also posterior to the speech. For example: tell someone to kill themselves, get reported, get shadowbanned.
When the penalty is censorship of future speech, it's still a prior restraint. And shadowbanning is obviously not compatible with any kind of due process or even an opportunity to know that you've been accused.
> Deciding what is "wrong" or "uncivil" or "out of bounds" is the value judgement, like deciding what defamation is or what threats are.
Which is why the traditional categories have been narrowly drawn and limited to things that are as apolitical and non-partisan as possible.
I mean, how is making libel unlawful not censorship of future speech under your definition, or making threats unlawful not censorship of future threats? How is this different?
>>You say it first, then someone has to take you to court, where you have the opportunity to defend yourself.
What if you say it and someone is seriously harmed or dies as a result?
The fact is that we have prior restraints for all kinds of things because we realize they are dangerous and have the potential to cause harm. I don’t see why speech should be different. Words can kill.
> What if you say it and someone is seriously harmed or dies as a result?
Frankly, that's the price of freedom, and it's the reason that fighting to maintain our freedoms is a never-ending battle.
Allowing the public to own and operate cars likely results in more deaths from crashes, but it also allows people to travel to and from arbitrary places on their own schedule.
Allowing people to own general purpose computing devices allows people to develop harmful software tools, to route around safeguards, to communicate clandestinely about illegal activities, and to design CAD files for 3D printed guns. Of course, it also brings us the ability to build software that works the way we want (or need) it to, to build successful businesses, to preserve ephemeral cultural history, to expose official corruption, and to organize political dissent through encrypted back-channels. You cannot have the benefits without the risks.
Allowing people to speak freely may result in someone feeling emotionally attacked, it may result in convincing someone that a false idea is true, or it may give people a flimsy excuse to engage in physical violence. But the benefits are innumerable. Unrestricted speech allows us to voice our concerns as citizens and participate in the political process without being silenced. It prevents the powerful from using concerns of "safety" to suppress dissenting opinions. It serves as a safety valve that helps potentially dangerous ideas rise into the sphere of public debate, where they can be taken apart (or even just "rounded off") before they result in something like genocide.
You can't stop speech, period. You can prevent it from happening publicly, on a temporary basis, but in the long term that kind of repression leads to violent revolutions.
That said, there must be some limits; I cannot run you over with my car or use my computer to hack the Pentagon, and I cannot threaten your life. But these limits must be explicit and narrowly constructed or they will be abused by the powerful. That means that there are necessarily "edge cases" that have to be decided by a fair and impartial process! This is unavoidable, because otherwise some people will continually step just over the line and then claim innocence. There is no way to "fix" this; it is always a messy process because of the enormous complexity involved in the world.
It's always tempting to trade freedom for safety, especially for victims who may not have suffered as much in a less free world. But we have to push back against that sentiment, or the future will not be worth living in.
> It serves as a safety valve that helps potentially dangerous ideas rise into the sphere of public debate, where they can be taken apart (or even just "rounded off") before they result in something like genocide.
How's that working out lately? I think we ought to look at the way social media has already become a vector for genocide, as in Myanmar. Here in the US we're currently allowing some kids in immigration detention who have been separated from their parents to be permanently adopted by American families, which (by the very international standards we helped to establish) is a massive human rights violation.
> How's that working out lately? I think we ought to look at the way social media has already become a vector for genocide, as in Myanmar.
One of the only ways the public (in the west) was able to find out about this was through the use of social tech. It just wasn't on anyone's radar before that. Once they are aware, people can bring pressure to bear to stop the genocide, which has been happening.
I also believe that the role of social media as a "cause" of genocide was overstated. That said, I do think that modern social media is flawed and won't last too much longer in its present form.
> Here in the US we're currently allowing some kids in immigration detention who have been separated from their parents to be permanently adopted by American families, which (by the very international standards we helped to establish) is a massive human rights violation.
>How about coarse comments like telling someone to "kill themselves."

I think a few conservatives were directly told to do just that due to the SCOTUS appointment furore -- have there been significant calls to censor those opinions?
It's like everyone is for Censorship unless they are the targets of Censorship. And in this case they are not calling out their own lot very vehemently.
Stuff like this is a huge indicator that whatever everyone's intent going into censorship, it will invariably end with powerful, entrenched stakeholders using the censorship tools to dig in and start purging outsiders and members of opposing factions. In politics, business, culture, you name it, this will be the end result.
People universally believe they are better than their historical counterparts, that they will succeed where history has failed, that this time it will be different. When really all they're doing is carving another notch on fascism's belt.
Cheers. Come up to Alaska and I'll buy you a drink and take ya fishing while we're at it.
The recent heavy push for greater ideological censorship will end up driving centrists to places that cater to extreme content. This is one aspect of the problem that has led to increased polarization in modern American politics.
Facebook, YouTube, and Reddit are too censorious and do not fill the role of a modern free speech platform which must exist. The biggest problem is gaining traction while a large portion of early adopters are extremists banned from other platforms.
My only hope is that the censors will piss so many people off that real free speech platforms like https://d.tube/ can reach critical mass.
Trump very well may be fascist. He seems to want America to be more like North Korea anyway. And what was the context of your comments? Maybe copy/paste? Genuinely curious.
I believe I saw a headline today or yesterday regarding someone being fired due to remarks along those lines. Perhaps not "kill yourself" per se. I think the remark was characterized as a call for assassination.
I think what you're referencing is probably that somebody was recently put on administrative leave pending investigation for tweeting "whose gonna take one for the team and kill Kavanaugh?"
The Google head of design told Repubs to all go F* themselves to Hell (he didn't even qualify it like, "oh, just the bad ones I don't like"; nope, ALL). I guess that's not so bad (for an EXEC at the very company considering censorship). But anyway, calling for assassination is different from asking someone to off themselves.
I'm speaking for myself here, but I think the Russian bots as well as astroturfing bots need to be censored -- and not censored by government regulation but by the terms of service. There are many "active measures" Russian bots active on Twitter. Twitter doesn't seem concerned at all, mostly because those Russian bots are padding their numbers; same goes for FB and reddit.
If Twitter/FB/reddit had done a better job of banning spam bots originating from the GRU offices, then you wouldn't be getting called a Russian bot. The effect of Russian bots is multi-pronged, because it dilutes the discussion into figuring out who is a person and who is a bot.
> half the time I utter a conservative opinion I'm accused of being a "russian bot"
It's not a "conservative" opinion per se; it happens with the left too (as opposed to (neo)liberal) -- think of the Sanders crowd being called sexist/bros by the Clinton crowd, etc. Basically, if it's not a centrist opinion you're out. It's called the Overton window.
Now the cat is out of the bag: freedom of speech is just a lofty ideal that only worked when all media was controlled by corporations that chose what people listened to most of the time. Now they are trying to put the cat back in the bag. Good for reasserting corporate control over thought and speech.
I agree, but I think there is also another effect at play here, which is the end of private circles where opinions can be exchanged "present company excluded". If you watch old TV, for example, you'll be appalled at the sort of things (harsh, elitist, dismissive of the general opinion) that could be said even in a broadcast event -- the audience was self-selected, recordings were rare and generally unavailable, few were reading any sort of newspaper. People were entitled to unpopular opinions because they had a limited circulation.
But today, when everything is recorded, cut and pasted, infinitely available and replayable, every statement made in public must be crafted to be acceptable to absolutely any audience, in any context and at any time. Those who contravene the rule are at terrible risk of having their public, work and private lives destroyed by public uproar -- effectively resulting in a strict censorship of opinions.
Which is why we need to remove their noise from the system. A real debate of ideas is impossible when you have bad-faith actors masquerading as participants.
It's mostly on twitter or reddit. As a side-note another thing that bugs the shit out of me is when someone looks at my history to find a reason to disregard my comment. (Not accusing you of this by the way)
"Oh I thought I was going to have to respond to you until I discovered you've posted somewhere I don't like."
followed up by a flood of down-votes, accusations of being a bot, and sometimes a ban, when I was posting legit content and previously had a very positive score.
Happened to me twice last month. Once for bringing up the Code of Conduct on /r/linux.
edit: here's a perfect example that made it into print and halfway around the world while the flawed reasoning and debunking has yet to gain traction.
Spam filtering isn't censorship. It's not censorship when the recipient doesn't want the message. And botspam is spam.
The difference is when you have a message that some of the actual human recipients actually want. Then it isn't spam, it's just information some people disagree with.
If you don't like what someone has to say, don't listen to them. But you have no right to tell someone else they can't.
How are we defining what a bot is? From what I can tell a bot seems to be astroturfing done by Russians working for a quasi-propaganda office. But, there are many state sponsored "bots" from all kinds of places. Promoting all kinds of things and political views, from allies to foes.
I think I am okay with contents from bots if they are marked like advertisements ("sponsored contents", etc.). For example, "post from ABC" could instead say "post from ABC, a bot account of XYZ".
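That kind of disclosure could be a one-line rendering rule. A minimal sketch follows; the `Post` type and field names are invented for illustration, not any platform's actual API.

```python
# Minimal sketch of bot-disclosure labeling, analogous to "sponsored content"
# tags on ads. The Post type and field names are invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    author: str
    body: str
    bot_operator: Optional[str] = None  # set when the account is a declared bot

def byline(post: Post) -> str:
    """Render the author line, disclosing bot accounts and their operators."""
    if post.bot_operator:
        return f"post from {post.author}, a bot account of {post.bot_operator}"
    return f"post from {post.author}"

print(byline(Post("ABC", "hello", bot_operator="XYZ")))
# -> post from ABC, a bot account of XYZ
```

The hard part, of course, isn't the label; it's reliably knowing which accounts are bots in the first place.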
Nobody in the real world - outside the ivory tower - actually cares about tracking the reputation of a PAC that was spun up for an election in order to run political ads, and will disappear in two months.
Not really. There are disclosure rules for paid advertisements. People can express whatever they want through a bot, as long as they also make it clear that this expression is being delivered by a bot.
Even if their sole purpose isn't any one side or argument, but just to create as much chaos, anger, and confusion among the masses as robotically possible?
I've investigated claims of Russian-controlled political bots on Twitter several times. The stories always collapse when inspected carefully. Indeed it's common sense it would be this way because most of these stories appear to blithely assume Russia has built bots that can pass the Turing test, which is implausible.
I don't believe there is any need to censor such "bots" because they don't exist. Instead they're the creation of:
• Bad, agenda driven journalism.
• Bad, agenda driven academics who are looking for topics to write papers about that might have social impact.
• A desire to de-humanise and silence people with conservative opinions.
Clear examples of these three are described in my essay here:
"Just go to a random [Trump] tweet, and just look at the followers. They'll all be like, guns, God, 'Merica, like, and with the American flags... like who says that? Who talks like that? It's for sure a bot."
Obviously this Twitter engineer knows full well they aren't bots. What he means is, we ban accounts we think are bots without worrying about it, so if I can create a confusion between "bots" and "conservative Trump supporters" then maybe they'll get kicked off Twitter more easily.
Bots won't often have a viewpoint. Russia isn't pushing 1 side or the other with these things, but creating mass hatred by taking as extreme a view (left or right) as possible. They're built to pit people against each other, and they're doing a fantastic job. I'll leave it to you to decide if that's valuable or not.
This, 100000 times over. Otherwise they become the new kingmakers, manipulating people often without them even knowing. Incidentally, there is a good documentary now on Amazon Prime called "The Creepy Line" that is worth watching.
>The best stance is not to take a side, but to make sure both sides are civil in the expression of their beliefs.
I agree with that. It would help if these companies stopped seeking "virality" and "eyeballs" or whatever metaphor you want to use of user engagement and "notoriety/reward".
Otherwise, this is exactly the same argument China's (or Russia's) censors would make. Absolutely not different in any way.
Which "both sides", though? The alt-right isn't in opposition to anything mainstream; they are anti-democratic and should be treated the same way antifa, FARC, ISIS and others are.
Every time democracy fails to shut down movements like the alt-right, things end badly for democracy itself. The most used trope is stuff like Nazism and fascism, but history actually has a lot of better examples.
Because both Napoleon and Caesar effectively ended democracy with applause, unlike Hitler.
I think it's extremely dangerous to treat anti-democratic forces as equal to democratic ones. I'm a conservative, by the way, so it's not like I'm not concerned about the liberal bias in the tech sector. But the debates we have these days, about forcing platforms to include outright anti-democratic values, are crazy.
The problem is that then you have to decide what's 'anti-democratic'. Ask around and you'll find 50 opinions that differ from your own. On top of that, it can then be used as justification to censor almost anything you don't like.
Sure, right now you'll think there is a clear criterion. But 25, 50, or 100 years from now, people could have easily twisted it to mean whatever they disagree with.
This. In some ways, we sort of see that with the PC movement itself. Like when people attacked Scott Kelly for quoting Winston Churchill, the OG anti-fascist, who opposed real life fascists, for God's sake: https://www.bbc.com/news/world-us-canada-45789819
Why aren't those criticisms valid? Scott Kelly seems to think they were.
Edit: quoting him to make this clear: I did not mean to offend by quoting Churchill. My apologies I will go and educate myself on his atrocities, racist views which I do not support.... It's a tweet embedded in the same article.
People's view of leaders change over time. In Australia Churchill was seen as an evil colonialist for his role in the Gallipoli debacle until he was rehabilitated during WW2.
I don’t decide what’s anti-democratic though, we have guidelines for that provided by the United Nations. I mean, it’s been seen and done before throughout history, so it’s not exactly hard to spot.
Attacking the free press is one criterion.
Labeling everyone that disagrees with you the slightest as enemies, is another.
I mean, assuming this leak is real, then Google's stance on censoring is extremely centrist, letting them appease both sides of the political spectrum. But that's not what Breitbart or the alt-right really wants; they want Google to only appease their radical views.
Ironically, it’s a lot easier to get banned from alt-right forums than it is from YouTube. All you have to do, is disagree with whatever dogma they spin, because they aren’t even remotely interested in a democratic discourse.
Of course, looking at history, no one has ever really stopped movements like the alt-right early enough to save their democracy. So America is probably rather doomed.
> Labeling everyone that disagrees with you the slightest as enemies, is another.
You do realise that you're
- attacking the free press by asking for censorship?
- sort of labelling other people that disagree with you (in this instance: disagree with democracy) as enemies (of democracy)?
That's of course very inconsistent.
But it also assumes that "the current form of democracy is the best we can ever have".
Democracy needs criticism, especially since what we have is a 19th century system that assumes information dispersal and real-time voting are practically impossible.
Well, yes. But I don't think censoring the alt-right is problematic if they can't stay within whatever policies companies set. You're not denied access to a supermarket either, but if you start intentionally pissing on their floor, then you'd get thrown out.
Of course I come from a region of the world, where people like the alt-right won, and eventually started putting centrists in prison camps.
You seem to be very confident that you can so accurately judge whether someone is "alt-right"(a nebulous label, at best) that you can censor them preemptively. Where does this confidence come from?
Further, does espousing an anti-democratic idea make someone alt-right? How sticky is the label? What if you did it 10 years ago? Especially with how much of our lives we record nowadays, an accusation like this becomes an easy-to-wield cudgel to shut down political opponents. This rapidly leads to a race of gotchas, where we look for anything that lets us cram someone into one of the "bad" labeled boxes(racist, sexist, alt-right).
Lastly, how effective is the censorship you are proposing? Can you achieve total censorship within the scope of a democracy? Does it actually inhibit the spread of the ideas you loathe, or simply put them out of your sight? What about the radicalization you are causing by censoring these people? You haven't convinced them to stop, you've just muzzled them publicly, but they can still create private clubs and gatherings. What problem have we solved after implementing this censorship?
Just because a belief is civilly expressed does not make it less harmful. For example: people casually believing that the Sandy Hook shooting was a conspiracy drummed up by the government is actively harmful to the families affected by that tragedy. What do you say to those people? How do you solve that problem? And how do you define 'civil'?
Because different platforms have different ranges of civility as well.
There are thousands of conspiracies floating around. Should we consider them all harmful and ban them?
Who decides when a conspiracy becomes harmful? Should we ban flat-earthers from posting their views? After all, if everyone believed it, we would set science back countless centuries.
The way to combat false ideas is with truth; a culture that is always searching for the truth. That can only be facilitated by unhindered speech.
I don't think citing a random saying is a great way to rebut an argument, let alone support a claim of naivety.
Here's another saying for you: you can fool some people all the time, and all the people some of the time, but you can't fool all the people all the time. In other words, eventually the truth wins out.
Do you consider American culture to be one of the more free ones? If so, then why do these conspiracies exist? Why have flat-earth conspiracies and anti-scientific dogma thrived? Why do we have a president that outright denies scientific fact?
If combating false ideas with truth worked, then surely these problems shouldn't exist in America, the land of the free. In the arena of free speech, only the values with the most truth should thrive. Yet that's clearly not the case.
This is the paradox of tolerance. We can say that unhindered speech is the way of countering harmful speech, but then I ask you: What led to the rise of such harmful speech today?
Because that's how the process works. Liars lie and evidence proves them wrong. It doesn't make them disappear from existence, it just makes it so you can discover the truth.
Without free speech there are no fewer lies, all you do is suppress the truth. The Party's version is the only version even when it's fiction.
I don't really buy "unrestricted free speech is worth letting people die over". Sure, censorship can be harmful, but if the best argument you can put forward is a slippery slope, that's not particularly compelling.
Hypothetical harms are generally less harmful than real ones.
> I don't really buy "unrestricted free speech is worth letting people die over".
I don't really buy "restricted speech is worth letting people die over".
> Sure, censorship can be harmful, but if the best argument you can put forward is a slippery slope, that's not particularly compelling.
What slippery slope? Censoring climate scientists is the first thing politicians in the pocket of oil companies would do with censorship powers.
> Hypothetical harms are generally less harmful than real ones.
The actual harms of censorship are widespread and well-documented. If you want to go straight for the serious examples, look at the Great Leap Forward, Stalin's purges, or the Holocaust. In each case, not only were there direct large-scale executions of political dissenters, but the censorship also allowed other atrocities to be kept quiet. Much of the scale and inhumanity of the Holocaust wasn't discovered until the end of the war as a result of Nazi censorship, and keeping it quiet allowed it to continue for longer with less opposition both domestically and internationally. For example, the US could have entered the war earlier.
But it also goes all the way down to pedestrian squabbles where people disagree over matters of life and death at smaller scale (e.g. the safety of a building or a bridge). Censoring true facts has literally been fatal countless times.
>I don't really buy "restricted speech is worth letting people die over".
Only one of our positions is hypothetical.
>What slippery slope? Censoring climate scientists is the first thing politicians in the pocket of oil companies would do with censorship powers.
Says you. You're running under the (faulty) assumption that such censorship would be unregulated. I also wouldn't support such a system, but thankfully that's not what I'd propose either.
Yes, the actual harms of unrestricted censorship are widespread and well documented. But most of the EU censors things that are free to say in the US (antisemitism, as an example), and yet somehow manage to rate higher on independent ratings of press freedom[0].
Your argument reduces to "unregulated power is bad". Yes. Agreed, completely. Unregulated power is bad.
>Much of the scale and inhumanity of the holocaust wasn't discovered until the end of the war as a result of Nazi censorship, and keeping it quiet allowed it to continue for longer with less opposition both domestically and internationally. For example, the US could have entered the war earlier.
This is some strong historical revisionism. No, the US just took a strong turn toward isolationism post WWI and the great depression. The atrocities of the holocaust were not some perfectly kept secret. I'd encourage you to read up on US foreign policy in the 30s, as well as general sentiment among the population [1]. Among other things, antisemitism, worry about another economic downturn, and a strong isolationist ideal underlined by the opinion that "We shouldn't send American boys to die solving a European problem"[2] were the main reasons the US didn't enter the war.
Even when they did, it wasn't out of a sense of civic duty to save the Jews, it was because "oh crap, Germany could actually threaten the US and the world".
Allied governments absolutely knew about the crimes the Nazis were committing, they just didn't care. The population didn't really care either.
Immediately jumping to "the Nazis and Stalin censored people, so censorship is bad" isn't an argument, it's fearmongering, especially when the claimed impacts of censorship aren't really true.
> You're running under the (faulty) assumption that such censorship would be unregulated. I also wouldn't support such a system, but thankfully that's not what I'd propose either.
It's impossible to actually regulate censorship because for regulations to be sound they have to be vigorously debated, but the public can't, by definition, debate whether something should be censored if nobody can talk about it because it's being censored.
> Yes, the actual harms of unrestricted censorship are widespread and well documented. But most of the EU censors things that are free to say in the US (antisemitism, as an example), and yet somehow manage to rate higher on independent ratings of press freedom[0].
In-spite-of, not because-of. And the US scores poorly largely because of this new radicalized censorship where crazy people are now committing acts of violence against journalists that publish stories they disagree with, and because of all the abuse of power (arresting journalists on charges that won't stick as retaliation for undesired coverage/investigating). Succumbing to populist censorial sentiment or creating new opportunities for more of that abuse obviously wouldn't help matters.
> No, the US just took a strong turn toward isolationism post WWI and the great depression.
Those were the reasons they didn't enter the war when it just seemed to be a war. Knowing what was actually happening could have overcome that sooner, or at a minimum spurred people to do more to facilitate the escape of Jews from the affected countries.
> The atrocities of the holocaust were not some perfectly kept secret.
Their full scope was not publicly known until near the end. We recently learned that the government knew earlier:
But even that was the year after the US entered the war.
> Immediately jumping to "the Nazis and Stalin censored people, so censorship is bad" isn't an argument
"The Nazis and Stalin censored people and they were bad so censorship is bad" is not an argument because it applies equally to building roads or using radios.
"The Nazis and Stalin censored people and as a direct result of the censorship more people died" is a cautionary tale and a strong indictment of censorship.
I lost my long and well sourced post responding to this, so I'll be brief(er).
To begin, the atrocities of the "Holocaust" as it's commonly referred to, meaning the large-scale extermination of the Jews, didn't really begin until after the US entered the war. Before then, knowledge of the general plight of Jews in Germany and Europe was pretty much common knowledge. So your claim that broader knowledge of the Holocaust might have caused the US to enter the war earlier has a flaw: there was, at the time, not yet a Holocaust for people to be aware of.
As for everything else, I'm actually perplexed by portions of your argument, because you seem to be arguing that the government should not censor people, and that populist censorship via speech is bad. It seems pretty clear to me that if the US outlawed threatening journalists, it would lead to an improvement in press freedom, and from your comments, it sounds like you would agree with this, since it's a form of "radicalized censorship". But I think you would also argue that such a law was itself censorship and unethical. This is especially true since you conflate censorship and suppression.
Censorship is but one form of suppression. I can suppress an idea without censoring it by generating so much nonsense that an observer can't readily discern between fact and fiction. You clearly have an objection to the suppression of facts, and I would agree that that's not a good thing.
But where I disagree with you is that censorship necessitates the suppression of facts. In fact I think often, well "aimed" censorship can improve discourse and prevent the suppression of ideas.
>It's impossible to actually regulate censorship because for regulations to be sound they have to be vigorously debated, but the public can't, by definition, debate whether something should be censored if nobody can talk about it because it's being censored.
I do also want to call this out specifically. We are, right now, albeit by proxy, vigorously debating the right to support Naziism. Yet I don't see any reason for an observer to believe that either of us supports Naziism. Censoring "support of Nazis" does not necessitate censoring "support of the right to express support of Nazis".
If by 'harmful' speech you mean conspiracies, I would argue they've always been around. It's not a new problem. People have always filled the gaps in their knowledge with unjustified beliefs.
However, your answer would be to deplatform those who would post such ideas. Do you honestly think that would make it better? Did deplatforming Alex Jones help people? It just galvanized those who believed there was a conspiracy and made him all the more popular.
How about we take on the hard task of trying to actually reach those who we believe have false ideas. For example, how do people respond to flat-earthers? Constant ridicule and laughter. The reason they believe in these theories is a lot more complex and nuanced than merely that they are 'stupid'.
Deplatforming Alex Jones gave him a slight bump in popularity after it occurred, followed by it falling off a cliff shortly after.
Also in your example of responding to flat-earthers what other alternative is there when they refuse to listen to reason? When you can't engage with them on any level?
Why is it my responsibility for example to be the one to engage with people who might view me as being a lesser human being due to my skin color or sexual preferences?
Could that have been because the platform people would use to see his new content was no longer able to be used for this purpose, and because people have been cordoned into walled gardens so long that the general public is no longer aware of alternatives?
> This is the paradox of tolerance. We can say that unhindered speech is the way of countering harmful speech, but then I ask you: What led to the rise of such harmful speech today?
You point at anti-science conspiracy theories as a failing of free speech, and you're probably right, but what would you have us do otherwise? A world in which some central authority dictates that certain opinions are never allowed to be expressed, to me, seems like an absolutely horrible place to live, even if it meant that antivaxxers couldn't spread their crap anymore.
Furthermore, there is no such thing as a perfect political system. It's not enough to point at what free speech fails at, you also need to suggest what we should be doing instead, and how the failings of that system wouldn't be worse.
It's a pretty good comparison to democracy, in my mind. It's the worst system ever invented, except for all of the other ones we've tried.
You live in that world right now. We have a central authority that says 'you're not allowed to call black people certain words'.
I always ask this but do you believe that was a mistake? Do you think people have that specific right to exercise their freedom of speech?
Obviously there are degrees of harm to speech. We've already collectively determined this as a society. And to me, anti-vaxxers are a great example. We arrest anti-vaxxers if their children suffer due to child abuse. Do you believe that to be a mistake, that they don't have the right to exercise their own beliefs?
What central authority would that be? Because I'm pretty sure I can find speech along those lines on the top five sites on the internet with little effort.
I always ask this but do you believe that was a mistake? Do you think people have that specific right to exercise their freedom of speech?
I think I'm unclear on what you're asking, but as an American who sees a great deal of value in that approach, and seeing the fruits of the opposite approach (my go-to example being the UK, where libel laws are significantly more open for abuse), then, yes, I do think people have that specific right - on the level of human rights, even.
Do you believe that to be a mistake, that they don't have the right to exercise their own beliefs?
That is a different question - speech and action are two different things.
> you're not allowed to call black people certain words'
It’s actually much worse, the restriction itself is highly racist; only blacks are allowed to use the “n-word”, while whites (and other races I presume) aren’t.
The true reality is that the most persuasive ideas survive, not the most truthful ones. The challenge for the scientific community is to make sure to remember that persuasion matters as much as logic and facts do, if not more.
This is actually an old problem. The value of free speech as a utilitarian goal, as opposed to being an absolute right which even the American system doesn't quite believe, is in having an unfettered marketplace of ideas. But if all ideas are equally valuable then there is no apparent value to truth and no consensus. You certainly wouldn't design a network protocol like that where nothing is allowed to converge. This sort of thing is glossed over in the regime of approximate consensus which holds most of the time in most communities, but when things get really divisive and intractable, or when there are deliberate adversarial attacks, I don't think there is a consistent philosophical resolution.
It's not the belief that is harmful per se. It would be the harassment of those affected that is harmful. To give an alternative conspiracy that was given a lot of oxygen, many believed that Bush caused 9/11. Conspiracies themselves are often wrong, but occasionally correct. I don't believe that one was correct, but for example, Edward Snowden's NSA revelations were initially derided as conspiracy. It's important that even those types of ideas be allowed to compete in the marketplace of ideas.
The problem I see isn't so much that people harbor conspiracies, but that they do so without interacting with the world at large. They do so in a bubble, and in a bubble, a conspiracy can be amplified. This is how extremism develops.
The core problem is that people aren't interacting with enough different types of people in order to moderate their own behavior.
> What do you say to those people? How do you solve that problem?
That's not a problem that needs to be "solved." It's not necessary to force everyone to think alike, or believe the same things. It's not necessary - or desirable - to try to force everyone to never think/believe mean things or unhappy thoughts.
It's similar to saying: how do you solve people that believe in ghosts or god or unicorns (particularly if you're an atheist and believe that's all bunk). Consider hardcore religious people that believe most people are going to burn in hell, and or they believe that most dead people are in hell. Now ask the same question: isn't that offensive to those people that are going to supposedly burn? Isn't it offensive to the memory of the dead? Solve what, that people are allowed to believe what they want to (including offensive things)? The censorship it would require to constrain all of it would be horrific, authoritarian to the nth degree.
The world didn't end because of all the 9/11 conspiracy theories. Thousands of people died in those attacks. Yet the conspiracy theorists proclaimed that a plane never hit the Pentagon, that there were no people on the plane that crashed in PA, that all the Jews stayed home that day, and so on. That doesn't need solving, people will never stop believing and saying crazy things, you can't force it to end.
As longer duration examples, JFK and the moon landing conspiracies persist five decades on, and will never cease to exist. If NASA couldn't solve that with 50 years of education and the extraordinary amount of evidence they've provided, what hope is there for the other items.
Okay, now let's apply your example to something a bit more violent. People that believe black people are sub-human and deserve fewer rights.
At what point do we no longer tolerate their opinions? For example, would you believe it's bad for someone to be censored on HN because they talked about how much they hated black coders? At a certain point we deemed as a society that those sort of views are considered bad. Do you think that was a mistake?
Who decides where the line is crossed though? People believe that children deserve fewer rights as well. We all accept that. What about fewer rights for prisoners? What about fewer rights for non-citizens?
These people literally have fewer rights and we don't consider banning discussion about giving them more/taking more away.
>At what point do we no longer tolerate their opinions? For example, would you believe it's bad for someone to be censored on HN because they talked about how much they hated black coders?
False dichotomy. Nobody has suggested that free speech means that every community has to tolerate these opinions. It just means that people can't be persecuted by the government for expressing them. It's fine to ban racist comments on whatever site you run.
It's not a false dichotomy considering this entire discussion revolves around Google, specifically, censoring people. Now there is an argument to be made that Google being potentially a monopoly should be held to more stringent standards but that is not in my opinion the crux of the argument here.
Also you're essentially advocating for the position that we should be allowed to discuss whether or not black people deserve less rights here.
Facebook, Twitter, etc. have block / unfriend options. It's extremely easy to avoid sites like Daily Stormer (or whatever the infamous white supremacist site is called?).
I intentionally don't hang out with racists, bigots, etc. It's the exact same manner of avoidance. You have to do it in real life, I don't see why the same premise wouldn't hold online. Don't spend time in such places, don't invest into such people on social media. You can't force racists to not be racist, you can choose not to associate with them.
> For example, would you believe it's bad for someone to be censored on HN because they talked about how much they hated black coders?
HN isn't a monopolist platform (such that it can heavily restrict information distribution across an entire nation or more), as is the case with Google and Facebook. HN strictly policing its own site in the manner it sees fit doesn't qualify as censorship, in my opinion. When a monopoly platform does it, it is censorship (because the condition of it being a monopoly means there are limited or no alternatives).
> At a certain point we deemed as a society that those sort of views are considered bad. Do you think that was a mistake?
Not at all. There is widely a societal punishment for such terrible views: you become an outcast in many regards, particularly among the vast majority of people. That society deems such views terrible, does not simultaneously require they be banished / made illegal / censored from all publication (whether books or social media).
That cultural battle should in fact occur out in the open. There is no better platform for it than that. You can't nearly so well combat terrible ideas if they're not expressed.
When it comes to how a monopoly like Facebook should deal with it (in a context where one believes all people have a right to be able to use it): they should delete illegal posts (underage pornography as one example), and they should perhaps restrict blatantly offensive content to adult readers only (18+ in the US), and require an acknowledgement to view it. There are a few directions that Facebook could go with dealing with Alex Jones types, for example, that don't involve heavy-handed censorship or a ban. Limit the mass distribution of their content, as we might in a public square. We don't generally block people from having one-to-one, or one-to-few, offensive conversations in a public square, assuming it's discreet. We do generally stop them from mass distribution to all by shouting offensive things through a large audio setup or similar in public squares. It's an effective means of not denying someone access to public squares (e.g. banning them from Facebook), while still not turning a platform into an open amplification vehicle for terrible, abusive, offensive ideas.
What we're all really talking about here is: should people be censored from being able to offend other people. That central concept is what connects all of these varied speech discussions covering race, religion, sex, gender, political ideology, et al. Should offensiveness be banned? It can't be done, the result of trying to do it is entirely predictable ahead of time. We're seeing countries like Britain attempt it, it's a grotesque absurdity in result. Since what's offensive is inherently subjective and will always vary from one person to the next, it becomes a system of who has power to dictate from moment to moment what's offensive. It becomes a competition of an ever tightening restriction, as each power group adds items to the list (culminating in authoritarianism as freedom of speech essentially entirely disappears). The quest to banish offensiveness ('the right to not be offended'), is a fool's errand at best, and an authoritarian's dream at worst.
If you want to be a member of many of these platforms, and you're a target of that offensive material, you do have to tolerate it. The racists use the platforms tools to find you and hurl it at you, and then the platform gives you little to no recourse to even ignore them outside of not using the platform.
Your advice only works when you're not the target - when you just need to walk past and avert your eyes from what's being said to someone else.
HN is doing “filtering” more than “censorship” - there’s a setting that lets you see flagged comments (greyed out), so the filtering is almost completely transparent, it cannot really be called censorship.
This is a valid criticism. Let's say something truly unsavory was said about a valued relative, etc. It's hurtful. But I think it's less hurtful than not being able to have dissident ideas expressed. As examples, we have Russia and China. They will censor topics using "civility" and whatever else is in their arsenal to crush dissent. Personally I think that is worse.
Well, it usually starts with a genuine desire for civility, but as they say, power corrupts and once you give power over speech to censors (and you see this even on moderated boards), they tend to abuse their power for other purposes. It's not a problem because of repressive societies, they are the symptoms. It's a problem with human nature.
Nobody has said anything about hurt feelings. They are speaking about the families of children killed at Sandy Hook, and the maleficent antagonists who harass them with lies.
Is being physically hurt the bar now? How about living in fear and the extreme inconvenience with moving to different residences out of fear?
Also, for example:
>In December 2016, Lucy Richards, a 57-year-old woman from Tampa, was charged with four counts of transmitting threats in interstate commerce for sending death threats to Lenny Pozner, whose son Noah was the youngest of 20 children murdered
Imagine losing your child and then having to put up with people like that who are fueled by someone trying to get rich off low information people. We as a society can and should do better.
You know, it feels bad to get physically hurt. But to imply that the only damage that can exist would be of a material nature (here: the meat of your body) gives me the shivers.
Yeah I agree. But the crucial difference is, physical/meat damage is objective, whereas emotional pain is only subjective (you can't prove it to anyone else, and you can also make it up). Basing policy on subjective whims is bad.
The "reasonable person" is a well-established principle in law, used to test whether someone's subjective claims of harm should be considered valid from an objective viewpoint.
Few would question that a "reasonable person" whose child was killed in a mass murder would be harmed by harassment, abuse, and claims the event didn't happen.
I'm not normally any kind of bleeding heart, but stuff like this isn't difficult.
> There are people who believe free speech is not the ultimate value
A right to free speech is a recent, American-centric invention, rather than a natural cornerstone of democracy, a fact that often seems lost on Americans.
399BC Socrates speaks to jury at his trial: 'If you offered to let me off this time on condition I am not any longer to speak my mind... I should say to you, "Men of Athens, I shall obey the Gods rather than you."'
The enumeration of free speech as a natural right in a Constitution is an American invention, but the idea itself is much older than that. However, the freedom of speech and press is as important to free nations as an immune system is to a body...neither will last long without it.
Slide 66 is particularly interesting, and articulates an observation that I've made:
> Tech firms are performing a balancing act between two incompatible positions
> 100% commit to the American tradition that prioritises free speech for democracy, not civility
> 100% commit to the European tradition that favors dignity over liberty, and civility over freedom
It is a very American view that unrestricted freedom of speech is a requirement for a well-functioning democracy, and that any restriction of this beyond censoring direct calls to violence is evil.
As a European I have to admit that the American model is vastly superior. Europe had laws to protect "civility" even before the great wars. American democracy is also more successful in general.
Europe just had the "luck" that censorship didn't become very necessary. In some places it was used extensively and those places are no more today.
And lastly, stripping someone's voice directly implies stripping someone's dignity.
As a fellow European I have to disagree. Historically speaking, personal freedoms have been a lot higher in Europe than in the US (since the time something like personal freedom became a thing); slavery was abolished by Great Britain and France well before the War of Independence, for example. Also, where exactly is US democracy more successful overall? We do have free elections in Europe, don't we? And we don't have such things as an electoral college, not even in Germany where we don't elect the Chancellor directly (though the separation of legislative, executive and judicial powers works better in the US than in Germany with regard to the first two, the exception proving the rule).
Censorship was used extensively by every European nation during WW1 and WW2; France convicted journalists of treason during WW1, and Britain was very restrictive as well during WW2. Both places still exist.
Again, no one is stripping me of my voice in Europe. With the notable exception of radicals within their respective bubbles, where freedom of speech is treated as a direct threat. The existence of these bubbles and the hate speech coming from them has to be free speech, on the other hand. But hey, extremists always want it both ways. Logical thinking isn't their strength; self-delusion is much more comfortable, isn't it?
You can pick and choose freedoms that you value or don't, and say Europe has more freedom. I can go the other way. Here is one: In much of Europe, you have nothing resembling the protection that Americans have under the 2nd amendment.
You have "free election" of approved parties. The other ones are subject to arrest for their political expression. Right now in the UK, Britain First is facing trial for what would be protected political speech in the USA. LePen got charged in France, also for what would be protected political speech in the USA. Geert Wilders, a member of the House of Representatives of the Netherlands, was likewise put on trial.
If you have a parliament that chooses a prime minister, then you have an electoral college. The members happen to be the same as those of congress; that is what a parliament is. It's actually worse, because you don't have a solid understanding of who they might choose.
In your last paragraph you admit that you don't really have free speech, but you don't care because your own views happen to be in favor with the current government. It's all fine to censor people you dislike, and to label them as radicals speaking hate speech. You're in a rather privileged position there, quite lucky to not be under a government that labels YOU in that way.
Just one thing: is the electoral college part of Congress and/or the Senate? If not, the systems are not comparable. And I have the impression that you seriously misunderstood my last paragraph, which referred to radical elements thoroughly within their communities. Which, obviously, has nothing to do with constitutional free speech.
In the end, we agree that we don't agree. And for the sake of the discussion it is better that we stop now and don't go towards the 2nd.
The electoral college (EC) is a state system connected neither to Congress nor the Senate. Rather the EC is apportioned representatives based on the total number of Congressmen and Senators a state has. These representatives are chosen by (and vote at the behest of) the individual states according to how those states choose. All but two states require their EC representatives to vote for the Presidential candidate who won their state vote.
Thanks for the answer. I knew that, and yes, the German system has a tendency to blur the lines between legislative and executive branches.
My comment was in reply to a comment that equated a parliament electing a chancellor / prime minister with an electoral college. Despite all the downsides of not electing the head of government directly they still are different things.
You have to separate the function from the body; this is a very relevant distinction. E.g. the head of state in Germany has close to zero actual power, while France has a very powerful presidential system. The only thing coming close to an electoral college in Germany is the election of the head of state: both the candidates and the electors are carefully selected by both chambers of parliament. Personally I'm not happy with that; parties tend to have way too much power in the German system.
All the Bundesrat has in common with the Senate is being a second chamber. It is not elected directly as a body; instead the prime ministers of each state make it up. So it is voted for indirectly during state elections. Votes in the Bundesrat are then based on population, more or less (if someone knows the details, that'd be cool). Coalition governments different from the one at the federal level usually abstain (e.g. if parties A and B govern federally, with C and D in opposition, while parties B and D govern a certain state, that state will not vote on certain issues).
In a democracy, these details do actually matter a lot. Otherwise you won't be able to distinguish democracies on paper from real ones.
Just listening to Death in June on Google Play Music, thanks for the music tip. It's always nice to listen to non-mainstream stuff; radio in Germany can otherwise become a little bit repetitive.
Not that a game like Manhunt is actually my taste (Borderlands is more my cup of tea), but since I checked, I have a question: is it true that, despite being available in the US, it is really hard to actually get? With Wal-Mart not wanting to sell it for a time, and Sony as well as Nintendo having had an issue with the uncut version of it?
Please excuse me that I don't try to voice state-threatening opinions which aren't mine just for fun. But maybe you have experience with that?
I disagree. If by "American" you mean the US, then their democratic system is very weak. It constantly gets bought out by lobbyists (but then, maybe it's just more obvious, more visible than here in the EU), and they only have two parties, whose spectrum of politics, judged with a European eye, is very narrow. There is much less open discussion; the political shows more often depend on the personality of the host rather than the educated opinions of the participants. I am not saying that doesn't exist here, but the tendency to make everything into a product is much higher. And thus much more corrupt.
And with good reason, e.g. mob rule. (As an American I hope this remains so.)
> bought out by lobbyists
On all sides. Goldman Sachs has their man in Washington, as does Planned Parenthood.
> only have two parties
America has multiple parties, but two are the most ascendant. And the reason for this is more complicated than the US Constitution, e.g. state laws requiring a party to have a certain number of members to qualify for automatic enrollment on a ballot.
> the spectrum of politics is very narrow
Americans aware of European politics view Europe much the same way, that is, even the 'far right' of Europe more or less equals the 'center left to center right' of the United States.
> less open discussion
In a sense this is true. American media companies, for example, are dominated by non-Conservatives (the 'center right to far right' of the US) and so they choose their subjects, guests, talking points, etc., from amongst those they favor, i.e. non-Conservative sources. There is much more open discussion on the Internet but, as this post shows, even Big Tech is in favor of shutting down certain (non-Progressive) viewpoints they disagree with.
> On all sides. Goldman Sachs has their man in Washington, as does Planned Parenthood.
I might be misinterpreting you here but it sounds as if you're saying that all is well and good since all lobby organisations are represented in Washington. The problem to me with this idea would be that not all of the people are represented by all of the lobby organisations. Which makes the system not even a mob rule, but simply a plutocracy.
I would say that striving to be 'all is well and good' is a never-ending chore. (As in never will end.)
> since all lobby organizations are represented in Washington.
Not all, but many. There are also many lobbying firms local to states and cities.
Can we do with less lobbying? Sure. But we'll only ever be able to go so far given that organized petitions are guaranteed by the First Amendment '... the right of the people peaceably to assemble, and to petition the government for a redress of grievances.' This can take the form of a single person, a parade of protestors, a lobbying firm, or what have you. Which also means a single person with deep pockets (e.g. actor Kevin Costner) can have more sway than a lobbying outfit (e.g. a law firm representing the Lakota Sioux of the Black Hills.)
> not all of the people are represented by all of the lobby organizations.
Agreed.
I would go even further and say not all of the people are represented by those who actually vote or those who actually occupy office. Undocumented migrants, children, the mentally challenged, jailed felons, those who don't vote whatsoever—all are supposedly represented by Congress and the President, but who really gave Congress and the President the right to represent those who didn't vote for them? (Joking here—many in the US feel that Trump does NOT represent them, despite him being President. Not joking here—but are they right?)
> plutocracy
It's close. Given the lobbying arms of Planned Parenthood, the Teamsters, the Southern Baptist Convention, the National Audubon Society, and many, many (, many) others, that is, the lobbying arms of well-monied groups, we see that organized groups can act as wealthy individuals. So it's a mix of plutocracy and monied non-profits.
> youtube
Good video, and it captures the big-picture problem with American political lobbying. I wish them luck with the solution they've offered—currently, marijuana laws are undergoing the same course that women's suffrage did, so it works as long as people are driven to finish what they started. (Which is not necessarily a good thing—US Prohibition also was a state matter before finally being made an Amendment, the 18th, the one that came right before giving women the right to vote. Check https://en.wikipedia.org/wiki/Dry_state .)
> It is a very American view that unrestricted freedom of speech is a requirement for a well-functioning democracy
It is similarly a very American view that unrestricted freedom of speech is a requirement for a well-functioning internet (well, not completely "unrestricted", but we're not here to argue nuance). Avoiding the obvious debate on which is better, the problem of choice exists in a global medium. Lest it become balkanized, you will have to choose an approach both as a company and as a set of laws. Wrt laws, restrictions are added much more often than they are removed so we should probably err on the side of fewer/limited-scope restrictions. I think most would prefer the greatest common factor of freedoms vs the lowest common denominator of restrictions.
They would prefer it as a way of lowering the cost of doing business. However, as Google has shown with Dragonfly, this attitude is likely to encourage a race to the bottom with the most oppressive regimes.
I don't see any incompatibility in the real world.
In Europe you are free, unless you cause other people harm. If you cause other people harm, it must be decided, whether that was justified (self-defense) or not (criminal offense).
I would say, that in the US the term "freedom" is more liberally used to make a "criminal-offense" look like "self-defense", though I am a bit cynical now.
Dignity means also, that you can speak your mind freely.
Yes, talking about it is. Restricting it at the government level isn't. Freedom of speech is a right protecting you from the government, not companies.
Are you confusing the American First Amendment with the concept of freedom of speech? Freedom of speech is a much broader concept than just the First Amendment.
It's interesting that this document calls Arab Spring "the high point in positivity" of the internet. From what I understand, most countries are even worse off now. Libya in particular is still stuck in civil war to this day.
The Arab Spring was about standing up to oppressive regimes. This might be why it is referred to as "the high point in positivity". But as you noted, this in turn created a massive power vacuum, followed by seemingly endless chaos and instability in the region. However, most people are ignorant of it because it's seldom discussed in the media. Why it's not discussed in the media then becomes a political debate.
If you go back and read articles from that time, you'll find that Twitter, Facebook, VPNs, and other technology was used to help organize grass roots rebellions against dictators.
Obviously it didn't make the Middle East and northern Africa a bastion of democracy like Western viewers may have hoped, but no doubt many people tried to make their countries a better place through democratic revolution. Just because they didn't succeed 100% doesn't mean the revolution was a mistake.
I think that's the point. The Arab Spring, the chance of positive change around the world because of it, pre-Snowden(!).
It's not that things weren't awful then, just that people didn't know what was going on or how things would turn out, so we were all upbeat about the internet.
I disagree. People know how awful their lives are by their well being. The internet offered people the ability to organize and topple their corrupt government, but that doesn't mean that the new people in charge are going to be better.
Definitely. The awfulness that I implied was meant to be mass surveillance. That people could organize and resist was great, needs to be way more of that, not less.
Still, that moment has passed so people aren’t positive about our connectedness in the same way cause it doesn’t seem to be causing any progressive/democratic changes.
Because real centrists like google know all that Libya stuff was a right wing conspiracy theory to prevent hillary from running for president /s
In all seriousness, it's quite abhorrent how foreign policy interventions by progressive leaders are shoved down the memory hole while we are always wide awake when a conservative does something.
Are you suggesting that Libya should be remembered along the same lines as the Iraq and Afghanistan wars?
If you're being serious you should really review the last couple decades. In what world does the intervention of Clinton and Obama equal the Bushes? Even Clinton's Iraq endeavor was downplayed by the GOP at the time as a distraction from Lewinsky.
But the real reason Libya isn't in the public's forethought is the simple fact that we didn't invade and rebuild the country. It's not hard to figure out why it's not treated the same as Bush's wars.
> Are you suggesting that Libya should be remembered along the same lines as the Iraq and Afghanistan wars?
At least on some level, if you're concerned about the results of regime change by Western powers.
> But the real reason Libya isn't in the public's forethought is the simple fact that we didn't invade and rebuild the country. It's not hard to figure out why it's not treated the same as Bush's wars.
No, we just let them build it, then we came in, destroyed it, and left it to militias to fight over the rubble. Why did we do that again? Because as far as I remember, Libya was not posing a threat to the US.
Compared to Afghanistan, where you could at least argue there was some kind of threat of terrorists like ISIS setting up a state that had declared war on Western countries.
“Under section 230 of the Communications Decency Act, tech firms have legal immunity from the majority of the content posted on their platforms (unlike ‘traditional’ media publications). This protection has empowered YouTube, Facebook, Twitter and Reddit to create spaces for free speech without the fear of legal action or its financial consequence.”
Censoring content is suspiciously like editorializing content, and steps social media a lot closer than they want to being publishers and not platforms.
Proceeding with censorship seems to carry the risk of losing immunity under Section 230.
Nope. The whole point of Section 230 is platforms can make and enforce rules, and still have a safe harbor. Otherwise, they all would have lost their safe harbor decades ago.
The issue is that Google is now associating itself with the role of a "publisher" rather than a "platform" (in particular on slide 68). Publishers are not protected under Section 230.
This is a very lazy way to view things, considering oneself a platform instead of a publisher I mean. We all agree that the press by itself is free in the Western world, including the US. So there is, as a direct consequence, nothing wrong with editing.
By being a platform you can basically have it both ways: compete with journalism without being journalism. And once that causes issues, you say "freedom of speech" and you're off the hook.
Maybe in the beginning that was even true. Now, not so much anymore, I guess.
We have a problem with spin in many articles, mostly resulting from writers instead of editors, but the overall conclusion that editing cannot be detrimental to freedom of information is wrong, or insufficient at least.
Everything else was rather unsurprising, pretty much what I expected. But that was the part I had to read again. I hope they just copied it from a 2013 pamphlet. If they consider Libya's slave markets ( https://www.cnn.com/2017/11/14/africa/libya-migrant-auctions... ) and Syria the high point of positivity, I'd wonder what they consider the low point.
This is a reasonable point to make as a consequentialist with 20/20 hindsight.
But it's fair to value the act of getting free from oppressive regimes as a good in itself, independent of the power vacuum it creates. And it's fair to assess a movement based on its prior odds of a possible positive outcome, rather than purely retrospectively once those outcomes haven't come to pass.
I just finished skimming the whole thing—did Google just trick Breitbart into publishing propaganda for them? (I'm reminded of the Valve "employee handbook.")
Not really. It'll have the same result as literally every other revelation in the media.
The side completely, 100% opposed to any restrictions on speech (including "policing tone" as outlined in this presentation) will see this as corporate meddling in people's expression.
The side that supports speech regulation will see this as a structured plan to curb the kind of speech that's considered "harmful" or whatnot.
This presentation doesn't contain anything outside the current ideological dichotomy, no original or unorthodox thoughts or ideas.
If they did, it was a serious own goal. For the last year, everything coming out about Google has been pretty negative, and this seems to fit the pattern.
The underlying premise of this, if real, is that these companies and their centralized platforms ought to exist. "Never expect a man to understand something his salary depends upon him not understanding" and all that.
Well, given that it's done by Google employees of course they'll want to continue to wield their power. For example, they mention "regulation" as a punch-line, IOW, the assumption is that regulation of any form is bad.
Breitbart found quite a lot to take issue with in this presentation, if you care to read their article. Some of the points you might even agree with, or they might lead you to take greater pause when a company like Google starts down the path of censorship.
Weighting results toward “authoritative voices”, delisting results which could damage their advertising revenue, or as they claimed in the infamous all-hands meeting, bending the curve of political discourse in the country to suit their personal world view.
Given the power inherent in Google’s near-monopoly on search, I’m personally a lot more critical of potential political or editorial influences over Google’s methods for listing, ranking, and delisting content.
> Probably not the spin that Breitbart was hoping for.
It's a leaked Google document with "Censor" in the title. It doesn't matter what it contains. Breitbart can spin it however they want. Their readership will hear what they want to hear.
dignity/civility should be a self-determined value rather than one instated by authority. it is impossible to create a universal standard of conduct that will be adequate for everyone, and we don't need to. you can call me a faggot or tell me to kill myself as much as you like and i really won't care at all; i want to see everything short of direct and malicious efforts to cause me real-life harm (and arguably i want to know about those too). on the flip side you have older users who may be completely averse to coarse language or any interaction that falls beyond tv standards of etiquette; things that are relatively tame by internet standards may be completely offensive to them. how do you create a set of standards across your product that are adequate for both use cases without artificially limiting appeal to a single audience?
empower users with the tools to define their own experiences and, if they so choose, filter out what they don't want without assuming what that is. perhaps you could even let users display their version of a content rating so that others could see what content they will and will not filter rather than the communication breakdown that emerges when some messages are opaquely censored. you could even use an honorific system to better determine what is an adequate level of filtering between two users. you might not want to hear 'i'm gonna fucking kill you' from a total rando, but if it comes from your spouse you probably should. don't tell the customers what they want when you don't know the answer.
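The user-defined filtering idea above can be sketched in a few lines. This is a hypothetical illustration, not any real platform's API: each user publishes their own tolerance level, and messages tagged above that level are hidden unless the sender is on the user's trusted list (the "honorific" idea, e.g. a spouse bypassing the filter). All names here (`FilterProfile`, `tolerance`, `trusted`) are invented for the sketch.

```python
from dataclasses import dataclass, field

# Content levels a message can be tagged with, mildest to harshest.
LEVELS = ["family", "casual", "coarse", "anything"]

@dataclass
class FilterProfile:
    """One user's self-chosen filter, visible to others as a content rating."""
    tolerance: str = "casual"                   # highest level this user accepts
    trusted: set = field(default_factory=set)   # senders exempt from filtering

    def accepts(self, sender: str, content_level: str) -> bool:
        # Trusted senders (e.g. a spouse) bypass the filter entirely.
        if sender in self.trusted:
            return True
        return LEVELS.index(content_level) <= LEVELS.index(self.tolerance)

profile = FilterProfile(tolerance="casual", trusted={"spouse"})
print(profile.accepts("rando", "coarse"))   # False: hidden from this user
print(profile.accepts("spouse", "coarse"))  # True: trusted sender is shown
```

The point of making `tolerance` public, per the comment above, is that senders can see in advance what will and won't get through, instead of messages being opaquely censored.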
There's a strong difference between state mandated censorship and a company voluntarily censoring content on their platform.
A company has the absolute right to prohibit any content on their platform that they so choose. There's freedom not to use Google services.
Calls to regulate independent companies are purely ludicrous. If Facebook said tomorrow that any positive mention of, say, Paul Graham, would result in immediate account deletion, that'd be their choice. Very valid criticism from the perspective of a user would be warranted, but that's where it should end.
Using Google services is now required by many public schools in the US. That's where all the homework is posted, for example. Now in the US there's freedom to not attend public school, of course, so technically you have the right to not use Google services. But in practice the vast majority of people would not be able to exercise that right in this case.
Similarly, a number of schools are starting to do communications with parents purely through Facebook. And I expect this trend will keep getting worse before it gets better.
> Calls to regulate independent companies are purely ludicrous.
If they're really independent companies, yes. If they're government-sponsored monopolies, on the other hand, the calculus is quite different. And we're quite close to that line, if not over it.
You could argue that the right solution is for governments to not enshrine these companies in these monopoly positions. That would be lovely, obviously. Too bad the companies are spending all this money to get into those positions (see Google Classroom).
I'd find a private school / homeschool argument disingenuous, so I'm with you that far.
Google Classroom is a fair point (schools also use Google Docs and the like as well), but I don't believe it holds water.
Governments would be well within their rights not to award contracts to, or use the services of, companies they felt were not respecting the values of free speech. The difference between this and active regulation are that one is voluntary for the company (must obey this term for eligibility for extra reward) and one is mandatory (must obey this term or be punished by the law).
I don't have much to say about communicating to parents through Facebook, other than that they should still offer optional email communication.
For what it's worth, I don't understand why you're being downvoted... :( I just upvoted you.
I agree that the optimal solution would be for local governments to not rely on Google like this. What's not clear to me is what the best course of action is in the world in which they _are_.
I think having "no propaganda aimed at elementary school students" regulation may be the only way to solve the coordination problem here, but of course that has the obvious flaw of it being hard to agree on a definition of "propaganda"...
This may be true for companies that are not monopolies (arguable, though). However, Google is clearly a monopoly and once you're a monopoly, many activities that are fine for other companies are no longer fine.
Also note that fascism is about political control of all the things. It doesn't mean that the government runs all the things, but that it controls all the things. With Google in bed with a single party, and exercising its power on behalf of that party, we're in a very dangerous situation.
The most annoying part of this document is the watermark. It seems like a cogent summary of different ideas in the space - and doesn't advocate for "good censorship" (I think the title of the deck is provocative and unhelpful).
Here in 2018, I'm not that worried about Google's position as world's information arbiter and how it chooses to censor content. But I do worry a bit, as the path to hell is paved with good intentions. How will this evolve over the decades to come?
Where does the line between "hate speech" and legitimate criticism get drawn? Criticism is often crude. Humor is often crude. Sometimes humor, even crude humor, cuts to the core of an issue better than any intellectual discussion or essay could possibly yield. The U.S. itself has a long tradition of pointed, sarcastic political cartoons, as an example.
We need not reach too far back in memory to find an example of the grey area between "hate speech" and "free speech": the Danish cartoonist Kurt Westergaard and his infamous Muhammad bomb cartoon.
I think Google and other social media giants will find themselves in an impossible situation, if they haven't already. To be a good censor is to declare the "rightness" and "wrongness" of content in a consistent manner. However, in order to do so, you have to stake a position.
However, these companies sprawl too far and too wide to stake a position without alienating huge swaths of the population. And without making it all too easy for factions to believe that their side is being discriminated against.
>Here in 2018, I'm not that worried about Google's position as world's information arbiter and how it chooses to censor content.
Why, if I might ask?
"Google" has entered the common lexicon as a generic term for "search the internet for something". The amount of power they wield, ignoring all other power aside from the ability to rank the search results of most internet users, is massive and I think this cannot be overstated or minimized.
This is a non-fallacious version of the slippery slope. Just because you're okay with the "do the right thing" (formerly "don't be evil") Google of today does not mean you'll be okay with the Google of tomorrow having that power. Consider very carefully whether you want any one particular company to have that power, and what the remedies are if Google ever does go full evil.
On a side note, it absolutely reeks of double standards on many of the commenters here to castigate Google for their Chinese censorship while giving them a pass for what is described herein. Censorship is only bad when done at the behest of a state? I don't understand this sentiment.
I want no third party deciding for me what is "hate speech" and deciding I shouldn't be able to see it as a result. With a special emphasis on the second part of that phrase.
Hate speech is free speech too. So there's no grey area, legally that is (remember "God hates fags" ruling?). The rest is just censorship, and most of it is political these days. Twitter is not even hiding it, they openly ban conservatives. And it's legal, since these are private companies. Just don't consider them reliable sources of unfiltered information.
The presentation mentions "global inconsistency" as a problem, but I disagree. Communities have different community standards, so they should have different moderation.
As an example, it is rather unfortunate that the American obscenity standard is enforced against the world at large.
Censorship alone is not evil. The problem is, once you have infrastructure in place to make it easy, it will be abused. And if that infrastructure is centralized, the stakes for abuse are much higher. It's the same old problem of "who regulates the regulators". Rather than the issue being about "censorship bad / free speech good", I think the heart of it is: There's simply a lack of confidence that tech giants like Google are part of a larger system with checks and balances, and they appear to be making unilateral decisions behind closed doors.
Restricting access to information generally is until a good reason is given. That could be the protection of personal information for example. But without justification I would heavily disagree with that statement.
There seems to be justification though. If a "news" website is constantly publishing stories that are objectively false, or based on some factual event but distorted beyond any verifiable aspect, then Google has the right (and I'd argue the duty) to remove that content from its platform, especially if that content is targeted at individuals.
It would be good for them to be more honest and transparent, and this leak is a nice step in that direction :-)
But to be serious, something like "We take these positions and stand behind these values and we're proud of it" would seem better than claim they are a neutral platform and they welcome all points of view and let users create and share whatever content.
This didn't even have to be a leaked document. They should have posted it in their "about" section right on the front page. If anyone doesn't like it they can go and make their own Google and share content there instead.
There was the leaked company meeting video after the 2016 elections with people crying and saying "we lost" and then users are supposed to believe those executives will turn around, wipe their tears, walk back to their desks and be unbiased when it comes to moderating news, search results, Youtube videos, charities they sponsor, etc? That's probably unrealistic... So why not drop the pretense and come out and be proud of what they support. Nobody will be surprised and many will welcome it including most of their employees.
I think there is a massive market opportunity here, and all the competitor has to do is build Google from 5-10 years ago - when you could actually still get decent results for your search, warts and all - as opposed to the wildly unrepresentative kindergarten/pollyanna picture of the internet they currently return.
Has anyone else ever wondered how some of the numbers about the internet compare to the real world? Like:
* "2.6 million tweets contained anti-Semitic speech during the US presidential election" - how many conversations were there, in the real world, about the same topic that contained anti-Semitic speech?
* "26% of American users are victims of internet trolling" - how many are victims of trolling in the real world? Or in schools?
* "40% of internet users have been harassed online" - how many people have been harassed at workplaces? Bars?
* Governments under cyber attack? How many of them are targets of espionage? Or how many companies are targets of espionage?
I'm not saying that these are not bad things, nor that those shouldn't be addressed - I'm just wondering if these things are as bad as they might seem to be, when compared to "the real world".
I took that to be the point -- we tend to be at our worst online, thanks to the unique combination of anonymity (I can say whatever I want with as many accounts as I want, without meaningful consequences) and absence of social cues (there's nobody around me, so I have no incentive to soften my language or weaken my opinions).
In other words, the way we talk online is not normal. Even outside of the free speech debate, it's worth noting that.
You could also see it as the inverse. Online we talk as we really are, saying what we really think, and thus the quality and depth of the discussion will inevitably be higher and deeper than in the 'real world' where we are faced with relentless groupthink, desire to conform, self-censoring to avoid unrelated career blowbacks and so on.
Moreover online people can take the time to write something good (not saying they usually do of course), whereas in-person debates usually suffer from being shallow, filled with interruptions, half baked ideas, people getting artificially upset to try and 'win' and so on.
I'd actually like to see far more experimentation and research in online commenting and discussion forums. There's really been very little movement in this space since CmdrTaco retired.
I'm a huge fan of the work you did on Slashdot and Slashcode back in the day. It's still by far the best thought out moderation system I've seen.
I'd love to see a modern respin of Slashcode that reused and enhanced the moderation ideas. Unfortunately my spare time project slots are all consumed already :(
You can see the same effects of anonymity in crowds. Say you are at a concert and the crowd starts to throw tomatoes at the artist for being bad (not that I have ever witnessed such an act): would those who threw a tomato do the same thing if they were alone and had to put a "face" to their own doings (because one could easily identify them)?
This is quite a philosophical comparison, but I think it's on point.
> In what sense is what happens online not part of the real world?
This is mainly the reason why the last "real world" was in quotes. When you take statistics of something, you usually want to compare them to something. For me, a natural comparison for internet statistics is some physical places like offices, public streets, schools, bars. "a real world" - I'm not saying that the internet is not part of it.
Quite a neutral and concise briefing, I think, covering all sides. The unfortunate part of the debate is viewing freedom/liberty vs. dignity/civility as exactly opposing forces. Treating them as opposing forces justifies the idea of resting in the middle, as though such a middle exists.
Instead of thinking so linearly along these two dimensions, I would suggest using a wider lens. Instead of striving for some of both in a common platform, we should strive for some of both concurrently. I was encouraged by the example of Twitter unverifying accounts, but not banning them, but even that suffers from the single platform effect since unverification is objectively harmful.
Those with qualms concerning unfavorable content should ask themselves whether the concern is that it's promoted/visible by default or that it exists at all. The latter is too hard line of a stand to be seen as anything but censorship and should only be applied in the most extreme cases. But the former is something we can tackle. Simply default to safe and let the user opt in to increasing levels of unsafe. You'll always have the problem of being the arbiter of what resides at which levels (granted you can ask content creators to self-categorize with threats of violation for clear miscategorization) but at least it's better than outright banning/deleting. Just make sure that whatever level the user has set, content that is visible is all treated equally under the algorithms (i.e. no demotion). I personally would turn favorability filters off and encourage others to do the same, but at least defaults exist and levels of moderation can be chosen.
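The "default to safe, opt in to increasing levels of unsafe" scheme above can be sketched minimally. This is an illustrative assumption of how it might work, not any real platform's implementation: creators self-categorize items with a safety level, the user picks a cutoff (defaulting to the safest), and everything visible at that cutoff keeps its original rank order, so filtering never demotes content.

```python
# Safety levels, safest first. Names are hypothetical.
SAFETY_LEVELS = ["safe", "moderate", "unrestricted"]

def visible_items(items, user_level="safe"):
    """Return the (item, level) feed entries at or below the user's
    opted-in level, preserving original rank order; filtering changes
    what is shown, never the ordering or ranking weights."""
    cutoff = SAFETY_LEVELS.index(user_level)
    return [item for item, level in items
            if SAFETY_LEVELS.index(level) <= cutoff]

feed = [("cat video", "safe"), ("edgy comedy", "moderate"),
        ("shock content", "unrestricted")]

print(visible_items(feed))                         # default: safe only
print(visible_items(feed, user_level="moderate"))  # opted in one level up
```

Miscategorization handling (the "threats of violation" mentioned above) would live outside this function, in whatever moderation process audits the self-assigned levels.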
Finally, to shift the rant a bit, I just want to say I welcome self-imposed solutions over legal requirements. I am encouraged to know that, while it's hard to compete with a behemoth (why would you try), you can still legally publish your own content sans filter on your own site, or on a site with views similar to your own. We might prefer that all the popular sites respect our views on what is allowed, but since that's obviously impossible in a global medium due to cultural subjectivity, don't attempt to codify it legally.
I read the whole thing, all 85 pages of it. Based on the content, and based on how I read it, I'm now almost 100% sure, not completely, but almost sure, that it is, indeed, a Breitbart News Exclusive. I think it actually says so somewhere on the pages.
The monopolists becoming ever more aggressive censors is the strongest argument there can be for ending their monopolies. Once a company acquires a monopoly over a critical means of public information distribution (search, social, etc.), its policies of restriction formally become censorship. It has become an impediment to functioning democracy.
Freedom of speech is too important to leave to the monopolies of Google and Facebook to decide what qualifies and what doesn't.
The watermark makes it really annoying to read, but unless it gets refuted by Google, it has to be real, right? (That's how these things work these days...?)
The Breitbart article states: "Responding to the leak, an official Google source said the document should be considered internal research, and not an official company position." So I suppose that's confirmation of authenticity (sort of). The article can be found here: https://www.breitbart.com/tech/2018/10/09/the-good-censor-le...
I wonder if the same internal people who protested "project dragonfly" would object so vehemently to this "research" project as well. I would hope that they would, but I'm a bit hesitant to be sure --which is a bit frightening as it kind of would confirm people can hold two opposing views simultaneously as long as it fits their framework.
They're definitely not the most reputable news source and many of their opinion pieces definitely step into fake territory, but I think outright falsifying an entire document like this would be a very extreme move, even for them.
Why? It's no different from the logo on a TV network news video feed. It's just a news agency trying to make things painful for competitors trying to steal a story without credit.
If not for Breitbart being censored (fitting this story...) by Hacker News, the original article might have been linked:
In a nutshell: Google, Twitter and Facebook followed a very liberal definition of what they are, creating platforms that promote free speech in order to get free content from users. Being platforms, they did not have to edit content the way print media has to. This created traffic and drove ad revenue. And everything was fine.
Then some bad stuff happened, and advertisers, not users, complained by withdrawing money. And who would have guessed, that was an issue. The situation these companies are now in sucks: either they start editing (which is not censorship) and risk being treated like traditional media, eliminating their competitive advantage, or they don't and risk losing ad revenue.
IMHO the underlying reason is that free speech and a free press are a lot easier offline because there are a ton of different outlets, all with somewhat limited reach. People like Rupert Murdoch already put these limitations to a serious test. Online it's much worse, with a couple of global de-facto oligopolies. I wouldn't be surprised if societies and governments move to regulate that one day.
That being said, all these companies, while apparently leaning heavily on a "100% commit[ment] to the American tradition that prioritises free speech for democracy, not civility", are at the same time more than happy to ignore that stance in certain countries to maintain a global footprint (p. 47 of the Google doc comes to mind). It seems, after all, that money is king and everything else secondary.
Admittedly, FB is much worse than Google in that regard.
EDIT: All of the above points are more or less taken from Google's own document, except the part about the oligopoly, which is my own conclusion.
I can understand why Google, as an engine, should err towards neutrality of information, without arbitrating even hideous content.
But I've never really understood why social networks can't, or shouldn't, say "This is the type of community we are, and this is what content we find acceptable."
I caution against latching onto the title and making assumptions. The bulk of the content is analyzing where the current censorship position came from and the conclusion is for Google to be more open with their stance and also be more equal with applying it.
As for the why:
Why the shift toward censorship?
- User Demands
  - before: In the absence of rules, bad behaviour thrived
  - now: Appease users, maintain platform loyalty
- Government Demands
  - before: Governments were unhappy to cede power to corporations
  - now: Respond to regulatory demands, maintain global expansion
- Commercial Demands
  - before: It’s impossible to neutrally promote content and info
  - now: Monetize content through its organisation, increase revenues
  - before: Advertisers were wary of unintended placement and endorsement
  - now: Protect advertisers from controversial content, increase revenues
As for the conclusions:
Don’t take sides
Police tone instead of content
Enforce standards and policies clearly
Justify global positions
Explain the technology
Improve communications
Take problems seriously
Positive guidelines
Better signposts
Most of the document is non-controversial. There are places where it can be seen as obviously left leaning, but there are also places where it acknowledges that sometimes the right has been treated worse:
“[Richard] Spencer doesn't get to be a verified speaker; Milo gets kicked off, but I know plenty of pretty abusive feminist users or left wing users, expressing themselves in exactly the same way that the right is being penalised for, who are permitted to perform certain kinds of speech. That’s going to get Twitter into
I don't know where you got "left leaning" from, unless you're bending over backwards to be fair; they are, if anything, centrists, which makes sense since they're Google. For heaven's sake, on the slide about conversations on free speech, I see establishment centrist media pieces on the same plot as Breitbart and WorldNetDaily. An equivalent would be listing the Young Turks or AlterNet.
On the two pages titled "But recent global events have undermined this utopian narrative" there are a total of 8 events depicted. The viewer is expected to think that the events are problems. About 4 or 5 of the events would be considered good by many on the right, but bad by the left. The remaining events are just gross, weird, and otherwise unrelated to the US political spectrum.
Yeah, honestly I don't think Breitbart and similar can really play this as negative for Google when it acknowledges many actions have been taken even against right wingers unfairly and encourages "Don't take sides" as a top point.
The biggest thing to complain about would probably be the title... they couldn't have picked a worse title.
The negative for Google isn't what the briefing suggests, it is that it documents bias, it notes the impossible tension between the US and European values, and says that Google has been leaning away from US values. I think that's what Breitbart would be primarily reacting to.
There's a larger question that Breitbart is raising: should we tolerate Google - or any company - as a monopolist with censorship power over information distribution. In this case, they're an extraordinarily powerful, expansive monopolist at that. Who decides the degree to which they censor, who gets censored, what information gets censored, etc. Google isn't just a company policing its own platform. Imagine Microsoft aggressively censoring what people could browse, upload to the Internet or write on its operating system in ~2001 (or any time during peak Windows monopoly): it'd be an obvious monopoly abuse, harm to consumers, and as an extension of that monopoly position a form of censorship. Google, through the leverage of its multiple monopolies (search, YouTube, Android), is pushing toward that sort of behavior.
I was going to say that they're not really doing it on Android, but then I remembered that both Apple and Google banned the free speech absolutist social network Gab's official app.
At least on Android you can still install it without Google's consent if you so choose. Overall though what you're saying still has more validity to it than I'd like to admit.
All these "let's be more inclusive by focussing on what/how we can exclude" activities these days are really scary. You can't be inclusive by being exclusive.
Got downvotes, but I can't explain why the conspiracy theory that Trump is a Russian agent is not a conspiracy theory that should be censored; it would be nice for someone to explain how that works.
Today I learnt OogleBot is ignoring the robots.txt file on my private Gitea server:
git[dot]habd.as/robots.txt
A quick search for "after dark habdas" reveals that the second of two rules in the robots file, which explicitly blocks Oogle at the site root, is clearly being ignored.
Of course I created this rule before putting this server online, and Andex is the only webmaster console I registered this site with. But that's beside the point. There's obviously a lack of respect for the robots file, or a gross error in OogleBot, as far as I can tell.
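For anyone who wants to check what a compliant crawler should do with such a file, Python's stdlib robots.txt parser makes the expected behavior easy to verify. A minimal sketch follows; the rules and the "OogleBot" user-agent name are hypothetical reconstructions of what the comment describes (a second rule blocking one specific bot at the site root), not the actual file on that server.

```python
# Sketch: verifying what a robots.txt-compliant crawler is allowed
# to fetch, using Python's stdlib parser. The rules below are
# hypothetical, modeled on the comment's description: a catch-all
# rule, plus a second rule blocking one bot at the site root.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Allow: /

User-agent: OogleBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant OogleBot must not fetch anything under the root.
print(parser.can_fetch("OogleBot", "/"))          # False
# Other crawlers fall through to the catch-all rule.
print(parser.can_fetch("SomeOtherBot", "/page"))  # True
```

If a crawler fetches pages that `can_fetch` reports as disallowed, it is ignoring the file, as the comment alleges; robots.txt is a voluntary convention, so nothing technically prevents that.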
I think the thing that is missing from the presentation is that Google is positioning itself to take a tone policing position, but that completely ignores damage to actual people. People injured by the powerful and the state will be angry, will go viral, and will likely violate tone policed policies. People that drop white nationalist (genocidal) rhetoric with a smile get to push their case. Furthermore, the economics of the situation mean that Google et al are structurally incapable of reforms that dilute their power or reduce engagement. The alternative that preserves their power is to exert overt censorship as they will do in China. Thankfully, we are not there yet in the US.
Tone policing generally favors knocking down the left anyway, which is favorable to elites, who are the targets of their ire.
All this shows is that Google has become a zombie company. It is more interested in building moats and pleasing politicos to solidify their position than in actually doing useful things.
James Bridle argues convincingly that the genre of bizarre YouTube videos which appeals to the toddler reptilian brain ( https://medium.com/@jamesbridle/something-is-wrong-on-the-in... ) is not created by hostile or evil actors but instead has evolved organically based on what stuff toddlers want to click on. Kids' click patterns reward more video themes like "Elsa tied up on train tracks kissing Spiderman", so the content industry crams more of that stuff into its new content.
The result, after a few iterations, would not have passed editorial controls at 1990s Nickelodeon (!), which would normally have halted the feedback loop, but with no one at the helm -- to "censor" or otherwise exert editorial control -- YouTube's kid-targeted videos are just a whole forest of weird.
Does YouTube want to allow their platform to become a laboratory for rapidly discovering local maxima in very young children's fantasy worlds? Do they have any choice? Should they step in and publish rules for what children's content is allowed? Should they hire some kind of human curator or editor to enforce those rules for child-focused videos? Should Web platforms act in loco parentis?
In this case, the "Peppa Pig scandal" style situation, the producers are machine-generating content that gets clicks, and the consumers are children.
When the issue is the viral proliferation of "fake news" and hate speech, the content producers are people or state propaganda apparatuses, and the consumers & re-sharers are grown adults.
It seems like it's a different topic with maybe different guiding principles to decide how & whether to censor these different groups of consumers & producers.