Hacker News
Facebook failed to block 20% of uploaded New Zealand shooter videos (techcrunch.com)
123 points by sahin on March 17, 2019 | 347 comments



I think censorship is not going to fix the issue, people have the right to see and face evil if they so wish to and have a strong heart.

I am from NZ and a Muslim, and I watched the video myself an hour after the incident. I know people personally affected by this, and I think we all need to come together as human beings and understand that there are some evil ideas and people everywhere around the world.

And that's the challenge we face that wouldn't go away because of censorship and putting our heads in the sand.


>> I think censorship is not going to fix the issue, people have the right to see and face evil if they so wish to and have a strong heart.

It's the inadvertent exposure that's more of a problem. The access is just so easy. If you were browsing at the time it was happening, you almost couldn't not watch the video because it was so prevalent.

I'm imagining a 16 year old coming across it, and wondering how they process it.

I made a conscious choice/effort not to watch it and 99% succeeded. But there are moments I couldn't miss because they were everywhere.

By the way, that's not burying my head in the sand. It's more about respecting the dead and not needing to see it to understand the darkness in some people.


>I'm imagining a 16 year old coming across it, and wondering how they process it.

I saw ISIS propaganda videos at around the age of 16. The ones where they kill people in various ways. Nothing else in life has made me disgusted by other people as much as those videos did. I'm reminded of the videos when I read about ISIS. It strengthens my view that ISIS is a group that needs to be fought.


I'm curious, on what sites/services did you find the video so prevalent it was hard to avoid?


Clips were autoplaying in the live updates tab of twitter.


I'm a heavy user of the Twitter web and iPhone clients, and I'm not sure what you mean by "live updates" tab. Is that something in TweetDeck or another client?

If you (or parent poster ~TheTruth1234) found this unwanted, wouldn't unfollowing or blocking the source accounts be an adequate remedy?

(It is an embarrassing mess, but par for Twitter's lackadaisical approach to software quality, that Twitter web often ignores their own settings' "don't autoplay" toggle, and has for many months.)


If you need to ask me that ... I congratulate you on your restrained use of social media.


I honestly consider myself somebody who uses social media perhaps too much, and yet I still didn't see anything like a video at all - I don't know if it's a social media bubble, or perhaps a feature of the country I'm in. I saw the news that the attack had happened almost instantly, but not even on places like reddit did I see videos of the event.


Are you saying that the video was being posted to social media as "shock videos" (i.e. the viewer did not know that it was violent before watching it)? If so, why is censorship needed, rather than an NSFW/NSFL label?


I'm saying the video was readily available as "news".

I don't care how it's dealt with, but I prefer it not to be a part of my regular online diet.


So, how does having it be "available as 'news'", behind an intentional click or two for those who feel they want or even "need" to know, affect your online diet?

Can't you just pass it over without clicking, like with the hundreds/thousands of other offered-but-unwanted links one encounters in daily web/app usage?


"Intentional" ....

It was beneath the top trending hashtag on twitter. Initially, it was absolutely not obvious what the video was that kept appearing. Sometimes it would auto play.

Obviously rapidly it became clear what was going on ... and then it was a basic game to escape ... and then to just switch off social media.

Honestly ... there are some f@cking weird questions asked about this subject on HN.

Do you mind if I ask your age and general location? I'm really curious.


Thanks for the appreciation, but still wondering which specific sites/services presented you with images/video you didn't want to see.

(I'm a pretty heavy user of social media, especially Twitter & Facebook, but as none of my friends/followees/groups posted raw video, I only saw the headlines and generic 'aftermath' photos common on mainstream news sites.)


In the future, would you please add an additional sentence to comments like this one where you answer the question?


You think you understand, but until you really see something or someone go from living and breathing to abruptly dead, you don't.

As for the 16 year old, it wasn't long ago that by 16 you'd have seen a few of your siblings and friends succumb to illness or accident.

Death used to be part of life and we dealt with it.


I'm not saying we need to be in denial of death. I'm well versed, thanks.

But that doesn't mean I need or want to tune into the live-action progress of a disgusting, terrorist attack.


And I'm not saying it should be played in Times Square. But hiding it from the "normal" parts of the internet doesn't do anyone any favors.


What about the fact that he wanted the video viewed and his crap to be read? Not participating in something he wants is (another) motivation for avoiding his creations.


He's a terrorist, he intended to terrorize. Looking him in the eye and telling him to fuck off is the best thing you can do.


Should we assume that any view plays into the aims of the attack, or is there a way to view this footage that thwarts its goals? If there is a way, what can we do to increase the number of people who use it (given that they view the footage at all)?


If I open my browser, pop on to twitter, and the video is right there waiting under the top trending topic ... it might as well be Times Square.


So click "hide" and carry on.

Or Twitter can implement a "do not autoplay sensitive content" feature.


I wish they didn't autoplay anything.


I think you can turn that off in your account settings, actually (it may be called something like "data saver" or the like)


The fact that browsers, websites, and apps autoplay video, often with sound, doesn't help.

I've posted a ... friendly notice to Google about this.


A million times this.

Every public interaction you make is an excuse for Facebook to shove it in the face of your friends. If you see this and angry-like it, it pops up on a few hundred more feeds. Rinse, repeat. Viral.

Facebook is particularly bad because their moderation team appears to be staffed exclusively by a pile of libertarian twelve year olds. I've flagged local drug sales, sexually explicit content (no, not casual or functional nudity) many times and had my flags rejected.


Why do you think the reaction would be any different from any violent action movie? If you don’t know the context, you also don’t know that those people actually died.


I did know the context.


Then it cannot be “inadvertent exposure”.


From reading a lot of the anti-censorship comments popping up around FB's activities here, it feels like the reaction is actually partly fueled by the deplatforming actions social networks have been taking here in the US, which have been political and very anti-free-speech.

It doesn’t all seem purely about this case of whether a social network should be required to distribute propaganda videos of mass murder.

Personally I don’t get why people are okay with distribution of child porn being illegal, but think that distribution of mass murder videos shouldn’t be illegal AND ALSO now apparently must be required of any internet company.


It’s simply the matter of where we draw the line. Child porn is nearly universally considered past the line. Bestiality is predominantly considered to be past the line. Videos on how to make weapons or learn self defence / martial arts (all mechanisms by which violence can be inflicted) are placed past the line or not past it in the laws in many parts of the world. Hateful and vitriolic rants will be illegal in some places and legal in others.

Is it really so hard to understand that some people draw the line between the two things you mentioned? Especially when those advocating it not be illegal, may consider the video of mass murder to be news, or audiovisual documentation worthy of preserving as evidence of the events that have transpired.


> Is it really so hard to understand that some people draw the line between the two things you mentioned?

I know that people draw a line, that’s implicit from me saying that people make the distinction between those two things. What I said was personally I don’t understand why they draw that line.

> news, or audiovisual documentation worthy of preserving as evidence of the events that have transpired.

They can archive and preserve it themselves, or using services that want to facilitate that, and be subject to whatever local laws may permit or prohibit that. As you said laws vary by jurisdiction.

The anger at FB for not wanting to be party to disseminating this video of a horrific crime really is hard to understand.


I really don’t see this as an issue of censorship but of protecting rights. This is not footage from public places. Churches, mosques and other places of worship are not public places for the most part. In addition, Facebook is a commercial platform where they make money based on user activities.


The video is being distributed in whatsapp groups in countries where the education level is pretty low. This is going to be propaganda for more violence in the world.

The world is complicated, and a no-censorship approach doesn't work when 50% of a society never went to school.


So instead of spending more money on education in those places, we censor the shit out of everything remotely questionable.

That sounds like a good long-term solution!


I have not seen the video. I can't say I want to either. I can see how censorship would discourage others from repeating the atrocity. Someone else was talking about how scary it was the events that took place occurred so calmly, and with such ease. I don't believe most people can face watching that without negative consequences for their own mental health.


You put your finger on a key issue, which is consent. Many people were shown this film of a brutal mass murder with no warning due to autoplaying video on news or social media sites.


> I think censorship is not going to fix the issue, people have the right to see and face evil if they so wish to and have a strong heart.

I'd like to pentest that. Here are some real and hypothetical cases that should be easier to defend:

* thousands of people access scientific journal articles through scihub

* thousands of people copy books from an illegal mirror of the corpus of digital books from UC libraries that Google scanned for Google books

* millions of people sharing copyrighted digital music and videos

Do you also publicly defend without reservation the behavior of the people in my bullet points above on free speech grounds? Should such users and publishers be defended in their endeavors on free speech grounds?


It’s strange to ask that on HN of all places, because it’s known that loads of people here are for all 3 points. Plenty of people here are against copyright and patents entirely, even.


As a user, Yes. As a publisher no.


Absolutely.


I think anyone who seeks out - or worse, redistributes - this video, is complicit in furthering the propagandistic aims of the attacker.


Just as anyone who censors it is complicit in eroding human rights?


No.


> people have the right to see and face evil if they so wish to and have a strong heart.

Not on a site someone else owns and operates they don’t.


Or, put another way, the owner of the site has the right to not show the face of evil.


Depends on the jurisdiction too I’d think.


By that logic we should stop censoring child porn, because it doesn't solve the issue by itself.


I can tell you, after hours of meditating on it, that it is intrinsically disgusting to watch and morally fucked, and you need to visit a therapist.

Edit: it isn't that hard to find, because of Bing. Article on it: https://techcrunch.com/2019/01/10/unsafe-search/


What could be classified as hate speech more than a self videoed murderous rampage? Why the fuck should that be given a platform? Go search the dank corners of the internet for that shit if your "strong heart" desires it. What sick mind desires to give this piece of shit a bigger soap box?


I think if I was one of the people murdered, I would want others to see what kind of terror I went through. A video triggers emotions in people that they don't experience when they just hear about another unfortunate statistic. Maybe that would inspire more people to put effort into preventing these situations.


Why does me seeing a murderous rampage equate with me somehow having the ability to affect such events? Like I'm somehow indifferent to this until I see it and now I'm imbued with the will and ability to make it stop? How does me seeing this give me the ability to make it stop? I think this is terroristic and shouldn't be given a platform because that is exactly what this sick piece of shit wants.


So, you would prefer never to hear or even know about events where you cannot directly effect change?


Do you feel shame for your poor reading comprehension? I said nothing even close to that. I was well aware of these events and do not have any objection to being aware, but I didn't need to witness the murderer's video. I speak only towards giving platform to terrorist videos. It seems like there is this idiotic idea that gore perpetrated by the filmer should be granted global freedom of speech and that platform providers are infringing on this universal right. Which is dumb as fuck, because such rights don't exist, nor should they. Why do you get up in arms when these total shitheads aren't getting red carpet treatment?


Maybe you can go ask these people being murdered on video for the world to see if they consent to the broadcast of their death.

Oh wait.

Your belief that you'd be ok with it or even find that a positive outcome means pretty much nothing unless you're one of the people on that video.


The video of the 757s hitting the WTC isn't purged from the internet, why should this video be?

The world is dark and full of terrors (but getting better every day), and sticking our heads in the sand doesn't change that.


I'm a defender of free speech and am completely against actions like Reddit banning /r/watchpeopledie. But here's the response from Liveleak which (somewhat surprisingly) I found clarified my own thoughts on the matter: https://www.liveleak.com/view?t=uSQT_1552831454

Relevant quote: "Previously we have risked the ire of viewers by refusing to show the glossy promo videos for ISIS. The ones shot on DSLR's, with depth of field, careful framing, and murders stage managed for the camera. Whilst we may carry media showing the most terrible of events we don't want to be a vehicle of choice for those who carry these events out. This is a very important distinction for us."


Agreed. There's a difference between propaganda and documentation.

Christchurch is grainy helmet cam. Not highly produced propaganda.


Just because it is poorly produced doesn't make it documentation or rule out it being propaganda.

I've no interest in seeing the video, but I understand they marked the ammunition clips with messages seen on camera. Several have been quoted in reporting. That sounds very like a deliberate attempt to make a propaganda piece of the footage. I would hope that Liveleak would rule out carrying it on the same justification.

Phone cam footage from a victim or innocent bystander is far easier to consider documentation, and make a case it should be seen.


Difference is presumably in the intention, not the video quality.


True. But it's functionally no different than police bodycam footage. It is documentation of the events as they unfolded.

It does not have a message. It doesn't function as propaganda.


> But it's functional no different than police bodycam footage.

Perfect example, but not of what you're claiming. Police bodycam footage is frequently released in partial, edited form as propaganda. When it doesn't appear to exonerate the cops, police departments often fight its release.


Police body cam serves to protect the citizens and police. I guess in the same way, this video will surely serve to get this guy locked up for however long is possible in NZ.

But the video has a message and is part of a message that he wanted to send by the act itself. The message is his manifesto that was published alongside it. Also the guns in the video have a message. It's hard to read, but that was solved by news media that gladly published it in its entirety.


> It does not have a message. It doesn't function as propaganda.

What an absurd thing to say about a terrorist's propaganda video.


The whole god damn thing was his propaganda!

Why did he film it? why publish a rant? why write things on his guns?

to try to push his shitville excuse of a view. To embolden other shitists. To sow fear and anger into the non-shit.


Sure, it's propaganda, but you have to understand your enemy to defeat them.

I'm not interested in this video or the man's opinions.

But I strongly believe that the footage should be available so people can see just how horrifying it was.

If you want people to take terrorism seriously, they need to really really understand how terrifying it is. Names and numbers don't do that. Watching another human go from walking and talking to stone cold dead in an instant does do that.


Frankly, if one can't empathise with the horror of a roomful of people being massacred with a rifle purely from a plain-language description of the event, and instead needs the full-motion video, then in my view such a person is perilously close to lacking any human empathy at all.


I've changed my mind jimjimjim. I accept the argument that it is propaganda.

But to me, its utility as documentation of a tragedy outweighs its utility as propaganda.


what?! I was responding to your post saying it wasn't propaganda!

explain yourself!




Grainy helmet cam can be highly produced propaganda. It seems fairly clear the guy had a social media strategy worked out, of which a live stream was an integral part.


The WTC footage is documentation of a terrorist attack by innocent bystanders. The Christchurch Mosque shooting footage was created by the terrorist with the aim of becoming viral. It is a component of the terrorist attack itself.


I hear this argument a lot. Obviously, the perpetrator would prefer that the video spread widely and inspire similar acts of violence. But that is not the only possible outcome of allowing the video to spread and be seen.

Humans are largely visual creatures. If we don’t witness something, we have a tendency to treat it as less ‘real’. Doubly so when it happens half a world away: atrocities and their victims become almost theoretical.

Actually witnessing a video record changes our perception dramatically. The victims cease to be faceless ‘others’, and we receive a somber reminder that a maniac slaughtered husbands, wives, sons, and daughters—real people that could be your friends, family, or neighbors. The sense of tragedy and the casual brutality of it sinks in, and we are forced to acknowledge the evil among us.

In the U.S., where so many of these tech companies reside, we have become so bitterly polarized that many among us demonize nearly half our fellow countrymen on the basis of their political affiliations. At times like this, we could be coming together in our mutual disgust over this atrocity. Seeing it for ourselves is, perhaps, less important than seeing that those we condemn politically react with the same humanity that we do.

In a time when some people seem to honestly believe that “those [other half of the country] want to destroy everything I care about”, we could use a reminder that, hey, we’re all people, and none of us deserve to die like that. And maybe then we could start working together to build a world that isn’t so conducive to hate.


The 9/11 attackers wanted attention, too, and they got it. Don't tell me that 9/11 wasn't carried out with the express intent to "go viral".


Indeed. If you want to kill the most Americans, just sell them high fructose cheeseburgers for 99 cents. You probably won't even go to prison!

All the talk about social media is just a distraction for the real issues that politicians don't want to face: gun laws. We probably shouldn't be selling crazy people weapons of mass destruction. But sure, blame Facebook for letting people post videos...


I’m not quite sure how to articulate it, but there is another difference too. The Christchurch attack is more personal - it’s showing individuals. The most shocking part of the WTC attacks (for me) were the people jumping, and that was the part where you could see them as people.


If no one had recorded anything, but a few months later Al-Qaeda released ground-level videos of the attacks happening, would we ban them then?


Facebook already bans ISIS propaganda videos, so it's a small leap to assume that they'll also ban Al-Qaeda released videos as well.


They're not illegal, but many sites ban them - I tried to find an unedited copy of the ISIS video with them dropping grenades from a drone a few years back... you often wind up having to go into the same niche news sites, gore sites, or torrent sites which you do to find this video. My options were pretty much 8chan itself or "BestGore" when I looked.

I'll also note that ISIS does a lot more editing and commentary to slant the content in their favor than this guy did. About all this guy did was make meme references... I can see an argument that this is quite different.


That would entirely hinge on the degree to which the content was editorialised. If the video contains commentary and/or editorialising and/or editing that in any way implied that the occurrence was desirable or righteous, then it is terrorist propaganda and should be restricted without hesitation.

If it was an opinion-free, propaganda-free raw unedited video of a newsworthy event that was filmed in a public place, then it probably shouldn't be banned—though publishers should still act responsibly if the material contains deeply offensive or distressing material, or if it has serious privacy implications.

And that's just a proposed set of principles I came up with after three minutes of consideration—I'm sure with an extra 15 minutes we could come up with something even better. Point is, this isn't difficult. The fact that there are no bright clear lines doesn't preclude us as a society from drawing a line somewhere.


This. I do agree that there should be a warning about the content, but trying to block it from viewing at all is showing a dark pattern of Google, Facebook, etc. and Governments being able to dictate what your eyes can and can't see.


NZ goes even further, threatening 10 years in prison for “possession of a video”. Kiwis have gone bananas.

https://www.zerohedge.com/news/2019-03-16/nz-threatens-10-ye...


What law makes possession of the video illegal? The post mentioned "possession of objectionable material" is illegal, but that sounds way too vague.


NZ and Australia have censorship laws. Needless to say LGBTQ texts were censored far more than straight ones.


Vagueness doctrine is an American innovation, as far as I'm aware.


Was curious to see if it drew from somewhere like British common law, but didn't see it in the Wikipedia article:

https://en.wikipedia.org/wiki/Vagueness_doctrine

> The following pronouncement of the void for vagueness doctrine was made by Justice Sutherland in Connally v. General Construction Co., 269 U.S. 385, 391 (1926):

> [T]he terms of a penal statute [...] must be sufficiently explicit to inform those who are subject to it what conduct on their part will render them liable to its penalties… and a statute which either forbids or requires the doing of an act in terms so vague that men of common intelligence must necessarily guess at its meaning and differ as to its application violates the first essential of due process of law.


... and for about the millionth time, I'm glad the US has that pesky First Amendment.


When did zerohedge become so...bad. The comments on that thread are atrocious. It wasn’t always like that.


ZeroHedge's comments have been appalling for several years. They made a switch some time ago so that they don't show by default. Frankly I don't see why they don't simply ditch their comments section, but that's their business, not mine.


around 2011, insanely racist; around 2012-13, fully anti-semitic


Insanity.


It still strikes me that it can be illegal to be in possession of a device which has a certain combination of positive and negative polarities embedded in it.

Shakespeare's typing monkeys come to mind.


In the end, everything is just subatomic particles arranged in different ways, so why should anything be illegal? If you’re going to reduce one idea to its base elements, you should at least be consistent.

On the other hand, perhaps if we think about everything at a more reasonable level, we can see why some digital things should be illegal, just as some physical things should be illegal.


Illegal physical things? illegal digital things?

What's next, illegal source code?

Maybe you should just start with illegal thoughts right off the bat?


Does it also strike you as strange that it can be illegal to make certain cuts on a piece of metal? (e.g. machine gun manufacture laws that exist on nearly every corner of the planet)

I find that these kind of reductive arguments floated for digital subjects fall apart when you remove the computer from the equation.


> I find that these kind of reductive arguments floated for digital subjects fall apart when you remove the computer from the equation.

The “illegal numbers” concept is something I find interesting. A certain number becomes illegal just because it can be used to decrypt Blu-rays.


That only seems odd because it's a flawed intuition pump: a "number" feels like a small, innocent, abstract and singular thing, while "data" feels large and complex and rich with potential. But a "number" can be any size. A dump of Wikipedia is a "number". All information, and hence all secrets, are "numbers". The notion of an "illegal number" is just a weird way of phrasing the idea of government-enforced secrecy.

128 bits is about the threshold where a "number" stops behaving like our intuition of a number, and starts being rare enough that you won't arrive at an "illegal" number by counting or chance, i.e. it is now a "file" for all practical purposes: https://qntm.org/number
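A rough way to see the threshold the parent describes, as a sketch (illustrative Python only; the values here are random, not any actual restricted number):

```python
import secrets

# A 9-digit number: small enough that someone could plausibly stumble
# on it by counting, dialing, or sheer chance.
small = secrets.randbelow(10**9)

# A 128-bit number: one of ~3.4e38 possibilities. Independent random
# draws will essentially never collide, so a specific 128-bit value
# behaves like a unique "file", not a quantity anyone reaches by accident.
big = secrets.randbits(128)

print(f"small: {small}")
print(f"big:   {big:032x}")  # renders like a key or hash, not a count
print(f"possible 128-bit values: {2**128:.3e}")
```

Printed in hex, the 128-bit value looks indistinguishable from a cryptographic key, which is exactly the point: at that size, "number" and "secret data" are the same thing.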


Yes actually I do think they're ridiculous for physical things too. I take a pretty libertarian lean on most things.


A living body and dead body have the same number of particles and mass, why do people make a big deal about it?


This video is fascist propaganda. It isn't neutral.

It also isn't "purged from the internet".


It's purged from the parts of internet the majority of people consider "the internet".

But here's 65+ people vaporizing from 18 different angles. On YouTube. These are not different. They are footage of a terrorist attack.

https://youtu.be/7YLm3pkAiJQ


So are you advocating these be taken down? Or do you think that if the rule isn't applied equally everywhere then it shouldn't be applied at all? Don't let perfect be the enemy of good. I think it's pretty fucked up to think that shoving this guy's murder vid into people's faces is somehow for the greater good.


I don't see these two pieces of footage as functionally different. They are documentation of terrorist attacks. If one is allowed, then yes, all should be.


The video of the NZ attacks is not documentation. It is purposefully created propaganda by the terrorist himself. It’s not neutral. (Not that even something set up explicitly as merely documentation could ever be completely neutral itself, I’m just saying that this footage is extremely far removed from having any claim to neutrality.)

It does document the attacks, that much is true. And in that regard it is the same as footage of 9/11 – but beyond that there is a distinction here.

Don’t be confused by simplification. Do take into account context, intent and author.

(That said, I do think that you can question whether it was wise to play video of the planes flying into WTC basically on repeat during those days.)


But we still make Nazi filmed footage from their concentration camps available.

There's no other way to truly understand what happened unless you see it. Regardless of who filmed it.


With proper context, sure, I’m all in favour of that. I’m not suggesting every last copy of this video being deleted.

However, I do question the need for everyone to watch the video to understand what happened. That doesn’t really seem to follow for me.


And I don't think it should be forced in people's faces either. But it should be available for those who want to view it.


I don't think this can really be considered propaganda. It functions more to shock, insult, and potentially "brag". I think it is almost impossible that this video can garner support for white supremacist causes in anyone other than an already deeply racist and violent person. Seeing this unfold only portrays the shooter as a cowardly psychopath.


... to you and luckily most people. I think the concern is twofold: the number of individuals who will take the footage as some sort of incitement is non-negligible - as this cowardly psychopath has apparently done with the Norwegian attack some years ago. There is also the concern that it will be disseminated amongst those whose inclination is towards revenge as anti-Western propaganda.


For comparison, I can't think of anything close to this graphic being shown on the news.

I'm not sure how honest you're being with your comment but there's clearly a line where news reporting jumps from showing something to communicate the facts to something that is gratuitous + glorifying the guilty + insulting to the victims + traumatising to family and friends.

It's not easy to describe exactly where that line is but you'd have to be completely tone deaf or trolling to make such a simple comparison between these videos.


And it's a damn shame. I still remember when WikiLeaks released that video from the Apache helicopter strafing down a Reuters journalist based on some grey pixel looking like a rocket launcher, then coming back for those picking up survivors, while laughing at it all.

People need stark reminders what trillions in military funding is buying them. It's exactly that. They need to see what lax gun laws enable, what warmongering is, what war is.


>They need to see what lax gun laws enable

Doesn't NZ have strict gun laws?


Death, dismemberment and destruction used to be a part of people's lives.

They're not now and I think we don't value life as much because most of us don't know how abrupt and cold death actually is.


It's not gratuitous, it's real life. If you really really want the public to be upset enough about this to take action, they need to see just how horrifying it was.

The video of the execution of Nguyễn Văn Lém in Vietnam was huge in turning opinion even harder against war.


And if you knew the whole story you would know that guy deserved to be executed (google it). Kind of an argument against releasing material like that: The public does not have the information nor the time to interpret it correctly.


Oh I know. He was a spy and got what he deserved. But until then, the US public hadn't really seen blood spilled in Vietnam. The government and media censored and spun the hell out of the news coming home.

The best way to prevent propagandistic spin is to make sure the raw footage is available.


ISIS burning those journalists alive and cutting the one guy's head off was pretty graphic. The sheer number of people being felled in this video is painful to see, though.


I agree with what other repliers have said, that there's a difference between footage recorded by bystanders, versus footage produced by the actual attacker, who stands to gain by having that footage disseminated.

That said, it is interesting to me how platforms are going to deal with transformative "fair use" cases. The obvious scenario is news coverage which excerpts/screenshots the video. But I've also seen clips of the raw footage being shared by Muslim/Arabic-speaking groups on Twitter, seemingly in the context of wanting to point out to their community the horrific reality they face. It being Twitter, the posts are visible to the world even if it's meant for a specific audience. I didn't bookmark the tweet, but I assume these videos got taken down. But I wonder if the users faced different sanctions. They are in a significantly different situation than those sharing the video to glorify the shooter.


I don’t get people who seem to understand that it’s morally and legally sound to censor videos of children being raped, but when they’re being murdered it’s somehow a different story.


I’ve always understood the problem with those videos is that they are sold by producers or distributors and therefore finance those horrible acts.


“Oh but your honor, I wasn’t selling the child pornography, I was just collecting it!”

There’s a reason that isn’t a defense, and I hope you already understand why that is.


You handwave, but:

1) That probably is a valid defense in some countries

2) It is completely reasonable to ask for an articulated link between a crime and some sort of victim; otherwise the crime shouldn't exist. "It is obvious" only works for crimes with a very obvious victim.

For child pornography, I assume the link is because people might be selling it for cash which is harder to trace and collecting it might be supporting a community through, say, ad revenue. Because while it is sick, I don't see how curating a private, personal collection could feasibly cause harm to anyone and hence it seems like a waste of money to prosecute without some sort of connection to how it is profiting CP providers.

I've thought that the countries that criminalise cartoon depictions of child porn have gone too far. It may be obscene, but possession shouldn't be a crime. Literally nobody could be suffering from it.


1.) Which countries?

2.) Your opinion is at odds with the laws of virtually every country on Earth. Countries that can’t agree on things as simple as diet, or whether gay people should be allowed to live still manage to agree on this.


1) Well South Korea, Indonesia, Kazakhstan and a smattering of African countries by the look of it [0]. And the level of enforcement probably varies wildly by country.

2) I doubt you've actually read my opinion properly, it was that the link between crime and victim should be articulated on request. That is going to be a very popular opinion in the civilized world.

[0] https://en.wikipedia.org/wiki/Legality_of_child_pornography


Why single out children? Why not all videos of people getting raped? Why not all videos of people being murdered?


Why single out children? Why not all videos of people getting raped? Why not all videos of people being murdered?

I’m speaking to the usual exception “online libertarians” tend to accept, and pointing out the hypocrisy there. Of course adults being raped or murdered is equivalent, but this is a limited format for communication and my desire to cover every possible permutation is precisely 0.


That’s not logical at all. What if any of the victims in the video didn’t want the video to be disseminated? Who are you to stand in the way of their wishes and rights? So we are OK with corporate cease and desist letters on YouTube, but when it comes to people actually getting murdered it’s all of a sudden not OK.


>What if any of the victims in the video didn’t want the video to be disseminated? Who are you to stand in the way of their wishes and rights?

What standing do they have to prevent distribution of the video?


A moral standing grounded in mutual respect we’d hope to receive ourselves? Or if you really want to be legalistic, their next of kin have the standing.


>their next of kin have the standing.

That's not what standing means[1]. At least in the US, if the video was recorded legally[2], there's nothing you can really do to prevent publication of it. The laws are stricter if you're using it for commercial purposes, but that's not the case here.

[1] https://en.wikipedia.org/wiki/Standing_(law)

[2] You could argue that the shooting was illegal, so therefore the recording was also illegal, but that's a stretch.


Well, if we want to go down that road, the event was in a private place of worship and not a public landmark. And from my understanding (I don’t know, since I refuse to and didn’t watch the video) there were minors in the footage. Last but not least, Facebook specifically sells ads on their pages. So, yeah, people cannot post and disseminate these videos even if the event had occurred here in the US. In addition, the footage being disseminated consists of technically illegal copies.


I was bitten by a sibling comment, so I'll preface this with: IANAL, this applies to US, but could apply to NZ as well. Feel free to correct me (with sources).

>the event was in a private place of worship and not a public landmark

The relevant doctrine here would be expectation of privacy[1]. You're right that it's not a public landmark, but I argue it's still a public place considering that there are dozens of strangers around and you can come and go as you please (they don't seem to check membership), thus you don't really have any expectation that it's private.

[1] https://en.wikipedia.org/wiki/Expectation_of_privacy

>there were minors in the footage

There is a legal doctrine that gives special treatment to minors?

>Last but not least, Facebook specifically sells ads on their pages

This could work, although you could argue that the poster himself isn't making any money so it's okay. Also, if this did work, I'd expect way more public freakout videos to be taken down off video platforms by their subjects, which doesn't seem to be the case right now. At the very least you could sidestep this issue by hosting it offsite or via torrent/ipfs.

>In addition, the footages being disseminated are technically illegal copies.

This is only true because the author hasn't yet given explicit permission for people to use it. He's alive in jail right now, so this could easily be rendered moot by him proclaiming (via a lawyer) that he releases the video to the public domain (which I'm sure he would do). Even if he didn't give permission (eg. he got killed), copyright infringement is a civil matter, so only the author (or his estate) can go after violators. The state sure wouldn't be able to send takedown requests using this method.


Under US law none of that matters - whether it occurred in a private place or whether minors were involved doesn’t matter. Even the shooter’s copyright claim, if he wanted to exercise it, would be irrelevant because of the news reporting aspect.


I’m not sure why you’re bringing the US into this when it’s about a crime that took place in NZ, and their requests for cooperation.


>I’m not sure why you’re bringing the US into this when it’s about a crime that took place in NZ

Because I'm not a New Zealand lawyer, but both the US and New Zealand use common law, so I'm using that as a point of reference. I'm not trying to say "it's x in the US therefore it's x in New Zealand". If you can find an authoritative source that confirms the legal status, feel free to post it.

>their requests for cooperation.

The people posting/sharing those videos aren't going to comply with nicely worded "requests for cooperation". If they want something taken down, they'll need assistance of the legal system.


There’s thousands of porn and prank videos online that don’t get taken down.


> That’s not logical at all

Yes it is.

> What if any of the victims in the video didn’t want the video to be disseminated?

They don't get to decide that.

> So we are OK with corporate cease and desist letters on youtube.

No


> The video of the 757s hitting the WTC isn't purged from the internet, why should this video be?

That's different though. This video has been released by the perpetrator. Do you think beheadings videos released by ISIS should be easily accessed by anyone on Facebook?


I think they should. Hiding reality from people does not do any good.

PETA wouldn't release "inside the factory farm" footage if it weren't an effective way to galvanize an audience.


Then you'd just turn these social media platforms into the best recruiting tool for terrorist groups of all kinds with complete control of the narrative.


No, if you're going to be recruited by this, you're already fucked in the head.

But showing this to your aunt who's wishy washy about gun control will probably disgust her into action.


Can you elaborate on your apparently implicit assertion that the decision of major social media platforms to ban the content constitutes "sticking our heads in the sand" or an attempt to get it "purged from the internet"? That's such a black-and-white leap to me that I'm baffled.

Edit: Or perhaps the "implicit" part of your post is that you're referring to the ISP incident in NZ, and not Facebook?


I do mean the media platforms.

That 757 video was played on repeat for weeks on all of the major channels in 2001. Hundreds of people turning into a fireball over and over.

It made it real for us.

By hiding the video from Christchurch, people have to imagine how awful the event was. And they'll fall back on stylized Hollywood notions of violence as a reference point. There are no heroic close-ups in this footage (I found a copy).

If you want people to get upset about this and take action, you show them the truth. It would repulse almost everyone.

So, since only a very small fraction of Internet consumers venture outside the big social and old media sites, by not hosting it there most people will never see it.


My main complaint is that mainstream media reporting kept referring to the existence and specifics of the video whilst mainstream internet platforms are running around trying to make sure it's not available in any remotely proper context.

Streisand effect notwithstanding, this is effectively tempting everyone with an ounce of curiosity to seek out the dark corners of the internet that they may have never had a reason to visit before.


The shooting in Christchurch is a tragedy that should not have happened.

That said, this massive wave of censorship across MSM and NZ ISPs, regardless of the motivation, is very disturbing. It will inevitably be used to suppress citizens' freedoms, and it will cause more tension in society that will escalate into more tragic incidents. Also, the video may help someone study the modus operandi of such an attacker and increase their chances of survival in a similar situation. Either way, censorship and attempts at rewriting history (removal of Spacey, Jackson), again banning inconvenient books, are an extremely dangerous and slippery slope to get on.


Can I ask you: how can I, who does not at all want to see any graphic images, avoid these without censorship? It's not fair to allow these things in the public field where we might see disturbing things without wanting to. Videos and images must be tagged, and therefore given a rating so that I can self-censor.


It's called choosing your friends and information sources.

Cause I haven't seen the video once. I haven't been offered it either.

You have to take some responsibility at some point. Always asking the rest of the world to do good by you is only going to lead you so far.


Frankly, not my problem. That's your problem.

Today it is you who says we gotta censor what you find offensive, tomorrow President Orange-Nimrod will censor all of CNN because it's OK now to censor things that somebody/anybody finds offensive, and in a few decades time President Jenna Bush Hager will censor all depictions of an Earth that isn't flat because those are offensive to the good god-fearing citizens who are true to the word of the Bible.


The flipside being that you are making the argument that private entities should not be allowed to choose what content they host. You worry about flat earthers censoring non-flat earth depictions, but a much more realistic threat is flat-earthers forcing their insanity to have a larger share of voice than they should by strong arming platforms and people that know better into giving them a microphone, because were they not to do so, they would be accused of "censorship".

No one is saying the NZ video should be purged from the internet. But it is foolish to equate censorship with private social media companies scrubbing it from their sites. You are welcome to search out other venues that host the video, or host it yourself. Your rights are not being infringed upon if Twitter and Facebook won't.


Well, I didn't make this argument specifically here, and my answer would be: it depends.

Common carriers, like Telstra (largest mobile Australian ISP), started blocking websites hosting the video. That's not OK for an ISP to do. Same as it isn't OK for e.g. a utility company to cut off power to a "nazi" (whatever that means these days) or to Bernie Sanders supporters because the utility CEO doesn't like "socialism".

Private entities actually hosting the content would be a different matter, and the decision should be up to the private entity. Unless such a private entity is a de facto common carrier in itself. I'd argue Facebook, twitter, reddit, google e.g. are de facto common carriers running general purpose platforms - given their size, influence and reach - even according to their own PR. They are analogous to a utility company, and saying "you can host the content on your own server" is like saying "you can just run your own generator for your electricity needs/use solar panels and battery" or like saying "you can always create your own telephone network if you don't like what the companies are doing; all it takes is 2 old cans and a string".

In my opinion common carrier and de facto common carriers should not be allowed to outright censor lawful content. Videos of newsworthy terrorist attacks, no matter if it is made by a terrorist (e.g. this one) or just an innocent bystander (e.g. 9/11, Las Vegas), are not illegal and should not be allowed to be censored by such de facto common carriers/de facto utility companies.


I usually turn off entertainment that displeases me.

Do you need everything to be labeled and rated by someone else before consumption?


Yes. This is the entire purpose of film rating systems like the well-known MPAA system. People want to be informed about the content of a film before watching it so that they can decide whether it is appropriate for them (or their children, or their friends suffering from PTS) to watch.


I would happily support tagging of such things to allow people to choose to not see things. It's choosing to not allow other people to see things that's problematic.


> Can I ask you: how can I, who does not at all want to see any graphic images, avoid these without censorship?

Don't search for and click on videos of the shooting? Sure, it won't stop someone specifically trying to get you to watch the video. But censorship doesn't change that. No amount of censorship can keep people from, say, adding some noise to the video to circumvent any filters and sending it to you.


There are services that don't moderate (censor) content. For the most part they are shit-holes and most people stay away from them.

If you want to make it so that no site owner can moderate their own site, you are going to have a huge uphill battle. And I can't imagine why you would want that kind of world.


You can choose to skip posts, block users from view, close the tab with that disturbing website, not to view the "breaking news mosque shooting" video. Do you seriously ask for a censorship of media, social or other, just so you do not get disturbed by a video that you chose to watch?


It isn't censorship. Free speech does not equate to Twitter/Facebook/Google having an obligation to host any/all content someone might post on their site.


Tailor your environment to suit your sensibilities. Nobody is prying open your eyes and showing this to you and it is and has been perfectly possible to not see this video anywhere.


The censorship is also spawning conspiracy theories, claiming that they're taking down the video because there's something to hide in it compared to the official reporting.

This video is historical fact, the most direct record of an important terrible event that had wide-ranging effect and became global news. Its accessibility helps squash rumors about what people think happened or didn't happen. Combined with the manifesto, it also shows why it happened.


But if (most) people don't know, "nothing goes wrong", doesn't it ? And surely that means we can ignore the problems.

If ever there was an apt description of youth services, or the justice system. And yes, I get it, in both cases, it merely describes the terrible crimes and abuses happening in 99% of those systems. For the remaining 1% of cases, they are necessary.


It's not totally censored and you don't go to jail for sharing it or anything like that - it's just making it harder to get and a bit more obscure which I don't find that worrying. It took me 5 mins to find it for someone else - I didn't actually want to watch it personally.


Someone is apparently in NZ court for sharing the video as it is now deemed objectionable material.

http://www.scoop.co.nz/stories/PO1903/S00171/sharing-shootin...

I can personally see the logic of this. This is not a video from an innocent bystander, but a video from a murderer designed to perpetuate hate. I'm generally anti-censorship, but I think this is a form of hate speech and society as a whole does not benefit from its dissemination.


It is extremely telling to me that of all the comments here, most of them I've seen express concern not with the fact that 50 people were murdered by white supremacists (a number that seems to be rising year over year) but rather attempting to divert the topic into one about censorship and free speech.

I have seen zero proposed solutions or ways of combating the rise of white supremacists because so far, the market place of ideas has not been successful. You're saying that there's a massive wave of censorship, yet white supremacist materials, videos and activity has never been more open and easy to access.


> I have seen zero proposed solutions or ways of combating the rise of white supremacists

No, you have only missed them. The solution is exactly free speech, and more of it. These days people are often suppressed when they voice a concern that touches certain topics, i.e. immigration, religion and particularly Islam, anti-semitism, rapid changes of demographics and many more. Media suppression and flat-out censorship do not put a stop to the opinions they try to suppress. They just drive them away, to hidden forums, to non-mainstream platforms, to the dark web.

So instead of shunning white supremacists from media - tolerate them and challenge them with arguments, explain how they are wrong, educate them, ridicule their nonsense; but never censor them.


> So instead of shunning white supremacists from media - tolerate them and challenge them with arguments, explain how they are wrong, educate them, ridicule their nonsense; but never censor them.

This takes the mistaken position that social media is a good tool for discussion and debate, and we have plenty of evidence to the contrary at this point.

This is also the same thought process that leads to ideologies like climate change denialism, evolution denialism, anti-vaxxing, etc. being given oxygen when they shouldn't be. It assumes that by shining a light on these topics, the debate will result in people rejecting them, but in reality it simply gives them more credibility. We live in a post-truth era, where people don't use the internet to debate, agree on what is right and wrong, and rethink our opinions, but to form bubbles with people who agree with what we want to believe, which we leave occasionally to scream at each other, and then retreat to in order to insulate ourselves from reality.

The people who are driven to non-mainstream forums such as 8chan that comprise the scum of the internet are already fairly radical in their thinking. That's why they seek out those sites. But non-radical thinkers may be susceptible to radicalization by giving a platform on a mainstream site like Twitter or Youtube to idealogies like white supremacy, militant Islam, or violent misogyny.


Ask Neville Chamberlain how appeasement turned out.

You will never have a debate in good faith with a white supremacist, as it's an inherently irrational position. Don't waste your breath on them.


No, they're not often 'suppressed'. Again, nowadays it is far easier to access white supremacist material than possibly any other time. It's one search, one twitter account, one youtube recommendation away. These are people not operating on hidden forums at all, they're out in the open espousing these views.

I keep seeing people repeat this view, but I see none of it. You're attempting to divert away from my point in that you don't propose an actual solution at all. You propose nothing except maintaining the status quo. Arguing against a white supremacist gives legitimacy to their views, because they don't argue on the same level as you or I. They knowingly make illogical arguments that you simply cannot refute and they don't care even if you try.


I feel like the best way to counter apparently illegitimate arguments is by bettering the argument for your stance, whatever it may be. If you can't argue your stance, how can it be defended legitimately?


He was a nationalist, not a supremacist.

> I have seen zero proposed solutions or ways of combating the rise of white supremacists because so far, the market place of ideas has not been successful.

The moderate nationalists like Taylor have been purged from common media. He is equated with the shooter himself.

After a Muslim terrorist attack we all call for moderate Muslims to help engage in dialog and talk the radical ones out of violence. But in the case of nationalist terrorism that is nowhere to be seen.

If you ever listen to interviews with former nationalists, dialog is what got them to change their position. Not once did no-platforming or throwing bottles at them work. There was an interview with one on CNN last week about this.


> the comments here, most of them I've seen express concern not with the fact that 50 people were murdered by white supremacists (a number that seems to be rising year over year) but rather attempting to divert the topic into one about censorship and free speech.

The original topic (the headline) was about censorship

> Facebook failed to block 20% of uploaded New Zealand shooter videos

and it's what the comments addressed. People were commenting about the topic that was being addressed. They weren't diverting anything; you were.


You know what, this is a valid point. Before this incident I mostly filed away the re-rise of fascism as "just another American piece of weirdness" that would more or less correct itself given another five years. Perhaps now that we see similar problems in other countries it is time to investigate where the feelings of discontent that are giving rise to violent action are coming from on such a grand scale. On a global level, it's definitely not clear to me.


I had the same thought and it makes me sad.

No suggestions, or ways to help.

This is meant to be a meeting place of intelligent, creative, resourceful people.

Instead of just saying "it's too difficult, let's do nothing", why not think of answers?


I know people on here will disagree with this, but I think many of these events indicate that online censorship is needed.

I believe it's becoming increasingly clear that merely allowing far-right dialogue online is essentially responsible for ease of radicalization, and that proactive censorship would be the best approach to forestall this.


>I believe it's becoming increasingly clear that merely allowing far-right dialogue online is essentially responsible for ease of radicalization, and that proactive censorship would be the best approach to forestall this.

Because of course, censorship only blocks the material you personally don't want others to view and never material you personally would want others to view. I can see a lot of politicians, especially those getting elected nowadays, wanting to block any discussion of minority rights, LGBTQ+ rights, trans rights, drug reform, abortion, women's health, women's rights, climate change, non-Christian religions, etc, etc, etc.

edit: Note that this isn't a straw man argument. One only has to look at the censorship that happened in the US in the 50s and 60s (the Comics Code, movie/television codes, McCarthyism, etc.) to see what is possible.


You don't need to go back in history to see this. In many countries where such speech laws exist today, they're used by majority groups in order to stop minority groups from speaking out publicly about injustices. This is common in Asian countries right now.

Regardless, censorship on the internet is inherently ineffective. If this speech is banned from Facebook, it will move to other forums. If it's made illegal, it will move to the "dark web".

The only real solution to combat this sort of thinking is to debate it directly, refuting arguments rather than shutting them down entirely.


This is why my own position is not settled.

Publications, preachers, performances, etc. change people’s behaviour and beliefs — if they didn’t, nobody would advertise or campaign. That change can be ‘good’ or ‘bad’ depending on your point of view.

I want the spread of harmful memes to be slowed (and, if possible, prevented), but I also ask myself “What if I’m wrong about what is harmful?” — for example, I don’t believe in the Christian god, and have seen harm done in his name, but what if he actually does exist and banning Christian memes condemns people to hell?

Or sexuality? I know people whose sexuality has been outlawed this decade in much of western Europe, as a result of campaigns by people totally convinced that their preferences are innately harmful, which side is right?

I think solving this is also an important part of safe A.I., so it would still be a very important problem for me even if I didn’t know people directly affected.


Yes, but on the other hand, the prevalence of far-right dialogue contributed to the rise of far-right politicians. When they are in power, they use their power to block the voices for minority rights, women's rights, LGBT rights, etc. Do you think they will stop, thinking "Oh but far-right voices are still allowed, it won't be fair if I silence women's activists"?

All things considered, I'm not convinced that censoring extreme far-right contents is a long-term net negative for freedom of speech. (Well, it could be, but it's not a given.)


>"Oh but far-right voices are still allowed, it won't be fair if I silence women's activists"?

No, but they'd need to pass laws and fight court battles to enable censorship, versus merely taking over existing systems of censorship. The latter is much easier to do and much harder to notice or fight against. Especially when merely fighting against it is grounds for censorship.

>Yes, but on the other hand, the prevalence of far-right dialogue contributed to the rise of far-right politicians.

Depends on the scale you look at. In a sense, the far right is merely the average person's view of 50-ish years ago. Censoring to the view of the average person merely ensures that in 50 years you'll be seen as having helped far-right ideas. Or, put differently, I don't believe that my current views are the pinnacle of progressiveness (or morality or whatever), and I am loath to prevent future progressives from being able to push their views forward.


> No, but they'd need to pass laws and fight court battles to enable censorship versus merely taking over existing systems of censorship.

I don't think that's a significant hurdle to any politician with popular support.

Look at Trump, for example, not because I hate him (well I do, but that's not the point here), but because he is a good example of what a politician can do. He rammed into the freaking Supreme Court a candidate who had been accused of rape. And there was nothing the opposition could do.

And the crazy thing is, Trump doesn't even enjoy unanimous support. Far from it.

There's not much you can do once a politician is in power, has a semblance of popular support, and is willing to forgo established political norms. Laws? They will change laws, they will take the matter to the courts, and they will win.

So I'm still not convinced it's a net positive to have an uncompromising legal roadblock against censorship, when the cost is to increase the chance that someone will rise to power and kick that roadblock away like a tin can.


As with any idea, people come to believe and subscribe to ideologies because those ideologies are genuinely persuasive, and they are persuasive because they contain substantial - and often meaningful/relevant - truths (even if biased and omitting other relevant truths).

However, censorship is exactly the wrong thing to do.

If these pockets of knowledge are cordoned off and inaccessible to casual observers, then fewer people will know about them. There will be fewer people who are well versed in the ideas, and as a result fewer who are able to identify and explain their limits and context. This makes it far easier to radicalize folks.

It's also super intuitive: of course censorship increases radicalization. That's why a corrupt government censors - to better control and more precisely manipulate people's beliefs. Something that wouldn't be possible if everyone had more information.

So yeah, I disagree.


We’re not going to forget about fascism just because we stopped giving fascist propaganda so much limelight.


Do you have any data supporting this assertion that access to and ability to freely communicate information leads to less radicalization, rather than more? I look at the Internet, and wherever there is no censorship, there is not wisdom; just hate and chaos.

4chan (and even more so, 8chan) are the pure essence of no censorship, yet everything from Holocaust denial to rampant racism and sexism thrive and fester and spread. The most persuasive ideas are those that are short, easy to digest, and blame someone who's not you for your misery. The truth, well, it can be messy and complicated, and I'm not sure you can get there without shutting down the puerile voices screaming memes of hatred.


That’s two websites. If you asked 1000 people about 4chan, maybe 25% would know what it is, and they'd say it’s a troll website if they did. I think fewer still know about 8chan. The media has driven so much traffic to those sites; it’s almost as if they want people to radicalize because it makes great news. I digress. Just because we know what 4chan/8chan are doesn't mean most internet users do, or care. We shouldn’t sign off on wholesale internet censorship because of 2 websites. Right?


You're avoiding the point. As he mentioned, 4chan and co are the epitome of no-censorship and entirely free debate. Under the assumptions made in this thread, the free market of ideas should squash ideologues and people who choose to believe in things such as white supremacy or conspiracy theories.

Yet this is not the case. Why?


Well, in the US, Facebook / Twitter / 4chan / 8chan are all protected by safe harbor[1]. These sites cannot be legally held liable for content their users post.

Facebook isn’t required by law to pull down the video. They pull it down because leaving it up isn’t profitable (shareholders).

[1]https://en.wikipedia.org/wiki/Safe_harbor_(law)


You're going off on wild tangents. I didn't ask about whether or not the sites would be held liable for their content, nor did I muse on whether or not facebook is required by law to pull down their videos. I have a very specific question that I want answers for and I don't want any further misdirection.

4chan is an example of a pure free market of ideas. Yet only the most toxic of ideologies seem to survive. Why is this the case?


Would you please not argue aggressively on HN ("You're avoiding the point", "You're going off on wild tangents", " I don't want any further misdirection") regardless of how wrong someone is? It breaks the guidelines, doesn't help, and only makes this place worse.

https://news.ycombinator.com/newsguidelines.html


Bacteria only grow in environments they can survive in.

You can find the same content on reddit that you find on 4chan. The difference is Condé Nast will remove subreddits because it looks bad and hurts their bottom line. 4chan doesn’t have shareholders; they barely have advertisers.

Toxic ideologies survive on the chans because no one there is worried about profit and there’s no one to answer to. The entire internet used to be the same way.


[flagged]


Please don't do this here.


Do what exactly?


Please do not take HN threads on trollish flamewar tangents. It's a form of vandalism.


It's amazing how quickly people will give up fundamental rights in the name of safety.


It really deeply bothers me, I wish there was something I could do to help protect my rights from this sort of reactionary response. Does the ACLU still put up a fight in cases like this? They used to back in the day, but I'm not sure they would now...

EDIT: To make my view clear here, Facebook and others are free to remove the videos, I'd prefer if they age-gated and put a warning on them instead, but that's their choice. What I don't like is the way that that some police agencies have been suggesting that mere possession of the video is illegal.

The worst part here is that the attacker himself was suggesting that by hitting on ideas like free speech and the right to bear arms he'd be able to "accelerate" the deepening of political divides. I do not want to be put in a place where I'm forced to choose between fundamental rights and respect for entire races of people, but at this rate I fear that decision time may be approaching. Given time I hope cooler heads will prevail.


Thanks to this particular incident I've clearly seen the hypocrisy of many people regarding these matters.


Hypocrisy? Why do you assume that censorship will be used to block far-right material rather than leftist material? The right is gaining more and more political power and historically (at least in the US) is the side most willing to use censorship for its own benefit.


For too long I've read many people opposing censorship because it was always used by those in power for many things, from oppressing the political will of the people to protecting copyright. Think of China and Russia on one side and the West on the other.

But now that some political power that we don't like is emerging, well, a lil censorship never hurt anyone huh?


I oppose censorship when the government does it. However, I fully expect to be able to boot any content that displeases me off my own website. Which is what Facebook is trying to do.

This very discussion forum is moderated, and very well as far as I can see. Do you think it would somehow be better if the mods weren't here?


For many people, Facebook is the only reliable or affordable form of internet access at all. I think the measure of "own website" changes once you reach a certain mass, and especially when that "own website" is not a personal website but a corporate one. Not to mention the government influence that can be and is exerted on these corporations.

It gets morally messy when you consider scale. At what point does it cross over from reasonable moderation to censorship? If Facebook were a country, it would be the largest country in the world. Is it okay to censor two billion people because Facebook is not an "official government"?

I'm not against moderation at all but at a certain scale does it cease being only moderation? I would argue yes, but at what point? That's trickier to answer and I can't really say I know.


I recently came across the concept of a "data void" that we have these gaps in our electronic communication platforms where:

1. Platforms weight/select for content that is updated frequently and drives views.

2. Established/agreed upon information is much less likely to get that fed to you.

Consider the flat earth folks, they are churning out thousands of videos on that topic. How often are spherical earthers updating their YouTube channels? And I see similar happening with extremist communities: a flood of ideas and video that just seem unhinged and I'd rather ignore, but in doing so it furthers this gap.


> I believe it's becoming increasingly clear that merely allowing far-right dialogue online is essentially responsible for ease of radicalization, and that proactive censorship would be the best approach to forestall this.

People are going to jump on you for this, but I feel it's necessary to put this in perspective.

Ten years ago, the threat of domestic radicalization was a hot topic, except it concerned radicalization towards fundamentalist Islamic terrorism.

Twitter, Google et al. devoted resources towards identifying Islamic terrorist propaganda, accounts and networks, and then silenced them all.

Today, the threat of domestic radicalization comes from the far right. We should at least be honest and consistent when considering the threat and how we've handled such threats in the past.


Overall society is becoming safer by the year, right? Something as drastic as censorship doesn't seem to be necessary.


Personally I'm more tempted to blame AI content curation pushing people into echo chambers than a lack of censorship.

If we want less division we should be listening more to each other not less.


Oh I think it’s more about what’s being said than who is listening to who. For better or worse, human discourse is optimized for face to face interaction. It seems that widespread, anonymous speech turns even the most mild mannered individual into a raging maniac.

On a similar note I recently watched this video: https://m.youtube.com/watch?v=rE3j_RHkqJc and completely agree with its premise.


No. If anything should be disallowed, it is discussions about allowing censorship. We have quite enough historical examples of the horrible things that happen under censorship.


Interesting. Do you feel that statements of the far-right (by whatever definition you are using) are the exclusive source of incitement to hatred and violence on the internet?

Or was that just an example of one of many kinds of dialogue that might be dangerous and need to be proactively censored?


I read that it was a live stream. I'm not sure how even technically those can be fully moderated. Probably one employee can watch up to ten streams at a time, but it still doesn't sound scalable enough. The cost would be too high.


No one's calling to block live streams in real time. But allowing the continued rebroadcast of the victims' last moments, without the consent of the families or the individuals, is an assault on them.


> merely allowing far-right dialogue online is essentially responsible for ease of radicalization

The shooter was a self-described eco-fascist and claimed his motivations were two events covered widely by the mainstream media. He was very clearly "of the internet", but are you convinced that deleting "far-right dialogue" would have stopped him?

What about the current state of journalism and politics, which was also very clearly something he aimed to exploit and seems to have manipulated at a far more effective level than past attacks?


I wish we would look more at what he did: killed 50 Muslims. Not at what his rambling troll "manifesto" said.


If you want it to not happen again, then you should look at the why, not the what.


He said why, in fact, he said it three times in the very first three sentences of his manifesto: "It’s the birthrates." x 3.

Where do we hear conspiracy theories about "the great replacement"[1] coming from? CNN or people like Lauren Southern[2][3]?

Or, the fact that he literally states what he wants (it's the 14 words):

> What do you want?

> We must ensure the existence of our people, and a future for white children.

[1] https://en.wikipedia.org/wiki/The_Great_Replacement_conspira...

[2] https://en.wikipedia.org/wiki/Lauren_Southern

[3] https://www.youtube.com/watch?v=OTDmsmN43NA


> The shooter was self described as an eco-fascist

Did you forget the dozen or so pages ranting about "white genocide" and "the great replacement"? What about the 49 Muslim people he murdered?


Well, bannings also serve as a rallying call. They will put out a "Critical message from Whoever" video and talk about how they're under attack from the media and corporations, and how now, more than ever, it is important for people who believe the same as them to spring into action (donate more to the cause), etc.

...


You realize their big argument is that they’re speaking “truth” and are thus being censored, right? When little bits of what they say slip through the cracks and people know that it’s supposed to be censored, you realize you’re validating their statements right? Make people know they’re right in one aspect, and people will think they’re right in the other aspects as well. Give yourself a legal ability to censor them, and the day they seize power (and don’t try to pretend the people you like will be in power forever, because they won’t), they will use it against you.


It is highly distressing and should give us all a big scare when we arrive at a point where you can expressly call for censorship and direct it solely at the political view opposed to yours.

If you were calling for censorship against both far right and far left, sure, I would probably disagree but would concede it was a coherent point of view.

But calling for censorship solely against one side, makes someone by definition a totalitarian.


Radicalization happens a lot quicker when groups are forced into ideological bubbles that have no pressure release valves. I feel as if we have two real choices here:

+ We fundamentally change how the Internet works and introduce an omnipotent overseer that enforces norms and behavior. We have to go all the way with this in order to reap the security benefits, half-measures - what we have today - won't work. If we do go all the way, we find ourselves living in a hellish dystopia. This is the top-down, centralized approach.

+ We realize that the only way out is through and ride the wave of rapid information propagation on the Internet, waiting for bottom-up reorganization to take place and the emergence of something new. Censorship has no place here. Extreme points of view should be allowed to diffuse-disperse by exposing them to the rest of the system. This is the decentralized, cybernetic approach.


If that decentralized approach worked, then the various chans should've never been taken over by white supremacists. Can you offer an explanation as to how that fits your model?


One reason "white supremacists" end up congregating in the various chans is because they've been banished from everywhere else. In the US, the Overton window has shrunk dramatically in the last decade and it keeps on shrinking.

More to the point, it is my view that events many will see as extreme will keep on happening regardless of which way we choose to go. Today, we are not dealing with increased discussion and development of new ideas (printing press). The feedback loops are getting short enough to give rise to cognitive metasystems. It follows that some blowback will occur. We are watching force projection/meatspace interaction from cognitively dissonant emergent hive minds. This is exactly what bottom-up reorganization looks like. Expect it to intensify.


You do realise that 4/8chans are full of trolls, right? They post offensive stuff to offend people like you, who take them seriously.


We have yet to have a real discussion about what we expect the largest platforms to do about content like this.

The nature of online content is that once it's created it can be copied quite easily; if Facebook was able to block 80% of this content, that should be commended.

AI is not at the point where we can expect 100% of this content to be removed, and if it were, there would likely be some collateral damage that others would complain about.

In its current state, we need a lot more humans screening this type of content, and who are we to subject other people to that type of mental disturbance?


20% not being caught is surprisingly high to me. I would have assumed the majority of the upload attempts would be byte-for-byte identical, and that even if there were a dozen minor variations of the same content, they would be able to deal with >95% automatically.


1.5 million videos uploaded implies the involvement of at least persistent script kiddies, if not state-level actors getting involved for propaganda.

This seems likely to be an arms race.


1.5 million uploads represents 0.1% of Facebook users making a single attempt each. While I expect some script efforts in general, I feel it’s an unnecessary complication in this case.


Not really. There are two factors that play "against" FB having nice numbers.

It takes some time between when unique content (or content in some similarity cluster, or whatever they have) is uploaded, reported, and taken down. During this time, exactly the same content is not being taken down, because it's not yet known to be bad. So even if all the content were byte-for-byte identical, if 20% of it is uploaded before it is reported, there is no chance of a >95% automatic block rate.

Once content is known to be bad, people (and bad actors) will usually stop uploading it (because it no longer works). People also stop sharing it, because it's no longer possible (and FB deletes all instances of such a video from the platform anyway). If there is a lame script kiddie that does not handle errors, then the number could be 95%, because it would continue with upload attempts. But normal people, and sophisticated actors, will respond to errors and stop uploading exactly that video.

One extreme example: with a single sophisticated bad actor uploading exactly the same video (through fake/hacked accounts/pages, or whatever), the block rate will always look bad for Facebook. The bad actor uploads N videos before being blocked; once blocked (on video N+1), they stop. The block rate here is 1/(N+1), which will be way less than 80% (FB's current number) for any N > 0.

What am I trying to say? The percentage of uploads being blocked doesn't make sense as a metric, since it also depends on how fast the attacker detects that it is being blocked. If fast, the number will always look bad. If the attacker does not give a shit, the number will look awesome.
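To make that arithmetic concrete, here's a toy illustration with made-up numbers (not data from Facebook): the same filter produces wildly different "block rates" depending purely on whether the attacker stops retrying after detection.

```python
# Hypothetical model: the filter blocks every attempt once the content is
# known-bad, and blocks nothing before detection.

def block_rate(uploads_before_detection, uploads_after_detection):
    """Fraction of upload attempts that get blocked."""
    blocked = uploads_after_detection
    total = uploads_before_detection + uploads_after_detection
    return blocked / total

N = 10  # uploads that slip through before the content is flagged

# A careful attacker stops after the first blocked attempt:
careful = block_rate(N, 1)    # 1 / (N + 1), i.e. about 9%

# A naive script keeps hammering long after detection:
naive = block_rate(N, 990)    # 990 / 1000 = 99%

print(f"careful attacker: {careful:.2f}, naive script: {naive:.2f}")
```

Same filter, same detection delay; only the attacker's retry behavior changes the headline percentage.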


It seems like you could fix this (at least for exactly identical content) by generating a fingerprint for each file as it is uploaded. If a file with a given fingerprint is marked as bad, then all existing files with the same fingerprint would be removed (or maybe just recently-uploaded ones, in order to avoid removing too many legitimate files). That being said, it would probably be pretty simple to work around this by varying the content slightly on each upload.
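A minimal sketch of that exact-match fingerprinting idea (this is an illustration, not Facebook's actual system; the names `upload`, `flag`, `store` are all made up): hash each upload, and when one copy is flagged, retroactively purge every stored file sharing the hash and reject future identical uploads.

```python
import hashlib

store = {}      # upload_id -> (sha256 hex digest, file bytes)
banned = set()  # digests marked as bad

def upload(upload_id, data):
    digest = hashlib.sha256(data).hexdigest()
    if digest in banned:
        return False  # reject known-bad content at upload time
    store[upload_id] = (digest, data)
    return True

def flag(upload_id):
    """Mark one copy as bad and retroactively purge all identical copies."""
    digest, _ = store[upload_id]
    banned.add(digest)
    for uid in [u for u, (d, _) in store.items() if d == digest]:
        del store[uid]

upload("a", b"video-bytes")
upload("b", b"video-bytes")          # identical copy, not yet known-bad
upload("c", b"other-video")
flag("a")                            # purges both "a" and "b"
print(sorted(store))                 # ['c']
print(upload("d", b"video-bytes"))   # False: future identical uploads rejected
```

As the parent notes, this only catches byte-identical copies; re-encoding or trimming a single frame changes the digest completely.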


What I said already assumes that you delete all instances of that content when you find out it's bad. FB already does this. The problem is that people see content before it is taken down. And once it's taken down, bad actors stop uploading it and change it slightly (enough to make it pass the existing upload filter).


80% of the videos probably did match a simple sha1 lookup and were trivially removed. Now what?


It's not an easy problem to solve if you want a system with an instant response for the poster.

But if I am a journalist and want to get something published, it will take at least a few hours to get it through an editor of some kind. FB could do this even just for video: when a user pushes something up, it takes a few hours to be published. It would cost FB, because some users wouldn't use it without that instant hit.


Some kind of perceptual hashing and whatever additional magic YouTube uses to detect copyrighted stuff? Once a video is flagged, it can be added to the perceptual database in a fraction of a second and all future uploads checked against it, YouTube seems to have gotten good at it.
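For anyone unfamiliar with perceptual hashing, here is a toy average-hash over a tiny grayscale pixel grid (an assumed stand-in for whatever YouTube and Facebook actually run; the real systems are far more sophisticated). Unlike SHA-1, small pixel changes flip only a few hash bits, so re-encoded near-duplicates stay close in Hamming distance.

```python
def average_hash(pixels):
    """pixels: flat list of grayscale values, e.g. a downscaled 8x8 frame.
    Each bit is 1 if that pixel is brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original  = [10, 200, 30, 180, 20, 210, 40, 190]
tweaked   = [12, 198, 30, 180, 20, 210, 40, 190]   # slight re-encode noise
unrelated = [200, 10, 180, 30, 210, 20, 190, 40]

print(hamming(average_hash(original), average_hash(tweaked)))    # 0: a match
print(hamming(average_hash(original), average_hash(unrelated)))  # 8: no match
```

A production system would hash sampled video frames this way and flag uploads whose distance to a banned hash falls under some threshold.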


Are those numbers from data or are they just your gut feeling?


Face detection? It's Facebook after all


80% of the content being blocked is effectively 0% of it being blocked. That’s not something to be commended. And you’re right: AI is nowhere near reliable, yet companies such as FB rely on it.


This discussion is also relevant for the upload filters the European Union deems necessary. If Facebook and Google (ContentID) can't do it reliably, who can?


One can find thousands of copies of the video of President Kennedy's head exploding from the bullet of an assassin. All over youtube. What's the difference?


Not on 22 November 1963, one couldn't.

The Zapruder film itself only began to circulate publicly, in a very limited fashion, in 1969, nearly six years later. The first broadcast was in 1970.

https://en.wikipedia.org/wiki/Zapruder_film


It was not created by the assassin and spread as propaganda.


"Know your enemy"


I wish so many platforms didn't shove content in your face that you don't want. There would be far less call to aggressively censor this material if so many people weren't using platforms designed to cram things down the user's throat.

People should be able to live their lives largely free of unwanted exposure to this stuff. We shouldn't have to outlaw people who want to investigate it in order to achieve that.


It's your own Facebook feed.


I'm having a little bit of a hard time figuring out the meaning of your comment.



It's absolutely awful. But I'm kind of glad it's impossible to fully filter. That's an important feature of the web. It's just going to really, really suck sometimes.


For more context into Facebook's decision-making process when it comes to censorship, you might find this episode of Radiolab insightful (it was for me):

https://www.wnycstudios.org/story/post-no-evil

From what I recall: Facebook's dilemmas over how/what to censor have existed since very early in the company's history. Internally, FB developed a policy document - separate from the public one we see - which would be applied by both human and AI enforcers. The employment of those human enforcers could be considered unethical in itself (at least without providing adequate therapy). Over time, the secret policy document has grown substantially to account for a lot of edge cases. While terrorist/graphic violence is normally censored, there are provisions for allowing content that would otherwise not be seen/known if not shared on FB (e.g., content that governments seek to cover up). Still, there is considerable internal disagreement among executives at FB when it comes to handling edge cases.

The fact that this secret document (and the processes that define and apply it) has so much power in shaping public perception, politics, etc. with so little transparency is perhaps the most disturbing part of this.


All the "free speech" advocates here, do you still stand by that when it comes to child porn, or specific threats against individuals? If you do, and publicly, well... at least you're consistent. If you don't then you're already ok with some kind of censorship, and the argument is just about where the line is drawn.

I really don't like when people talk about "censorship in general" when they really mean "This is below my line for what should be censored".

If you want to make the case for no censorship at all then you need to use the most extreme examples of what will be shared and consumed because that's what will happen.


Many/most free speech advocates seem pretty happy with the way it's implemented in the US, where both child porn and threats are illegal.

I really don't like it when people take the few, narrow exceptions to free speech in the US, and try to expand them: https://www.popehat.com/2017/04/18/the-seductive-appeal-of-t...


And if people want the right censored, then they’re accepting that the right, or the center, or whoever holds power, will censor new critics the way Martin Luther King Jr., the Black Panthers, Muslim movements, Native American rights movements, Puerto Rican independence movements, and other groups fighting for the rights of their people against the establishment were censored.

If we empower an elite to censor information or have any other power over us they will use it to further their own agendas even if that means inhibiting the freedom of millions of people.

https://en.wikipedia.org/wiki/COINTELPRO


Well, there we have it then: censorship should only be in force when not censoring deprives someone of life, liberty, or the pursuit of happiness. To put it another way, do not deprive a person of their self, their property, or the efforts of their work. Child porn would get caught under depriving children, those unable to consent legally, of their happiness, if not their liberty and in some cases their life. Video threats to living people would likely fall under all three as well.

As in life, our actions should never deprive another of their right to live and pursue their non-infringing rights. You have a right to yourself, your property, and the efforts of your work. We balance some of the latter in order to provide a more orderly society and benefit those who cannot fully sustain themselves.

A simplistic view, I admit, but when people start carving out exception after exception, we end up with no real liberty but a tyranny of violations, where everything you do is bound to be covered and even inaction is a wrong.


Why block child porn but not leaked porn tapes of adults? Are they somehow less deprived?


Leaked porn tapes were filmed with consent. If they were not filmed with consent they are illegal to possess.


Yes. You should not go to jail just for having some specific sequences of ones and zeroes on a magnetic device in your home.

You should go to jail if you do actual crimes, you know, with real victims. Not imaginary victims that don't know when you see a video in the first place.


I’m sorry, but that crass treatment of “ones and zeroes” as not being part of the real world is a serious logical fallacy. It should apply across the board—if you’re a serious engineer.

Those same “ones and zeroes” could cost someone their life as much as a mal-designed truss on a bridge could. That is not a new phenomenon.

The same applies to children's safety.


Ones and zeroes in my own property are not influencing anything in the "real world". If I use them and they are a virus and damage computer systems, that is a crime. If I made the virus just to learn about hacking, that should be perfectly fine.

Children's safety is not a digital thing in a computer, and AFAIK nothing suggests children's safety is more at risk from this than from other forms of violence that seem to be perfectly tolerated and celebrated in books, movies, and music.

If I'm trying to make a business out of child porn, sure, try to prosecute me. I believe selling other people's pictures without consent should be illegal in any case, especially in demeaning circumstances. If I'm just sharing it, though disgusting it may be, the law should not stick its nose in it. Just like it's legal to share books on how to make bombs and poison people.


> Ones and zeroes in my own property are not influencing anything in the "real world"...

So you're saying that having illegal material on your computer is not a problem because, well, it's "your property"? Like, it's fine for you to have child porn (or pirated software, or illegally downloaded copyrighted music or movies) because hey, it's all zeroes and ones, and you're not "making a business out of it"?

I empathize with the argument of some free speech advocates that it's complicated to find the right balance on where to draw the line in terms of censorship, but frankly the argument that you can do anything you want just because "it's zeroes and ones" and that is not "the real world" is plainly ridiculous. Thankfully what you propose is clearly against the law in most countries.


That is why I question any involuntary distributed file protocol.

What separates a simple file host from a complicit active member? I think at that point you're one and the same.

This matter isn't about an isolated machine; it's about a networked machine.

And to the rest of what you've said... I can't do anything but disagree with you.


While I agree with this notion generally, you're avoiding the relevant point, which is that information is not something that inherently can have no direct effect on the real world. Child porn is the usual example where you can start to make a reasonable argument that possessing it creates a real-world effect, not due to the possession itself, but due to the necessary action required to possess, which is seeking it out and acquiring it from a provider, etc etc.


Acquiring child porn doesn't have to be any more difficult than acquiring videos of beheadings and other vile acts. It was only made difficult to acquire out of puritanism, there's nothing intrinsic about it that suggests you have to commit further crimes to find it.

And if the idea is that the public is too dumb and easily influenced and therefore cannot be shown child porn or the killings in New Zealand etc., then it doesn't make sense at the same time for me to be shown ads for violent movies at the cinema.


And yet CP aficionados often seem to end up producing their own because it becomes an object of trade on certain black markets. Here's an example from just a month ago: http://www.foxla.com/news/local-news/porn-actress-director-a...

How is it that people who become interested in and start to collect CP would ever become involved in its production, in your model? Having gained access to a product they desire, those people seem to follow the same pattern as enthusiasts of any other taste, and develop an interest in creating their own material, as opposed to being satisfied with that already available.


Just out of curiosity, are you fine with AI-generated child porn?


People acquire child porn because they are the target market as consumers of it.

People, generally, watch video of a terrorist attack to be informed about threats to them or their country, a direct first amendment concern.

There is a direct connection between the consumers of child porn and its production that is absent in the other.


You are implying there is an actual industry producing child porn, filling quotas and receiving payment in some form; I don't think that's the case. In any case, people who watch (many) speeches of Hitler are also the target market of many businesses, and there is a correlation between doing so and being radicalized/extremist/dangerous, and yet it is perfectly legal.


First, there are thousands of CP videos already out there, so acquiring them doesn’t further harm anyone. Second, CP doesn’t necessarily harm children at all; it could be computer generated.


IANAL but knowingly transmitting CP is arguably "aiding & abetting" (i.e., encouraging) the "actual crimes" after the fact.


Yes but it shouldn't. Sharing some bad videos is considered bad, like child porn or NZ shooting. Sharing others is fine, like beheadings, other terror attacks, war footage. Making violent videogames is fine (see most FPS), but if they are a little more out of the norm they're bad (see a game called "Postal"). I'd like to know who decides on these arbitrary things.


You have to consider intent of sharing that content, and whether (a reasonable person would conclude) the intent promotes criminal/antisocial behavior.

CP is shared with the intent of reciprocity for more CP; hence sharing existing CP promotes future CP.

Violent/hateful content is shared with the intent to encourage/persuade others to perform similar acts, or to dehumanize the victims.


No, I don't. When I share a book teaching people how to make bombs, like the Anarchy Cookbook, I don't have to consider intent at all. Why is CP different?

CP, violent and hateful content are shared for a myriad of reasons, some for profit, like the documentary on the Bataclan attack on Netflix, some for sharing with fellow pedophiles. I don't see why I can't see 9/11 footage because someone else might feel different about it.


> and whether (a reasonable person would conclude) the intent promotes criminal/antisocial behavior.

No, it should be based on actual research and statistics. For example, many supposedly reasonable people assumed that violent computer games promote criminal/antisocial behaviour, but they actually don’t.


I’m leaning more towards your side of this argument, but after looking through the archive of the thread from 8chan where NZ shooter initially announced his plan and linked his streams, these guys are begging for proof and this character is a meme that they’re cheering on. It’s sickening, scary, and quite frankly, I can’t understand how you can get to a point that they are at.

That said, the sharing of the video in that thread is something I think we can all agree should be prevented as far as possible, 0's and 1's or not.

The media will come and share these videos for clicks and that’s a separate issue. Initially, this guy is knowingly becoming an Internet celebrity for murdering people. This part of the culture has to be addressed. I am worried this kind of meme-heavy killing is opening up a sickening can of worms.


Darn, if only there were some sort of information network that would allow people to find where things like laws and policies originate. I guess we'll never know.


Kids were killed in this attack too, unless I’m mistaken. So to add to your point, if you’re against child pornography, but not child snuff what’s the reasoning? Is it just because the social stigma around child abuse is so much stronger than murder? If so, then the child porn “exception” to free speech absolutism is arbitrary; the result of social pressure and not an ethical stance.

It seems sad to think people need social pressure to be against it, but I’m not sure how else to reconcile people who condemn the spread of one, but defend the other.


The point of child porn laws is to put perverts in jail. There are not legions of child snuff film watchers which also kill children.


We have laws against those specifics you mention under the first amendment. So yes, I would prefer the tech companies did nothing but assist law enforcement of the violations (when warranted).


So... how do you feel about the Hacker News moderation? Or say... spam filtering? Do you think those should go away too?


Moderation by censorship doesn’t sit right. Responding to comments and the upvote / downvote system seems fair enough. Spam is a tough one. As long as the user of a spam filter can modify their settings and see everything easily I’m not opposed. Perhaps we need to regulate spam / ddos in a similar way to incitement of violence.


> All the "free speech" advocates here, do you still stand by that when it comes to child porn, or specific threats against individuals?

Yes.


Hi I found your comment interesting. Yes, we have free speech forums where you can say and share anything without censorship. Yes, there are illegal things there although 99% of the content is legal. The idea is that you are in control of your own experience. You can & should blacklist entire sections of the website.

It's all peer-to-peer and onion routed. The only one who can censor things is the owner although moderators might be able to curate content on their boards (similar to what HN does). Oh, and if the owner throws away his private key then it will keep running just without new updates to the site.

People are afraid of freedom but it all works pretty well in the end.

Also, I agree with the Ayn Rand definition of free speech, i.e. the term censorship only applies to governments. When a company (such as Facebook) censors something they are really just expressing their private property rights.


I would treat child porn and specific threats against individuals just the same as I'd treat a shooting video.

Arrest the shooter, the child porn producer (unless victim==perpetrator), and the person making the threat. It is fine for others to keep and share the video, photos, text, etc.


I'm not sure that it's fine, since one would expect network effects to increase proliferations - and empirical results bear out that expectation: http://www.davidmckie.com/final-visualizations/


Well, do we want to catch people harming children, or do we just want to not be aware of it? We have an out-of-sight-out-of-mind policy right now. Reporting is discouraged, because then all your stuff would be seized as evidence and you'd be under suspicion. Lots of people probably destroy computers if they accidentally come across the stuff, since mere possession of the data is an offense.

If it spreads around, then somebody might recognize a perpetrator. I suppose we could get fancy, with a rule that you can keep it if you register it. That would make it far easier to track things back to the source and then see if anybody is getting harmed.


> We have an out-of-sight-out-of-mind policy right now.

Then how am I able to provide you with data collected by law enforcement authorities demonstrating decade-long trends, along with information on the methodology employed to collect it?


That is only about possession of what is sometimes evidence of abuse, and only about people who were caught with it.

The interesting numbers are how many cases of abuse are hidden from law enforcement because:

1. people won't recognize abusers they know because they won't dare look

2. people would rather destroy a computer than deal with unfriendly police procedures

Oh, BTW, one other problem I didn't mention. The current situation is very easy to use for framing a person. This has been determined to be the case in some divorce cases, and of course an unknown number of innocent people were unable to prove that they were framed.


You seem to be more interested in your hypotheticals than engaging with empirical data or learning about the rather extensive infrastructure that already exists to address the problem.


I think you missed the point that I disagree about what is and isn't a problem. I think your "increase proliferations" wording refers to people having harmless data, not causing actual abuse. It should also be clear that not all abuse is caught, and that discouraging people from reporting things is unhelpful to actual victims.


You're asserting that it's harmless; I'm asserting that the existence of a market creates demand for more and absolutely does lead to actual abuse. I know people who were raped as children specifically to make child pornography, and the evidence of whose suffering is traded among people who enjoy watching that. Oddly enough they don't seem to share your perspective.

You haven't provided any evidence for your claim about discouraging people from reporting things. I think most police departments have a procedure for dealing with the situation where someone calls to say 'hello I bought an old laptop/hard disk/whatever and it seems to have child porn on it, please advise.'

What do you know, there is even an official tipline specially set up for that purpose in the USA: https://www.justice.gov/criminal-ceos/report-violations#repo...

By all means continue to explore your imagination unencumbered by facts, but don't be surprised if people who live in the real world end up drawing different conclusions from yourself.


What about the privacy rights of the children in child porn?


What about the privacy rights of Phan Thị Kim Phúc?

What about the privacy rights of so many kids on album covers? (Nevermind, Virgin Killer, Blind Faith...)

It doesn't seem that society demands such a right.


All the "non-free-speech", "speech is violence" advocates here: Violence, call for violence against others, and crimes that happen to have a speech component to them, are NOT protected by the first amendment. Threats against individuals and CP very much fall under crimes, and they have ZERO protection under free speech.

> If you want to make the case for no censorship at all then you need to use the most extreme examples of what will be shared and consumed because that's what will happen

No free speech advocate worth their salt EVER demanded anything illegal to not be censored, including threats, CP, etc.

Absolutely no one should entrust private corporations with the power to decide what is and isn't extreme or acceptable speech, because it will immediately be abused (as is happening right now) by companies mostly based in left-leaning blue cities, with employee bases overwhelmingly leaning towards one ideology (hint: left), to censor speech and spread their worldview to the REST of the world. US law sets clear definitions on free speech, and is a lot more reliable than the "moral" compass of a few individuals at companies.

For instance, the very legal and harmless "learn to code" meme was only considered offensive by a very small minority of silicon valley Twitter censors, but anyone who tweeted it got an immediate temporary suspension.


Context is also important. I have no problem with displaying footage from Nazi experiments on concentration camp prisoners for educational purposes; I could, however, agree that it is ok to block the same footage when it’s shared by neo-Nazis or white supremacist groups on Facebook for lolz and pride.

I also think that this is a stupid argument: this footage is not illegal and can be considered newsworthy by many, hence it should be protected at least in some narrow context.

Child pornography on the other hand is illegal. Broadcasting or displaying child pornography has no newsworthy or any other value in any context other than the pornographic one.


For me, the line is what is legal in your area of operation. Not what is acceptable to potential advertisers.


Yes. Go after the authors of said horrible crimes, not after those in possession of a video thereof.


Why do the perpetrators of such crimes put themselves at such extreme legal risk by distributing evidence of their criminal acts? Do you think those media end up on the internet by accident, or are they intended to be shared? Would you say the audience for such content is more or less likely to originate new content than an equivalent-sized pool of non-possessors?


“Think of the children”


> All the "free speech" advocates here, do you still stand by that when it comes to child porn, or specific threats against individuals?

What if we just oppose continuously expanding the types of content that can be censored?


Sure. Except people also confuse "this is where the line is" with "this is the material that will be censored." The line is merely how much material those controlling censorship are able to censor. And nowadays a lot of very vocal right wing politicians are getting elected across the world. Allowing censorship merely moves that line and what ultimately gets censored may be very different from what you wanted or expected to get censored.


This is why we need decentralized, hard-to-censor platforms. Or at the very least a competing platform that takes a different stand on the issue.


Other platforms that don't moderate already exist. Unfortunately, because they don't moderate, they are so awful most people won't use them.


I build a lot of computer vision systems and did my MSc on classifying what’s happening in video and I am VERY skeptical that it’s possible to prevent this from happening (without having a person watch every video).

You can edit a video to make it appear as safe content and it’s a cat and mouse game because you need humans to annotate those variations before a ML system can detect them (an unsupervised model will likely lead to loads of false positives).

I think the solution is better control over what we see and who we trust enough to allow them to broadcast into our brains.


It is a regrettable situation that is playing into attacker's hand.

Now a single tiny state gets to impose censorship worldwide? How is that not an inflammatory action?


I'm not sure how Facebook stores media assets.

But it would be stupid to duplicate a video for a shared post more than once. A one to many relationship (one video used by multiple shared posts) is more likely.

If that's the case, it should be easy to delete the original video, and then all of the shares lose their content.
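As a sketch of that one-to-many layout, here is a minimal, entirely hypothetical schema in SQLite (table and column names invented for illustration); deleting the one stored video empties every share that pointed at it:

```python
import sqlite3

# Hypothetical storage layout: one row per stored video,
# many posts referencing it via a foreign key.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this opt-in
conn.execute("CREATE TABLE videos (id INTEGER PRIMARY KEY, blob_ref TEXT)")
conn.execute("""
    CREATE TABLE posts (
        id INTEGER PRIMARY KEY,
        video_id INTEGER REFERENCES videos(id) ON DELETE SET NULL
    )
""")
conn.execute("INSERT INTO videos VALUES (1, 's3://bucket/abc123')")
conn.executemany("INSERT INTO posts VALUES (?, 1)", [(n,) for n in range(3)])

# Delete the single source video: every share keeps its post row
# but loses its content (video_id becomes NULL).
conn.execute("DELETE FROM videos WHERE id = 1")
remaining = conn.execute(
    "SELECT COUNT(*) FROM posts WHERE video_id IS NULL"
).fetchone()[0]
assert remaining == 3
```

Whether Facebook's media store actually works this way is, of course, pure speculation; the point is only that deduplicated storage makes takedown of shares cheap.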


But trolls are smart and don't share, they reupload.

Which is still easy to catch if you hash the video. But doing something like mirroring or otherwise making a small tweak to the video breaks that hash lookup. Hence 20% making it through.
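The hash-lookup weakness is easy to demonstrate. Below, a cryptographic hash misses after a one-bit tweak, while a toy perceptual hash (an average-hash sketch over made-up pixel values, not Facebook's actual system) survives it:

```python
import hashlib

# Exact-match lookup: a cryptographic hash of the file bytes.
# Any change at all (re-encode, trim, mirror) yields a completely
# different digest, so the blocklist lookup misses.
original = b"fake video bytes" * 1000
tweaked = bytearray(original)
tweaked[0] ^= 1  # flip a single bit

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(tweaked)).hexdigest()
assert h1 != h2

# Toy "perceptual" hash: one bit per pixel, set if the pixel is
# brighter than the frame's mean. Small global tweaks leave it intact.
def average_hash(pixels):
    mean = sum(pixels) / len(pixels)
    return [int(p > mean) for p in pixels]

frame = [10, 200, 30, 180, 90, 250, 5, 120]  # invented pixel values
brighter = [p + 3 for p in frame]            # small brightness tweak

assert average_hash(frame) == average_hash(brighter)
```

Real systems use far more robust perceptual fingerprints, but the cat-and-mouse dynamic is the same: attackers look for transformations that push the fingerprint past the match threshold.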


The body cam video clip showed the spent casings disappearing into thin air. It was a doctored video with special effects added. I saw it and reposted it on both my Twitter and Facebook page. This video covers the discrepancies. Two men in baseball caps were also shooting. https://youtu.be/dOCwYd4ogFY


I'm thankful Hacker News doesn't have pictures or video, and it's too bad more online forums don't do the same. More places should stick to plain text. You can link if you need to.


I've done some work in the AI content moderation space and I have come to suspect that while computer vision is advanced enough to probably detect all of this, the neural nets are so big and deep that it costs over 10x what anyone is willing to pay to run, on such a large scale.

It's like looking for needles in a haystack, and it is expensive.

There is a cost-quality tradeoff with these neural nets as well, but if you go cheap, then your accuracy isn't useful either.

I don't know that such a sweet spot exists today where it is both cost effective and useful.
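A back-of-envelope sketch of why that cost wall appears; every number below is an invented assumption, not a figure from any platform:

```python
# All inputs are made-up assumptions for illustration only.
uploads_per_day = 4_000_000          # assumed video uploads per day
avg_minutes = 3                      # assumed average video length
gpu_min_per_video_min = 0.5          # assumed inference time per video-minute
gpu_cost_per_hour = 2.50             # assumed cloud GPU price, USD

gpu_hours = uploads_per_day * avg_minutes * gpu_min_per_video_min / 60
daily_cost = gpu_hours * gpu_cost_per_hour

print(f"~{gpu_hours:,.0f} GPU-hours/day, ~${daily_cost:,.0f}/day")
# Under these assumptions: ~100,000 GPU-hours/day, ~$250,000/day.
```

Shrink the model tenfold and the bill becomes tolerable, but (per the tradeoff above) accuracy may drop below the point of usefulness.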


I wouldn’t have thought that scoring video with a pre-trained model would be much more computationally expensive than other routine operations like transcoding and compression.


But it is indeed quite a bit more computationally intensive, especially for the highest quality models.


People around the office were watching it. Shockingly easy to find.


Everyone keeps saying how easy it was to find. My experience was different; I eventually came across a torrent of it, but it took me surprisingly long, 30 minutes or so. Maybe I'm bad at googling?


It was very easy to find within the first hour or so of the news, at least. I googled the name of the shooter maybe half an hour after people started tweeting it, and the first Google results were YouTube videos containing some of the footage [0]. It actually surprised me that Google's search results would put YouTube videos at the very top of SERP for a term that had an anomalous spike (because of the shooter's uncommon name). I'm less surprised that they hadn't manually blacklisted it -- I don't remember when I first saw it on Twitter, but it wasn't from news stories or an official source.

0: https://i.imgur.com/rBZcdCa.png


You have to ask what sort of people would watch this.


The same people who gathered around TVs on 9/11 after the first strike and then saw the 2nd strike live? So most of the country?


On the morning of September 11, 2001, nobody knew there would be a second attack on the other tower. I distinctly remember everyone including myself thinking about the B-25 that accidentally ran into the Empire State Building. We were all at work, so it wasn't like we were watching TV.

You're assuming at the time that everyone knew it was a terrorist attack, which is false.

Nobody knew that until well after the second plane hit. And even after the third plane crashed into the ground.

The only reason anyone watched TV was because it was a major event with a dramatic incident. Most of the people in my town at the time didn't turn on the TV; I had to ask several times at a bar to have the TV turned on. Even then people had no idea what was going on.


No assumptions. We were gathered around the TV in the conference room after the first one hit and part of the conversation was whether it was terrorism, since it was pretty obvious you don't just happen to ram into the side of the tallest building in the country.

But your comment is a non-sequitur regardless: even once we all knew it was a terrorist attack, it was replayed endlessly on TV for months, including the people throwing themselves off the building.

I haven't watched any of the footage of the New Zealand attack, but to somehow say it is exceptional to watch the footage is historically speaking, incorrect.


There's a pretty big difference between watching an event that killed people, and watching people - individuals - die.


Can you elaborate on the difference please, I’m not sure I see it myself...


Fair point, how about the people who turned on the television in the subsequent days, knowing that the video of both towers was being played on near constant repeat?


But it continues to be featured and replayed to large audiences to this day, so that excuse doesn't really add up.


Why are people reading news about such events which happen on the literal other side of the world, and spending a week discussing it in their social circle?

To what end? Becoming an informed voter? Don't think so. It's entertainment, just like watching the video. It just sounds less socially acceptable.



I refuse to watch it or even seek it out, personally I think it’s disrespectful and way too morbid. But I do understand the morbid curiosity. Very strange fact of human psychology.


Not strange at all. We crave knowledge because knowledge is extremely useful. Seeing, understanding and learning from anything and everything is how we survive, advance and evolve.

Just because we live in a society nowadays that promotes being weak doesn't mean our base instincts are not there.


People who like to verify what they are being told.


I wasn't too interested, but now I feel a duty to watch it due to the censorship. I think we all have that duty.


- Politicians

- The Police

- The Military

- Journalists

- The people who care to carefully understand a situation before forming a stance


Aspirants.


I wonder: if I try to upload a music video by Britney Spears, what will the false negative rate of FB’s censorship algorithms be? Will it be the same 20%?


>>Facebook failed to block 20% of uploaded New Zealand shooter videos

Fixed title: Facebook blocked 80% of uploaded New Zealand shooter videos, or 1.2 million of them.


And how is the world supposed to be eaten by AI?


I am almost as saddened by some of the responses here as I was after hearing about the shooting.

Have you all become so gradually desensitized to violence that absolutely nothing is unacceptable anymore?

Are YOU, personally, YOU, happy with how things are in the world?


That anyone feels the public has some "right" to watch other people die turns my stomach. I didn't grow up on 4chan, or watching far-right propaganda on Youtube, so I still see sadism as a character flaw, not a virtue.


thank you.


At any rate, as far as distribution by private companies goes, that's my view. Before someone jumps on me, I don't think government should outlaw documentation of atrocities, etc.


I recommend re-reading the comments here. Nobody is saying that what happened is acceptable, every comment here has its own reasoning for why they viewed it or why they don't think it should be censored.


Wait, have you read the responses HERE?

Most of them are 'how dare they try and censor us'.


Please try and remember that the entire objective of this attack is to set everyone at war with each other. Let's not fall into that trap here.

The matter of whether or not censorship increases or reduces the propensity for attacks like this to happen is a complex and vexed one.

If you see anyone discussing the topic in bad faith, you should flag the comment and report them to the mods, and they will take the appropriate action.


Facebook and YouTube would sure as f--- fix this problem if, you know, it was the law that they hold liability for any and all content hosted on their platform that they were, you know, profiting off of by displaying ads next to said content.

But that would require regulations, which is bad for job creation or something.


The US has shootings this big most years now. Las Vegas festival shooting, 2017, 59 dead. Sutherland Springs church shooting, 2017, 27 dead. Orlando nightclub shooting, 2016, 50 dead.

Yes, guns don't kill people, people kill people. But for a high body count, you need a good assault rifle like the AR-15 family, the choice of most mass shooters.

Streaming your mass shooting is fully supported by off the shelf products. GoPro sells an official mount for mounting a camera on a rifle.[1] Video of deer hunting with this.[2] Instructions from GoPro on how to set up your camera for live streaming.[3] It's so easy now that anyone can do it.

And it's all legal in most US states, right up until you pull the trigger. America, Land of Guns!

[1] https://www.amazon.com/GoPro-Sportsman-Mount-Official/dp/B00... [2] https://www.youtube.com/watch?v=zAOTO5ydGbE [3] https://gopro.com/help/articles/block/getting-started-with-l...


I don't follow. Ban certain guns and GoPro? What happens when someone crazy starts stabbing people with hunting knife? Ban knives? Ban live streaming? I don't follow how you turned a shooting in NZ into a `America, Land of Guns` rant.


Because it turns out that New Zealand, where this happened, has much laxer gun laws re: semi-automatic rifles than where the shooter was from: Australia.

Quite directly, this person went somewhere where it would be easier to acquire the guns he used in the attack.

Unlike America, New Zealand is probably going to implement Australian style laws after this.


Stabbing people with a hunting knife is not as scalable as mowing down people with an AR.


Here in the U.K. we’re more or less having this conversation about knives. A lot of stabbings occur, but notably you don’t get hundreds killed and injured from one guy with a knife. Still, knife crime is being used to justify what Americans call “stop and frisk” and to restrict what knives are legal to own or carry. I’m in the middle on the topic, insofar as I’m against the usual Tory law and order rubbish, against stopping people to search them, but I agree that you don’t need to walk around a city with a machete or even a kitchen knife. On the other hand, predictably, the government is taking the opportunity to squeeze social media over what they’re allowed to show. It’s all very cynical, but at least our yearly body count per capita is dwarfed by America’s. We don’t do everything right, but we get that right at least.

And sure, even if we get rid of all the knives, there are still assholes who throw acid at you, but again the scope and severity is reduced. New Zealand had a terrible mass shooting many years ago, and much like Australia restricted access to certain firearms (https://en.m.wikipedia.org/wiki/Aramoana_massacre). I expect they’ll further restrict firearms going forward, again like Australia, and while they’ll still have violent incidents they won’t lose 50 lives each time. https://en.m.wikipedia.org/wiki/List_of_massacres_in_New_Zea... https://en.m.wikipedia.org/wiki/List_of_massacres_in_Austral...

Look at what happened after Port Arthur. The worst attacks were vehicular and arson, and still it takes adding many together to add up to one mass shooting. There is a reason why people outside of America don’t act like guns aren’t part of the problem, and an easy part to address. They’re willing to accept the value of harm reduction, because they look at how many were killed and wounded in just Las Vegas, or the nightclub in Orlando, or Aurora. Look at this list! https://en.m.wikipedia.org/wiki/Mass_shootings_in_the_United...

America lost more people in the last decade alone to mass shootings than NZ lost in all of their massacres with all weapons and in all circumstances (including wartime) since the 1800’s. In a list of the worst mass shootings in history, you have ISIS terrorism in France, Al Qaeda terrorism in Kenya, Anders Breivik in Norway, terrorism again in Kenya, and the next is Las Vegas. The rest of the list is mostly terrorist incidents in Africa and parts of Europe, mixed with a huge number of mass shootings from average citizens in the U.S. https://www.worldatlas.com/articles/the-deadliest-mass-shoot...

Maybe guns aren’t the problem, but some combination of factors is and Americans as a group seem unwilling or unable to confront those reasons effectively. Other nations do, NZ has and will, and they won’t tear each other apart doing it.


I don't know how you managed to turn a story about a gun massacre in New Zealand - a country with strict gun control laws - into a critique of Americas loose gun control laws.


> GoPro sells an official mount for mounting a camera on a rifle

Is this something GoPro should be ashamed of? 99.99999% of gun owners aren't criminals.


You're off by about 3 orders of magnitude.


> 99.99999% being off by 3 orders of magnitude.

So you're implying that 99.99% of gun owners aren't criminals then?


No, I'm implying you should prefer accuracy over hyperbole.


ironic


how


The video in question was not filmed with a gopro on the rifle. The gopro was on his hat/helmet. Perhaps you should be raging against helmet mounts for gopros, which facilitate this sort of crime no less than gun mounts?

(In fact, the helmet mounts facilitate live streamed mass murder more than rifle mounts do. Mass murderers often bring along numerous guns and would have to spend time swapping the camera between guns.)

Edit:

Incidentally, those same rifle mounts are for sale in Germany and the UK (and likely many other countries):

https://www.amazon.de/GoPro-Halterung-Schrotflinten-Revolver...

https://www.amazon.co.uk/GoPro-ASGUM-001-Sportsman-Holder-Ca...



