Since we're on the topic of automated algorithms, one thought I had, which I have trouble wording the right way (maybe someone can help me out here):
My problem with a lot of big tech compared to traditional companies is that they get to benefit from the economies of scale of software, which affords them great profit margins, without dealing with the consequences. For example, people constantly say that there's too much content for Facebook to moderate, etc. Then maybe this is a broken business model? Compare this to a mortgage servicer that is regulated to have sufficient customer support. Their budget to do this grows roughly linearly with their customer base, while Facebook's budget for support, moderation, etc. grows far more slowly, closer to logarithmically, simply because they barely have any support.
I'm not suggesting Facebook should be policing content per se, but they should be required to handle incidents in a timely fashion whether it's removing illegal activity (or reversing a mistake thereof), helping victims escalate harassment to authorities, or whatever.
Yes, this costs a lot, but perhaps that should be the inherent cost of running such a platform.
Companies like Facebook & Google are able to scale up in part because they're avoiding spending money and time on moderation & customer support. If they were legally required to maintain a certain level of support, then that would help curtail their speed of scaling up, and probably be a good thing in general.
Comparing the customer service of Facebook to a mortgage servicer is tricky because of the value and the volume. Facebook has orders of magnitude (I'd guess 10,000x) more content to moderate than a mortgage servicer, which really only answers questions when something goes wrong (maybe 3x every 5 years), and where the stakes are a million times higher.
The realistic options for social media are: free (or cheap) to use with occasionally bad moderation; paying $5 per post or comment ($10 for risky ones) so that multiple human moderators can review each item to limit abuse and false positives; or no social media at all. My point is that the amount and quality of moderation people say they want is more expensive than they'd pay for these services, but there's still a lot of value to them, so we're left with imperfect moderation.
So instead of the company bearing the cost of proper moderation we as a society are perfectly happy to let them offload the cost as an externality on all the rest of us? Before social media existed the world got along just fine - if these companies are causing problems then they need to address those problems or they can go back to not existing... both seem like perfectly reasonable options to me.
I agree with requiring them to cover any new costs they’re creating for society as a whole, but it definitely isn’t the case that “Before social media existed the world got along just fine”.
Racism, sexism, homophobia, sectarian terrorism, and that’s just what I remember from childhood. Go back further, Cold War nukes and McCarthyism and presidential assassinations, Trade Union strikes and The Winter of Discontent, the Winds of Change and the Cypriot war of independence[0].
We’ve always been like this; the question is if social media is making it worse, and how much worse.
[0] the Cypriot war of independence isn’t ever discussed in the UK, to the extent that until a few years ago I assumed Cyprus just used to be part of Greece and not under British colonial rule
- Avoided negative externalities. FB/Google get to ignore the costs of insufficient moderation by imposing those on their users or third parties (say, the victims of the Myanmar genocide).
- Monopoly rents. By securing a position in which they are the arbiters of access control, these giants can set both prices and terms for participation.
There's another element you're suggesting to me as well, which I don't think I've seen anyone articulate before: large monopoly players such as Facebook and Google effectively emerge into a "naive environment", one in which the scale which they provide has not yet evolved threats. Any threats which do emerge co-evolve with the monopoly.
Subsequent entrants don't have this advantage. Whilst they do have the benefit of techniques for combatting such threats, they must address the threats from the beginning.
The threats might variously be spammers, various network attacks (most especially DDoS), sockpuppet and other misrepresentation and manipulation attacks, harassment, and the usual laundry list of Trust & Safety issues.
The notion of co-evolving threats comes from biology generally, though I'd strongly recommend Kyle Harper's The Fate of Rome, which postulates that civilisations (themselves social networks) and their plagues co-evolve alongside one another. I've just learned that the Santa Fe Institute crowd are fans of Harper's (Geoffrey West and David Krakauer, in a recent Complexity podcast episode), as well as Neal Stephenson.
- "Insufficient" moderation is not a cause for an externality. The default level of moderation in any system, whether in nature or on the Internet, is zero. To imply otherwise is fallacious thinking. There is no duty of care on Facebook's part to prevent bad actors from using its services any more than there's a duty of care for a random passer-by to stop a mugging.
- How does monopoly rent exist when nothing is being charged to the end-user?
We've seen what the natural state of even modestly-sized fora with no effective moderation is. They simply don't function.
You'll find this in real-world spaces as well, in the forms of attractive nuisance, negligence, joint liability, and other forms. There are reasons venues provide security in potentially volatile crowd situations, from bars and nightclubs to sports arenas.
The news there is that the system is so bad that it has a very high false-positive rate, including something that should never have been a false positive in any proper system. That's like the police arresting the mayor of San Francisco for a robbery that happened at the same time in Miami and was perpetrated by a male. Sure, they let her go after the lawyers got involved, but wouldn't you call a policeman who makes such an obviously wrong arrest an idiot unfit for the job, regardless of the fact that the error was later corrected? Wouldn't you call for re-training, or maybe even firing, such a policeman, even if you agree that there should be police in general and that police work is important?
I think if we design artificial idiots, and entrust them to gatekeep our society's communications, it's newsworthy to point that fact out.
I always assume that these kinds of automatic censoring are largely based on user reporting. Arresting the mayor of San Francisco for the robbery in Miami makes sense if hundreds of people called the police to report that the mayor was the culprit. It's likely a bad arrest, but it's worth at least figuring out why so many people are reporting him.
Unfortunately that allows bad actors to game the system, but not quickly responding to reports allows the truly heinous content to remain on the site for longer.
I kinda have a hard time believing a lot of people would report a Merry Christmas post as discriminatory. I'd believe it about any other political post, and I know there are some people who are triggered by any mention of religious holidays, but there wouldn't be hundreds of them flocking to brigade a relatively obscure Canadian politician. It just doesn't look like a realistic scenario. I think it's pure artificial idiocy. I may have some conspiratorial thoughts about why the artificial idiocy FB is designing leans in this particular direction, but since I don't have any data I'll just leave it at that.
He's Canada's version of a member of the House of Representatives, and it's the kind of thing that's probably done by the relative number of reports vs. the number of views, so being less well known would make it easier for a small group to achieve.
Not saying this justifies anything, but at least the reasoning for the high false positive rate is simple.
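If that's roughly how it works, a toy version of ratio-based flagging might look like the sketch below. The threshold, function name, and all numbers are purely hypothetical illustrations, not Facebook's actual system:

```python
# Hypothetical sketch: flag a post when reports exceed some fraction of views.
# A small coordinated group can trip the threshold on a low-visibility account,
# while a well-known account would need far more reports.

def should_flag(reports: int, views: int, ratio_threshold: float = 0.01) -> bool:
    """Flag when at least 1% of viewers reported the post (made-up threshold)."""
    if views == 0:
        return False
    return reports / views >= ratio_threshold

# A celebrity's post needs thousands of reports before it trips the ratio...
print(should_flag(reports=500, views=1_000_000))  # False: 0.05% of viewers
# ...while 30 coordinated reports can flag an obscure politician's post.
print(should_flag(reports=30, views=2_000))       # True: 1.5% of viewers
```

Under that kind of rule, obscurity itself becomes a vulnerability: the fewer views a post gets, the fewer reports it takes to cross the line.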
Facebook gets a lot of bad PR for having lots of misinformation.
It makes sense from a business perspective to overcorrect, because the general public is way more angry about potential misinformation than about people having their normal posts taken down.
Again, not saying it’s justified, but having 1 major anti-vaccine post get through would be way worse for public image than 100 Christmas greetings being taken down and then corrected.
"because general public is way more angry about potential misinformation than people having their normal posts taken down"
It may feel that way from inside Facebook, headquartered in a place so left wing it's basically decriminalized theft. It doesn't look that way from many other parts of the world, where many people believe that "misinformation" is a dystopian propaganda term meaning any statement unacceptable to the hard left, that "fact checking" is a joke and that all kinds of important and useful information is being censored by tech firms.
Imagine if this guy wasn't a Canadian MP, and instead just some random person without reach or influence. Do you think this would still have gotten noticed and solved as quickly as it did?
Stuff like this always reminds me about this story [0], that became only a story because the editor of a rather well known Norwegian newspaper ran into that problem himself.
There is no telling how many people before him ran into the same issue and just went, "Well, I guess then I can't do anything about it," because they didn't have the ability to publish this issue in a renowned traditional news outlet, which actually gets it attention.
At best those other people might have described their experience somewhere, and probably got called conspiracy nuts for it.
"Oops, the algorithm did it" cannot be an excuse for everything. Oh, the algorithm did it? Then hold the people who developed it or pushed for it responsible.
Oh it's hard to make algorithms that don't make terrible decisions? Maybe then you should remove those algorithms and, say, have an ad review team.
Isn't approximately half of the Facebook workforce doing manual content moderation already? I was under the impression that it's an enormous time suck and morale killer that lets a ton of awful stuff through, hence the desire for automated solutions.
In that case FB doesn't have enough workforce. If it's impossible to have enough workforce then scale down FB. It's not heating or water, if it goes away tomorrow as a Christmas present we can easily do without it as we always did. Actually we'll be better off IMHO.
If Facebook were forcibly broken into several federated social networks, then ones that kept making "mistakes" like this one would lose all of their users to ones that didn't.
If DOJ broke up FB, it would be split along business units. There is no possible outcome in which the core FB product would be split into "several federated social networks," which is a different technology and business altogether.
The phone system was already a large collection of inter-operable regional systems. Facebook (the core product) exists as a monolith. They would have to invent, build out, and scale up a completely new technology stack to support federation. That's not on the table.
The potential break-up of FB would be along business units (Oculus, Insta, Ad Network, etc).
Because it's just another example of the decade-long trend where the former middle class is being gradually turned into corporate drones priced out of having a family, house ownership, savings or retirement, and all things that were associated with the good old middle class days are being carefully erased under various noble guises.
This would have been an understandable mistake if it were Facebook's first rodeo; it's not.
For the last couple of years, anything that used to be "Merry Christmas" was actively suppressed on mainstream social networks. Replaced with "Happy holidays", whatever. A company adjusting this is fine, obviously (you don't have to repeatedly wish happy X, you just do it one time). However, in the case of individuals, where it's more common that a person only says Merry Christmas/Happy Hanukkah/etc, we've repeatedly seen that such messages get "lost". Again, it's not a mistake, but then again, if you point out that Facebook actively engages in this behavior through leaked documents, nobody bats an eye.
It's possible that content-based suppression happens, but it's also possible that the genericized phrase organically generates more engagement by virtue of it being likeable to way more people. If all your followers celebrate the same holiday then it should be a wash but if huge chunks don't then it's purely the numbers doing their thing. Even some who celebrate the mentioned holiday might find the message exclusionary enough to not engage (a woke feedback loop, you could say), further burying it.
I don't think they're talking about just getting the most likes, or whatever but rather censorship via shadow-banning or outright deleting posts. I have seen it happen, and suspect there are those that work to "poison" the algorithms to get them to shadow ban the political side the hacker doesn't like.
No, it is not an extraordinary claim at all. Moderators on any random forum can set these tools up.
And yes I literally had my post shadowbanned for a benign statement that apparently had the wrong keywords in the wrong arrangement or something. So yes in that sense I have evidence.
I agree entirely with your point, but I'd take it one step further. Christmas for a lot of people (even those that do celebrate) can be a very depressing and lonely time of the year, even more so in these post covid times. I prefer a generic "happy holidays" as it applies to more people, though I've moved onto "happy solstice - the days are finally getting longer (or about to)". I think that's a little bit of positivity that it's actually worth reminding people of.
This year, Merry Christmas and Happy Hanukkah are almost a month apart (Hanukkah started on Nov 28th). So you'd have to either have Merry Christmas in November, or do it twice anyway :)
based on the dates that retailers put out all the christmas stuff immediately after halloween, it's apparent that christmas has already gone retrograde by date far into november already...
there's stores around here with their whole aisle of valentines day candy, gifts, red heart balloons etc already stocked.
There's a fairly rational psychological explanation to it. When people cannot achieve a certain goal for a while, they convince themselves that the goal is not worth it, or tainted in some way.
Christmas has been traditionally associated with family, kids, happiness, seeing retired grandparents, etc. All these things have been sliding out of reach for the middle class with no reversal in sight. So this creates tension against people that can still afford it, and a perception that these goals are unfair and should not be pursued.
This is to be expected - it's not like the journalists writing the politically correct articles, HR people defining acceptable language, or moderators deciding on what gets approved are paid enough to afford what used to be a regular middle-class life 2 decades ago. So they rebel against it by trying to stigmatize it in the way they can.
And it's not like you can blame them either - the human brain is very adaptive at picking goals and means. If you take personal prosperity through hard work out of the question, people will find other goals to pursue; it's just that they will be much more divisive...
I think it's simpler to say that "Merry Christmas" vs "Happy Holidays" has become this point of contention because Merry Christmas only applies to Christians, whereas Happy Holidays does not.
The divisive, two-party nature of American politics has just turned this small distinction into a fight.
Really it doesn’t. I’d bet that the majority, worldwide, of people celebrating Christmas this year are not practicing Christians. Most won’t go to church, even on Christmas Day.
Sure, but most of them are probably "culturally Christian" in a broad sense. For example, most of them would be able to explain in broad terms the religious significance of the festival and feel comfortable following a number of religiously tinged Christmas traditions. It's worth bearing in mind that most religious festivals are not celebrated only by devout followers of the relevant religion.
Oh, I think there's more to it than that. It's somehow not divisive for Jews to wish wide audiences a happy Hanukkah. It's not divisive for Muslims to wish their audiences a blessed Ramadan. It's not divisive for Chinese people to wish others a happy Chinese New Year. But it's somehow "problematic" to say "Merry Christmas".
In all honesty, I can't imagine being offended if someone wished me a happy holiday that they celebrate and I don't. I would wish them one back and maybe ask how they celebrate it, to learn something new. On the other hand, if the Christmas experience was something the previous generation used to have and I got priced out of it, "Merry Christmas" would sound like someone is rubbing it in...
Very few are actually offended, but tons of people know that there's a widespread discussion about not offending, and avoid poking the hornet's nest for that reason. Not quite analogous to, but eerily reminiscent of, the Latino/Latinx idea.
The alleged “War on Christmas” (and the associated idea that it is somehow no longer possible to say “merry Christmas”) are by no means a new phenomenon. They have long been favorite talking points of Ron Paul and the AFA, for example.
If the President of the USA can say “merry Christmas” without anyone complaining, then I’d have to say that the scale of the issue is (and always has been) enormously exaggerated in the rough and tumble of American culture wars.
No, it's not. I am at college and I have experienced this twice this year. One time I said "Merry Christmas" to a group, and a girl found me later one-on-one and said that I should "think about being more inclusive" because different people celebrate different holidays in this season. The second time, I said it to a different group when I arrived, and a girl straight up interjected "And happy holidays!" right after I said it; she wasn't arriving with me or anything.
As we all know: just because you haven't personally experienced something doesn't mean it doesn't happen :)
Btw my college isn't even super left-wing, so I was surprised.
No? Then why did a Merry Christmas message by a right-wing politician get automatically flagged specifically for being "discriminatory"? Obviously someone somewhere thinks it's at best a gray area.
I think john_moscow's theory has a lot of merit, although I think it's more related to general D&I ideology than people being priced out of Christmas specifically. They're just desperate for ways to show how much they care about foreigners vs local people, and this is one more way.
I don't like how those tin foil hats I used to laugh at were just ahead of their time. Your TV spying on you or your phone apps waging war against Christmas ...
> FB biases are too extreme. The best thing for FB is to be broken up.
That's a non sequitur. Breaking up FB will not de-bias "the algorithm". We'd have one company doing VR, one doing phone calls, one doing image sharing, and finally a company running the old social network with the exact same recommendation and content-moderation system.
When Bell was broken up, it was into regional companies that were forced onto a common interchange that accepted traffic from competing telcos. I'd desperately like to see the same, where social media sites above a certain size are required to import/export data and play nicely with each other's content.
Exactly. I don't understand how forcing a spin off of Instagram or however you want to organize the breakup resolves any of the problems HN has with FB.
Instagram would still be run by the same people, owned by the same people, and run with largely the same goals (profit, expansion, etc...)
Honestly at this point, I don't really care. I never thought I'd hear myself say this, but breaking up FB as a matter of principle may be a better idea than letting them slowly take over every tech sector.
The main problem is that it didn't seem to have much effect. Fast forward one generation, and AT&T is still pretty much a monopoly in most areas. It would be fascinating to know what long-term impact it actually had.
Breaking up a large company may benefit the company itself, too. Chaos is opportunity -- this is practically the thesis of startups -- and nothing would be more chaotic than to suddenly split Facebook's power into subunits, each with their own independent leaders and their own agendas.
The social network would be the most powerful subunit, sure. But it wouldn't be able to pool its profit with all of the others, which has a very real impact on the power it can wield. For example, imagine if Facebook wasn't able to acquire Instagram at all. What sort of empire would Instagram have built? We can't know. But if Facebook can't buy competitors, it unlocks more options for knocking down the castle.
Honestly, my main problem is that I'm having trouble coming to terms with why I want to see Facebook demolished. I don't want to turn into a hater. I guess my most powerful data point is that they ruined Oculus, a fact I'm still saddened by, and now they're trying to appropriate whatever the Metaverse will turn out to be.
One could argue that some company will do that, so it may as well be Facebook. But that logic doesn't work too well for a company that controls so much already.
I share a similar sentiment/observation. I am OK with a world where the occasional mistake occurs with algorithms. But when things like this continually happen with Facebook's algorithm, it feels as though they are trying to outsource too much to computers when they should involve humans.
In short, their tools tend to be overly restrictive, erring on the side of rejecting. I'm not sure that's right.
It's a statistical inevitability that this will happen all the time at Facebook's scale, even if you have an algorithm with a 0.00001% false positive rate, because they deal with billions of items per day. It's actually fairly rare unless you're purposefully bashing your content against their stated boundaries, but huge numbers make thousands of these events happen per day around the world.
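To make the arithmetic concrete, here's a back-of-envelope sketch; the daily volume and error rates are illustrative guesses, not real Facebook figures:

```python
# Illustrative only: even a tiny false-positive rate produces many daily
# mistakes at huge volume.
daily_items = 3_000_000_000  # assume ~3 billion posts/comments reviewed per day

for rate_percent in (0.00001, 0.0001):  # tiny error rates, expressed in percent
    wrong_takedowns = daily_items * rate_percent / 100
    print(f"{rate_percent:.5f}% false positives -> {wrong_takedowns:.0f} bad takedowns/day")
```

Even at roughly one error in ten million items, that's hundreds of visible mistakes every single day, and a rate just one order of magnitude worse puts it into the thousands.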
Just like society has learned to accept certain rates of death, crime, medical imperfection and more to have a functioning society, we will have to come to terms that algorithmic moderation will also go wrong and we will have to pay for it, with $$$, one way or another. It will never be perfect, because what is perfect is different to different groups.
I don't think an appropriate reaction to an instance of murder, robbery or medical malpractice in our society is "well, as a society, we learned to accept certain rates of death, crime, medical imperfection, so move along, nothing to talk about here". That doesn't look like a society I am familiar with.
It is the society you are familiar with; it's just that everyone treats it as normal and as the limit of what's possible, so you are probably blind to it.
Some relatively extreme examples:
1. There isn't a cop / security person on every corner to really prevent murder and robberies. After a certain point, society decided it wasn't worth the cost to prevent murder in this expensive way. You should dig into what the murder / robbery clearance rate is in your own home town; you might be surprised!
2. Cars kill a lot of people, all the time. We would save lives if the speed limit were reduced to 10 km/h, but that would make cars not very useful. Our cars are not Volvo tanks and miss a lot of the safety testing that Volvo does above and beyond the requirements, partly because of cost.
3. We don't put highway style concrete barriers around every road boundary to prevent human death.
4. The success rate of medicine to save lives and prevent disability is accepted to be not perfect and socialized medical systems do not spend unlimited money to save X amount more of people.
5. Obesity is probably the largest killer and life reducer of humanity in the USA today, yet there isn't anywhere near a proportional response by US society to fix this like there was with COVID, smoking or 9/11.
6. I don't know if you're American, but the entire insulin price-jacking controversy kills people constantly today in America. Society has basically said that the rich pharma companies would rather make more money and kill people who cannot afford this insulin than save lives.
All of these things are way worse than the five 9s of reliability of algorithmic moderation systems, but we accept the tradeoff because sometimes the cost is worse than the cure. And sometimes for way worse and grosser reasons, as in the American insulin example.
Sometimes it is. But I do not see any reason why an auto-moderation system that does not ban "Merry Christmas" should cost immensely more than one that does. I can see why a 10 km/h speed limit would be very costly, but why is having a model that isn't broken more costly than having a broken one? Where are the costs coming from? I don't think the costs are the problem here. They spent a lot of effort on attaching various warnings to every post mentioning covid, vaccines, etc. They could spend a little on making their models not suck. They didn't. I don't think "it would cost too much" is a valid excuse - at least without demonstrating why it'd cost too much.
Somehow you find the idea that somebody could be against government coercion and still hold private individuals (and companies) responsible for their own actions contradictory?
Not only conservatives. Liberals are also upset, about vaccine misinformation, election fraud, etc.
My point is that you have to 'train' the AI models somehow. FB introduces its own 'biases' into those models. They admitted this 'mistake' only because it affected a Canadian MP; otherwise, I don't think they would have done so for you or me. Their 'discrimination' policies are too broad for an automated system.
FB exerts too much power over information.
I think the point they’re highlighting is that this is a “free market success story”. The private company did what it wanted (automated moderation), and ultimately the removed post got reinstated anyways.
Meaning it’s hypocritical of conservatives to be offended by this. Liberals are less for unregulated free markets, so it isn’t hypocritical of liberals to be for regulation or whatever to prevent private companies from doing this or that.
There's an important nuance in between though, such as defending freedom of speech while still objecting to what someone says.
It might be hypocrisy if the MP calls for a govt intervention (assuming he was some kind of free market advocate), but it was the poster here calling for breakup.
The “free market” we support really is competition. Competition drives invention and progress.
With FB, they have no direct competition and thus can control the free market of ideas.
That is the problem.
In this case, the idea of wishing a merry Christmas? Where does that end, do you think?
1) If they seriously don't like it, they can change it (Gab, Parler, Truth Social).
2) When people do complain, they negotiate directly with the person they are complaining to instead of going via the government.
A non-free market would be much worse for the conservatives because the Silicon Valley standards would get enshrined in law and a competing platform would be taken out with lawsuits for allowing different opinions.
The interesting thing is that when attempts are made at alternatives, we have people doing everything in their power to stop the alternatives from existing.
Look at Parler for example: They did not create a secure website; that is on them. Then activists started targeting them and coordinated to get as much material off their site as they could. The activists used Twitter to do this: https://media.cybernews.com/2021/01/crash-override2.jpg
AWS then stopped providing service to Parler due to the content they were hosting, exposed by this action. Well within their rights! But they excuse the same sort of content hosted by Twitter....
At some point people are looking at this and saying it is not a free market but a "Good Ole Boy's Club" and something needs to be done.
Personally, I think it was stupid for Twitter to ban people based on the message they send. But that is because I would like to be able to see what everyone is saying, not a small subset of people that uphold the "right" ideas.
The saying is that "Democracy dies in darkness"; We seem hellbent on throwing shade over every idea, opinion and inconvenient fact that comes along. We are bringing the night.
> AWS then stopped providing service to Parler due to the content they were hosting, exposed by this action. Well within their rights! But they excuse the same sort of content hosted by Twitter....
> 1) If they seriously don't like it, they can change it (Gab, Parler, Truth Social).
No they can't. Remember how all the major companies coordinated to shut down conservative sites they scapegoated for the Trump riot (which was actually organized mostly on Facebook)?
That was a dirty trick back at the time, but the free market seems to be working here. It isn't the "Nice Market" or the "Friends Market". It is a Free Market.
Even if we take it as read that that's hypocritical (and plenty of Conservatives are not in favor of free markets) where are the Liberals railing against the new public square being owned by unaccountable private entities?
Neither of these cases is a slam-dunk, but I'm a bit tired of people saying that the fact that a website is private property necessarily means the owners have absolute control, as if there were no possible objection to that stance.
Facebook sucks and I never use them. That being said, Facebook moderators who manually review reports are routinely traumatized and diagnosed with PTSD, and Facebook catches fire for that.
So I don't see how this can go both ways. How can we ask Facebook to automate this system and flame them for it not being thorough enough, and then simultaneously flame them for implementing the thorough system of human review? As much as the company sucks, this seems like a no win scenario that isn't fair even to a reasonable company.
Sounds like an unsustainable business model. Why should we care? Set the rules / laws that are good for society at large and let them figure out how to make a profit or fail IMO.
When it comes to big tech I keep seeing people try to solve their problems for them. We should just legislate. If they can’t make it work someone else will fill the gaps.
Might there be an option of a 'put money down, demand a review before arbitration, get your money back if you win' solution?
The money should filter out spammers, and the price should cover the actual cost of defending against DoS.
The biggest issue would be that bad actors could get insight into how Facebook detects. But maybe more openness here would be good!
I don't think this should be automated, but I also think there aren't enough resources provided for moderation. It's a thankless job that requires supporting resources (like therapy). However, it's treated purely as a cost-center to optimize.
This is one instance of someone with a clear political affiliation; the question is whether it would happen to someone on the other side, or with little discernable political affiliation.
I wouldn't cite this as an example of "oppression", but it's certainly a prominent egg on Facebook's collective face. With their resources, one would think they could make a model do better than label "Merry Christmas" as a "discriminatory message". Unless they just don't care - and then the conservative Christian minority has a valid basis for complaints.
Maybe the FB algorithm takes past posts into account? His account has probably been a fountain of misinformation and hate, and now he sends one innocuous message and is surprised it gets blocked. Yawn.
Funny thing about the plural first person pronouns (we/us/our): it can mean several different things:
1. Me and them
2. Me and you
3. Me and you and them
Your comment makes sense if GP meant 2 or 3, but if GP meant 1 then you're each saying the same thing (well, if you'll let your version be a plural second person, anyhow).
Some languages do have distinct pronouns for these; the feature is called "clusivity". Unfortunately, Indo-European languages lack it, though outside the IE family it seems to be fairly common.
If I spelled Muhammad as "Moohatmed" on purpose to indicate my dislike of Muslims, I'd (correctly) be called Islamophobic. So stop being Christianphobic.
That's not what Christians believe. I've had Muslim friends who prayed for me when I had tough times, and I didn't complain, because I didn't practice their religion. It's the thought that counts. I think you're focusing on the wording instead of the fact that somebody is wishing happiness on you, probably because of some existing resentment of Christians.
On this day, Thor's Day, I wish everyone here a "Merry Christmas" celebrating the Yule season. How appropriate to have our saturnalia on Saturn's Day this year!
Edit: it looks like your account has mostly been doing that. We ban such accounts, so if you would please review the guidelines and fix this, that would be good.
First of all, here is a selection of the Findings section:
"(3) The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity."
...
"(5) Increasingly Americans are relying on interactive media for a variety of political, educational, cultural, and entertainment services."
Obviously the intention of the law is to preserve diversity of political and cultural discourse, not to eliminate entire points of view, which is what is at issue here.
Next, here is the pertinent section of the law that I feel covers the matter:
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).[1]
In (c)(1) above it's important to define "interactive computer service":
> (2) Interactive computer service The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.
While it's true that at their basic foundations one could call Facebook an “interactive computer service” it has become SO MUCH more than that. It controls massive quantities of communication and the dissemination of editorials and diverse opinion. They can't hide behind this one vague definition with a straight face.
Just as a news organization would be held in contempt of the public interest for selectively editing a recording of someone to make them appear to say things that they didn't say, it's equally unethical to call oneself a provider of an "interactive computer service" while selectively editing the flow of diverse viewpoints and opinion to essentially accomplish the same cultural effect of selectively editing a recording.
Now, in (c)(2)(A) the "interactive computer service" is allowed to take steps to filter at its discretion what it deems "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." The last bit, "otherwise objectionable," is essentially a blank check and really should not have been in the law. The US flag may in some circles be considered "objectionable". Should this give Facebook the right to censor patriotism to the US flag as it may be "objectionable"?
The problem is that "objectionable" is based on opinion where the other criteria are more closely definable.
I am all for blocking some speech. I just believe that, for example, incitement of violence means that one tells another to hurt another explicitly. Not implicitly.
I cannot recall any misinformation campaign as spectacularly successful as the one that has convinced many Americans that the GP post's publisher/platform distinction is actually the law. And they don't seem to have had any special trick to it, they just repeat the lie enough times that people believe it.
It's impressive considering how readable the text is. I'm especially impressed with Ted Cruz, who is a lawyer and literally works in the same building as the guy who wrote it, but still manages to spend his days misrepresenting it.
>And they don't seem to have had any special trick to it, they just repeat the lie enough times that people believe it.
That is the special trick. It's a technique often attributed to Nazi propaganda minister Joseph Goebbels.
You don't need to convince people to believe a lie, despite arguments about "truth being the best disinfectant" and "good speech being the solution to bad speech," because belief is not a process of logical deduction, rather it's an adaptation to stimulus. Surround a person in an environment of coherent lies long enough and their minds will simply adapt to it.
Absolutely! Indeed, the protection was crafted specifically for platforms to engage in moderation without being held liable for the content they host. The whole point is to encourage moderation.
But section 230 is not a blanket immunity for anything a platform may wish to do with their content. If the actions of the social media platforms fall outside of section 230 protections (and I think such a thing could be argued) it is on the following grounds:
1. One of the stated purposes of this legislation was to maximize user control over what they view (230[b][3]). Social media censorship and algorithmic feeds take away that power.
2. Protection from civil liability extends only to those actions taken in "good faith" (230[c][2][A]). I'm not fully prepared to outline what I think is meant by good faith here, but I think it is fair to say that not all moderation actions we have seen are "in good faith".
Outside of section 230, I also think that platforms which employ "fact-checkers" should be open to defamation suits if they make false statements of fact. Recently a court ruled that these fact-checks were "protected opinion", which is utterly stupid.
> If the platform picks and chooses what goes on its platform then it's a publisher, not a platform. Thus it loses it's Section 230 immunity.
This isn't true and isn't part of Section 230.
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
> No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
The discretionary removal of content is explicitly protected, exactly opposite of what you are claiming. Please stop spreading this easily refuted false information.
> If the platform picks and chooses what goes on its platform then it's a publisher, not a platform.
As far as I'm aware, picking and choosing, moderation, and editorial decisions, don't waive section 230 immunity. Although that is a US law, and the MP in question is Canadian, so I'm not sure how the jurisdictional issues would be decided.
That's a great idea! Let's ban all moderation! What could go wrong?
I, for one, am so tired of Google CENSORING these so-called "spam" emails! How DARE they declare themselves the arbiters of what is and isn't spam! It's an OUTRAGEOUS ABUSE of POWER!
Ditto for Twitter, where "end all moderation" seems to be working fine. (I am in fact writing that with a straight face, because the day-to-day experience of being on Twitter is very different from what you hear in the headlines.)
Of course, being on a highly moderated site, it's unlikely that a viewpoint like this will seem reasonable to most readers. But it can work.
The problem happens when (like here) the goal is to show people specific things, rather than people choosing their own things. Facebook, I think, doesn't really want people to choose -- the algorithm is basically why their business prints money, and the algorithm has its own optimization agenda.
I know everyone is pointing out that's not what it says, but there is a solution hiding in plain sight: give us the moderation tools that Facebook/Twitter etc. have.
For example, Twitter routinely announces things like "we cleared out 50,000 fake accounts". Why can't I see the little score that an account has telling me if it's likely to be a bot, or a troll? Twitter has this info, why can't I?
Why can't I filter out all people who appear to be X, Y, or Z? It could be anything that the social network's moderators can see. Why can't I have proper search? Why can't I order my feed according to how I wish it to be? I bet moderators can get this kind of thing when they're looking at posts.
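A minimal sketch of what such user-side tools could look like, assuming platforms exposed a per-account bot-likelihood score. The field names, threshold, and sample data here are all hypothetical; no real Facebook or Twitter API is implied.

```python
# Hypothetical user-controlled feed filtering: if platforms exposed the
# signals their moderators already see (e.g. a bot-likelihood score),
# users could filter and order their own feeds instead of accepting
# the platform's ranking. All fields and values below are invented.

feed = [
    {"author": "a", "text": "interesting post", "bot_score": 0.05, "ts": 3},
    {"author": "b", "text": "BUY CRYPTO NOW",   "bot_score": 0.97, "ts": 2},
    {"author": "c", "text": "old but relevant", "bot_score": 0.10, "ts": 1},
]

def my_feed(posts, max_bot_score=0.5, newest_first=True):
    """Drop likely bots, then order the feed the way *I* want."""
    kept = [p for p in posts if p["bot_score"] <= max_bot_score]
    return sorted(kept, key=lambda p: p["ts"], reverse=newest_first)

for post in my_feed(feed):
    print(post["author"], post["text"])
```

The point of the sketch is that the filtering and ordering logic is trivial once the signal exists; the barrier is that platforms keep scores like this internal rather than exposing them to users.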
The answer to all of these things in the media is usually to limit speech because someone was nasty to someone else. Just let us have the tools, give power to the people instead of taking it away and treating us all like children.
The article says Facebook considers this a mistake in their censorship system; they aren't ok with banning Christmas messages. Although keeping an eye on them for signs of overt racism and religious bigotry is a good idea given the amount of power they have and their willingness to exercise it over political topics.
It's easy to blame the algo or some low level staffer when a platform gets called out publicly for blatant censorship.
The ex-CEO of Twitter, Hipster Rasputin whatshisname, admitted that censoring the Biden laptop story from the NY Post in October 2020, and shutting down any account that shared the story, was a "mistake", yet Twitter only reinstated the NY Post account two weeks after saying so. Of course he blamed the Post, because they "only" needed to delete the tweet with the link to the story in order to get the account reinstated.
If the ban was a "mistake" then why do they need to delete the tweet?
Twitter engages in political censorship on behalf of the left wing. But that is different from what is discussed in this article and this thread. Christmas enjoys broadly bipartisan support.
With or without section 230, platforms that don't make any editorial decisions aren't liable.
Section 230 was specifically crafted to allowed platforms to make editorial decisions while retaining good faith protections.
Section 230 doesn't actually make any distinction between a platform and a publisher. It makes a distinction between first and third party content. And it protects sites from 3rd party content liability, but not first party. So a "publisher" like the NYT can be liable for an article, but not for the comments on that article. And a "platform" like Facebook or Twitter can be liable for their own official posts, but not for user's replies.
That’s never what section 230 said, or was intended to do. In fact that’s explicitly what it allows them to do, and without this protection the moderation decisions would be infinitely worse because they would become liable.
Repeating FUD misinformation doesn’t make it true.
> There was a solution to all of this created back in the '90s. It's called Section 230. It says that platforms that host content aren't legally responsible for what people post on it. If the platform picks and chooses what goes on its platform then it's a publisher, not a platform. Thus it loses it's Section 230 immunity.
Wow, that would be way too clean and beautiful a solution to be an option nowadays.
Not really. It would crush things like small forums and startups -- if you can't delete porn, spam, off-topic posts, etc. without a full-time moderation crew, then only megacorps can operate websites with user-contributed content.
Better to break them up to address the monopsony problems, and maintain the full strength of Section 230.
It's the reasonable solution to dealing with big tech's recent censorship abuses (yes, censorship; they operate as oligopolies, which means they possess censorship power).
Congress and/or the DOJ refuse to go after big tech on this front; they refuse to hold them to account under Section 230. There is a contingent of the political elite that is shielding them, for obvious reasons.
I think the problem lies in the vague definition of "obscene" material. Nobody would have a problem blocking gratuitous nudity/violence from making it on their platform. But that's a very wide distinction from political/religious messaging.
I’m annoyed that Apple requires Telegram to block sexual content. I perceive sex as a generally positive thing, a common source of joy.
I strongly dislike realistic violence, to the extent that the poster adverts for the Saw films are things I wish I had not seen. I think the frequency of it in popular entertainment reflects badly on the human condition.
Nevertheless, I have recently seen evidence suggesting that violent media reduces real-life violence, so despite my visceral reaction I wouldn't impose general censorship of violence just to satisfy my personal preferences: https://edition.cnn.com/2019/08/05/health/video-games-violen...
That said, simply because everyone is different, I think there is a place in the future for automated filtering on a personal basis. Perhaps a social media plug-in that stops me seeing violence, while somebody else has a different one that stops them seeing nudity — and not the built-in ones! When Twitter started surfacing "likes" like retweets, they kept showing me BDSM[1] that someone else had liked.
[1] For me, S&M in particular pattern matches to “violence” rather than “sex”, but (unlike the mere existence of gore films) that doesn’t mean I think it reflects badly on anyone. Like boxing, I don’t need to grok it: I trust the people-I-perceive-as-victims saying they enjoy it.
Spam detection is quite distinct from content moderation. Your email provider most likely does have some form of spam detection, but probably not much content moderation.
Also, I think most of us are not that puritan, and don't think that platforms should ban all porn.
I thought it was the middle class being anti working class myself. It conjures images of Guardian readers at a dinner party saying things like "Oh, aren't we British just so awful!" when someone brings up British tourists in Magaluf or something.
You'd never hear that from anyone not absolutely riddled with a kind of middle classness that looks like self-loathing but is really loathing anyone who likes being British. Rinse and repeat with deriding "whiteness", or whatever else is in fashion.
>You'd never hear that from anyone not absolutely riddled with a kind of middle classness that looks like self-loathing but is really loathing anyone who likes being British.
Two centuries ago, British politician George Canning had a good description of such people:
> It says that platforms that host content aren't legally responsible for what people post on it.
This works, but it's only halfway there. If platforms are to be uncensored, then they shouldn't allow truly anonymous participation. Publicly anonymous is fine, so long as users can be identified if a court orders it.
Zuckerberg's "excuse" is to blame his choices on "automated systems".
Occasionally he takes personal responsibility. Maybe he will admit "I made a mistake" a few times. But the solution is never to remove the person responsible. The Board cannot remove Zuckerberg no matter what decisions he makes. The response to the public is "We just need to fix the algorithm." Misjudgment is always portrayed as a computer issue, not a human one.
Google/Facebook want to profit from news without having to hire editors and journalists. Even worse, they want "creators" to produce the content up front for free and then, at best, hand them a cut.
If you're not disputing the story itself - then what exactly is the problem? If you don't like the rest of the content on the site, nobody forces you to read it - and unlike many other sites, this one is pretty clean and doesn't bother the reader with a ton of irrelevant ads. I think refusing to read any site that is not part of your ideological bubble is a big mistake. Everybody has their opinions, if they're not deliberately lying to you, it's completely fine to listen to them and make your own conclusions, even if you end up disagreeing with their opinions.
What exactly is the problem with this site? Political flamewar is offtopic, and your comment reads as “this is a conservative news outlet, so watch out.”
I was curious and clicked around. The site recommends using DDG as a search engine. There was a time that this wasn’t a fringe idea.
Either the story is true and fairly represented, or it’s false or misleading. If it’s the latter, it’s best to point it out.
> Now supporting Facebook, Twitter and Youtube, behemoths which represent the excess of capitalism is considered hip, cool and progressive.
Isn't that great though? If it's now hip and cool, even the dumb/vain people will do the right thing (albeit for the wrong reasons) which should ultimately strengthen the alternatives to Facebook, Twitter and YouTube.
it appears to have a large number of articles on it about how terrible "vaccine passports" are and how they're an outrageous infringement on civil rights, so it's verging into outright coronavirus conspiracy stuff that contradicts factual science and reality.
as "alternative" social networks, linked from an index page off its home page, it also recommends well known far-right/alt-right/fascist hangouts gab, parler and mewe.
My local scientists said masks did nothing (then licked their lips to turn a page). I kept on wearing my N95 mask.
Then I was told that the vent made the N95 ineffective. Obviously wrong: I was certainly protected, and as source control, compared to folks wearing thin mesh or surgical masks (usually with the nose out), it was no doubt far more reliable. Scientists wrong again.
Then I was told we couldn't go to three mile beach with my children (deserted at 9AM when we went, even during non-covid times). It had an onshore ocean breeze and huge quantities of fresh air (and sun). Supposedly it was safer to crowd into a grocery store with others. Count me skeptical again.
Then we are told that only the CDC can test for COVID. Count me skeptical.
Then we are told that antibody tests need to be paired with an iPhone, plus a paid medical consult, to use at home. Seriously? This great a threat, and you can't do a test like a pregnancy strip at home? Thankfully they are backing off this now - scientists wrong again.
We are told that inflation is transitory, it's just misinformation that it might stick around a bit.
After the administration goes out of its way to block fossil fuel production in the US, they announce an investigation of oil companies for failing to keep prices low. Again, these are the "experts" in markets at the FTC.
If these scientists demanding vaccine passports spent even a FRACTION of the trillions put into this COVID (now Omicron) stuff between lockdowns and interventions into something as simple as healthy diet, exercise and socialization - what would be the outcomes?
I'm vaxxed and boosted, but the blind obedience to these "scientists" and "experts", especially the non-hard-science epidemiologists, is absolutely silly.
I would not count this as their shining moment. Half the time the politicians ordering us off the beaches are squeezed in rooms with their staff and lobbyists hatching another political effort of one sort or another.
The word "misinformation" has become a near joke.
> I was told that the vent made the N95 ineffective. Obviously wrong, I was certainly protected...
You were protected, but the people around you weren't.
The vent is a one-way valve. It blocks incoming air, so air you inhale has to filter through the mask material. It allows exhaled air to go out freely, with no filtering.
So a vented N95 mask protects the wearer from external particles, but it does not protect others from particles exhaled by the wearer.
Vented N95 masks are meant for places like construction sites that may have dust and toxic particles in the air. They protect you when you breathe in, but let you breathe out freely.
They are not suited for preventing the spread of disease.
yes, because the spread of coronavirus by unvaccinated persons in groups in indoor spaces is a scientific fact. so are the hospitalization and death rates for unvaccinated vs vaccinated persons, and further the severity of cases and per-capita death rate among vaccinated persons hospitalized with breakthrough covid19 vs unvaccinated persons in hospital.
Why?
Something can be good for a certain outcome, e.g. reducing infection and hospitalization rates, and yet infringe on civil rights. Those are orthogonal concerns.
This, absolutely. There are many who are fully vaxxed and happy to promote the vaccines, but oppose the passports and mandates. It's not about the short-term gains but the long-term effects of authoritarianism.
The spread of coronavirus by vaccinated persons in groups in indoor spaces is a scientific fact too. So what? The debate is not about that; it's about whether prescribing a certain medical procedure by state coercion, and limiting the civil rights of persons who choose not to undergo that procedure, is proper policy, and whether the laws of the land allow the government to do that.
It's a legitimate debate which has nothing to do with "conspiracies": you don't need any conspiracy to find reasonable arguments on both sides. If you don't like the side that particular site is on and disagree with their arguments, fine, but that doesn't mean they are lying about other factual matters (they are not). There's a huge distance between disagreeing and refusing even to listen to the other side's arguments, and worse still, refusing to listen to anything from somebody who once held an opinion you disagree with. That is no way for a society to function.
the logical conclusion would be that effective existence, enforcement and peoples' cooperation with vaccine passports will prevent unvaccinated persons from gathering in groups indoors and further spreading covid19, so the relevance is quite clear.
Putting everybody in solitary confinement would also prevent persons from gathering in groups and further spreading covid19. Obviously, nobody (so far) is advocating this. So you can appreciate the difference between measuring a policy only on single metric of efficiency, and comprehensively weighting overall costs and benefits. Not every efficient policy is legal, moral or acceptable.
The actual conclusion from European countries such as the Netherlands, Finland, etc. is that vaccine passports are not sufficient to curb the spread or reduce the load on the healthcare system. Back to lockdowns it is.
The site definitely looks quite sketchy, but this article is fine and perfectly reasonable, so I don't know that we really need to comment on the website itself. If the website were adding unnecessary spin then maybe it'd be of note, but it doesn't.
Do you have an objective source that confirms that truth is actually a 50/50 ratio between message and framing? Where did you hear that from?
The truth of a statement is somewhat subjective but it's not helpful to be pedantic.
I understand that authoritative media sources often frame stories in abhorrent ways and I have tuned out the media in general as a result. However, if I hear a plausible, factual statement from even a source I abhor, I can filter through the framing to find the truth in the statement. In today's world, I think it's an essential skill.
Did Facebook accidentally ban an innocuous message? Seems plausible. Is Facebook known to engage in arbitrary flagging of content? Yes. Does this speak to the problematic nature of algorithmic fact-checking and moderation, a general theme of our era? Yes. Then it's worth discussing and it doesn't matter what source brought it to our attention.
Back up and ask why do you share anything from any source?
If you have something to say, then, why don't you just say it yourself, instead of referencing someone else saying it?
3rd party, perhaps even recognized or respected, corroboration.
The other source somehow is more authoritative and meaningful than the same message directly from yourself. This idea matters and should be considered because X says it.
But if that source says both sensible and nonsensical things, espouses both virtuous and venal ideals, then the source has no value as a source of authority or credence.
Rig up a speak&spell to randomly spit out statements. Once in a while, it will say something truthful.
Now, so what? Why would you ever cite that speak&spell, no matter what it said, ever, on anything?
And that's just a random neutral source, not an opinionated one that says actively and intentionally harmful things some % of the time.
Even if the speak&spell says something you agree with and want to repeat, you still go try to find some other source who says the same thing, and unlike the speak&spell, has given anyone any reason to respect.
At the very very least, you acknowledge the weak argument and say "I know it's X saying this but..."
You can find a link from his main website[0] to his instagram[1] and twitter[2] posts mentioning the issue, but given the context they're perhaps bad choices
That said, I don't think there are many non-partisan sources left. The shift from paid physical issues limited by the physical page count to limitless online space with free access killed it. Each source now tries to generate as much low-quality clickbait as possible, and predictably picks topics that would resonate well with their audience's confirmation bias, turning a blind eye on anything outside the picture.
This applies to both sides of the political spectrum. "Left" sources won't publish anything criticizing censorship, "right" sources won't say anything against Trump. If you want an objective picture, you need to piece it yourself from both sides.
Without delving into this hot mess of a conversation, I'd like to point out that linking to a google search for political news is completely absurd. Google shows people what they want to see, and what people near them have clicked on. I don't use google very often so most of the hits are his official stuff; pretty bland. If you want to show folks whatever dirt you're trying to show (and, I feel the need to say, you're deep in flamebait here and I don't recommend it) you need to post a link to the actual content.
The 'censorship' narrative is wrong and tired. FB/Meta has no obligation to carry your speech. Thank God, FB/Meta is not the government and can administer their privately owned systems any way they want and you do not have to like that. You do not have to use FB/Meta.
To be pedantic, Facebook definitely does have the capability of censorship:
> Censorship, the suppression of words, images, or ideas that are "offensive," happens whenever some people succeed in imposing their personal political or moral values on others. Censorship can be carried out by the government as well as private pressure groups. Censorship by the government is unconstitutional.[1]
Facebook has enough power over communication at large to effectively suppress speech should they choose. Censorship does not have to be perpetrated by governments. Hence the terms self-censorship and corporate-censorship[2].
What people really mean when they talk about censorship is first amendment rights. Here, you are correct. The first amendment protects against government censorship only - nothing Facebook does could infringe your first amendment rights.
> The first amendment protects against government censorship only - nothing Facebook does could infringe your first amendment rights.
This is true, but remember that there's a lot of terrible things that the Constitution doesn't prohibit, e.g., murder. The fact that something isn't Constitutionally prohibited doesn't mean that there shouldn't be a law against it.
I was making a very pedantic argument. It can simultaneously be true that Facebook is guilty of censorship, and they are in the legal / moral right by doing so. I have complex feelings on the topic, but that most accurately describes my position.
I'm not saying Facebook is infringing my rights (I explicitly meant to say the opposite). But I do feel strongly that they are guilty of censorship, in the literal sense.
If you think Facebook is supposed to have rights, then are you also okay with the Citizens United decision? The answer to whether companies should have the same rights as people should just be "yes" or "no", not "well, yes when it benefits me but no when it doesn't".
Using your same logic, should Starbucks be required to let you stand in their store and say racist things? What if the manager thought you said something racist and kicked you out when you didn't actually say anything racist, is Starbucks guilty of censorship?
Would you also defend Facebook's right to delete content related to Black Lives Matter, and argue that any discussion of such is a waste of time because they're a private corporation and can do what they want?
Facebook enjoys vast legal protections that (arguably) preclude them from this type of censorship. They are constantly fighting to make sure that (arguably) doesn't start to turn into something more concrete.
> What people really mean when they talk about censorship is first amendment rights.
I don't agree. The snarky and smug "nuh uh, cuz it wasn't the government" responses [often maliciously] assume that complaints about censorship are talking about the first amendment, but I think people are generally aware that censorship can come from corporations too and their concerns are not limited to apparent violations of the first amendment specifically but are concerned with the more general principle of free speech (which predates the first amendment.)
I’m curious what happens when a number of public speech monopolies align their interests with a certain political group and implicitly blend with the government. I guess this theoretical situation is totally legal and doesn’t break any amendments, but it kind of smells strange.
>Censorship is the suppression of speech, public communication, or other information. This may be done on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient". Censorship can be conducted by governments, private institutions and other controlling bodies. Governments and private organizations may engage in censorship.
There is nothing that presupposes that the entity has the obligation to carry your message. Just that the entity removes your message on the grounds that they do not like the material.
This comes up every time someone mentions that X (a private party) censors something. Yes, X is not the government. Yes, it is still censorship. If undertaken by the entity on its own, it may not be illegal. If the entity had government influence in its decision, then it may be.
There are some here who argue that the status quo is fine, and others who argue that we need to change our laws and regulations to protect individuals from more private-enterprise censorship. That is where the debate is, not whether or not censorship exists.
Yeah, but this narrative is just as wrong and tired as the 'censorship' one. Companies are not obligated to carry your speech on their platforms, but we can and should be able to have a conversation about whether or not their methods of moderation are sound / reasonable.
No, you're wrong. Section 230 distinguishes a publisher from a platform based on what they host on their service. If they pick and choose what goes on their service, they are a publisher and lose their Section 230 immunity. So it's not true that they can "do whatever they want."
And they do engage in censorship. See "Laptop from Hell" by Miranda Devine
As far as whether or not these companies engage in censorship, I agree that they certainly do. It's just that it's not illegal/unconstitutional, as per the 'censorship narrative' alluded to.
Platform censorship is barely legal per my view of section 230 which I have studied thoroughly.
It's also highly immoral for them to craft their own narrative out of user-generated content, which was never the intention of 230. The crafting of a narrative out of available information, and the presentation thereof, is called opinion, which is the province of publishers.
It is censorship, though. Not government censorship, but don't you think that one day, once these non-government censorship systems are in place, a corrupt or evil government could abuse them and make them de facto government censorship systems? We're already partway down that slope with things like CSAM and copyright monitoring (although the former is especially heinous, so censorship of that type of content is desired).
Not every provider of user-generated content should have to abide by these rules, because the damage that can be done by a relatively small forum or website is usually minimal (although see sites like Kiwi Farms that may not technically directly encourage harassment but where it is nonetheless often perpetrated by sadistic individuals who participate in their forums), whereas the damage that can be done by a Facebook or a Google is much, much greater.
There are laws that do impose obligations on some privately owned systems regarding speech they must or must not carry. Being "privately owned" is not an exception to these laws in the US, let alone other jurisdictions.
Regardless, rightful censorship is still accurately described as such.
And, this doesn't seem to be a story about censorship, IMO, because it was clearly accidental. It's a story about bad ML.
Nobody is saying what Facebook did is illegal. We are simply saying that this is an obvious case of bad censorship, in a long line of Facebook being a subpar censor, and that you should re-evaluate whether you as a person want to do business with Facebook, because they will simply take down your posts wrongfully, with no real appeals process unless you are famous.
I don't use Facebook, which is maybe why I don't care how they manage their service, and also why it seems clear to me that there are alternatives: other social media platforms, other web sites, I could make my own web site, hell, I could send letters in the mail. Suffice to say, a single message being deleted on Facebook is not censorship of the individual that had this issue.
Why not? I believe it should, given its size. The phone company has an obligation to carry my speech. Net Neutrality would require ISPs to carry speech. There are a whole host of things the government can compel "privately owned" systems/companies to do.
In these kinds of discussions, scale matters. There is a big difference between compelling a company with a billion+ users to act as a platform and forcing a small business to do the same.
Starting about a month ago FB changed all their algorithms and a lot of independent artists are seeing 50%+ reductions in engagement across Instagram and Facebook.
I have artistic hobby pursuits and can confirm. Many times I'll post a piece of work and my friends won't even see it in their feeds because FB is gatekeeping content between artists and voluntary followers now, in favor of "influencer" content from cash-cow accounts that don't produce any art of their own.
It's sickening, to say the least. Yes, it's legal as of now, but we should really be having a discussion about whether it should be okay to have a non-transparent gatekeeping algorithm between people who mutually chose to follow each other.
Personally, I'd prefer that if someone voluntarily follows someone, the platform should have a legal obligation not to hinder any communication between them. That person chose to subscribe, and they can choose to unsubscribe at any time. What Facebook is doing is tantamount to the USPS paging through your magazine subscriptions and trashing some of them as the mailman feels that day.
It's crappy, clearly, but I guess I am skeptical that it'd be easy to fix with regulation; at the very least it'd take some very careful legislating to rein in. I suspect it would founder on pretty much the same problem we started with: FB's lawyers would say "we need to make money off this circus, Your Honor, so of course we stacked the deck; if these rubes don't like it, they can always switch to Ello, it's a free country."
Might be easier if we repeal their section 230 protections and then put the top FB/Meta brass in jail for fraud. It would never happen but imposing this kind of accountability will definitely make the power drunk tech elites think twice.
"mega-corp" is a term from dystopian cyberpunk fiction that people just started applying to real life. "Mega-corps" aren't a real thing (at least not until they become semi-autonomous states with their own currencies and private armies), nor is "mega-corp" well enough defined that one can use it as a means to remove free speech and association rights. If those rights even worked that way, which they don't. Nor should they.
You may as well just say "any company I don't like," it means practically the same thing.
In the 1960s, certain restaurants and other establishments argued that they were private, and could choose who was allowed to enter and do business there.
They were making the same mistake you are - arguing that they were not obligated to be even handed and non-discriminatory in the services they offered.
Oddly enough, your comment appears word for word in the HN guidelines:
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."
What makes you think annoyingnoob didn't read the article? The MP in the article, Mark Strahl, is quoted referring to this as "a glaring example of censorship", and annoyingnoob appears to be saying that "censorship" is not the right framework in which to criticize Facebook's actions.
One does not exclude the other. FB tried to censor him, but it turned out he wasn't what they wanted to censor, so they removed the censorship. That happened because their censorship mechanisms are highly automated and very badly trained, so they have a huge number of false positives.
The comment you replied to seems like a direct reply to the article to me, which starts with the first quote below and goes on to quote the politician saying the second one ....
> If you're tired of censorship, cancel culture, and the erosion of civil liberties subscribe to Reclaim The Net.
> this is a glaring example of censorship and overreach by tech giant companies who control so much of the online space.”
> If Facebook reversed the decision, it's not censorship.
Using a different example, I wanted to eat in a local restaurant [a privately owned company providing a public service entirely on their own terms] but I was refused entry because I am black and their "policies" [which is the reason I am given] prohibit entry to black people. I call attention to the situation by standing outside with a placard saying this restaurant treated me in a discriminatory way because I am black. The restaurant manager then comes to me and says, sorry that was a mistake caused by the 'patron classifier' [i.e. some vague entity that implies no actual person is at fault anywhere], and I can come in now.
So, do I understand correctly that you would say the restaurant is not operating any type of discriminatory policy because it reversed the decision when attention was publicly drawn to it?
To me, that reasoning does not seem to make sense. Being embarrassed into changing a decision does not stop your original decision being wrong in the first place. It simply means you want to stop attention being drawn to it and hope I will stop making a fuss if you make an exception for me on this occasion. You can continue making that same [wrong] decision for everyone else that is unable to complain and draw attention to their plight.