First, it's not about private files, it's about distributing content.
Google isn't spying on your private files, but it does scan them when you share them publicly. E.g. keep all the pirated movies you want on your Drive, and even give private access to friends, but the moment you make them publicly viewable Google scans them and limits access accordingly. So no, this doesn't apply to your private diary or privately shared documents.
And second, to those who claim absolute free speech with no limits -- notice that the two main categories here are related to democracy and health. All our legal protections ultimately depend on a democratic foundation -- undo that with misinformation and you don't have anything anymore. Similarly, your rights don't matter much if you're dead. Companies aren't allowed to advertise rat poison as medicine and neither are you.
There's something fundamentally flawed about the idea that censorship in the name of preventing misinformation is protecting the foundation of democracy.
You cannot have true democracy if people cannot disagree with their governments; they must be able to disagree with any truth or opinion such a government might consider self-evident, just on the off chance they're right.
I should note at this point that Google doesn't directly claim to go quite that far in preventing misinformation; they mostly claim to disallow things that could harm the democratic process (e.g. telling people to vote at the wrong place, or that their candidate has died). At least that kind of information is usually agreed upon (if it isn't, there are bigger problems than mere misinformation), though they seem to try to include claims of voter fraud, which is a bit dangerous.
Imagine if Britain had this same technology when the USA was founded... It of course would have quickly cracked down on communications and it would have done so in the name of "peace" and "what's right"...
This idea of thinking critically of a government, even believing that perhaps the government as it stands today is not the government "of and for the people", sure could be interpreted as anti-democracy by that same corrupt government... And maybe that belief isn't correct, but who is the government to say that we can or cannot challenge them in public discourse, as is supposedly protected under the first amendment?
This is indeed an insanely slippery slope, and people willing to trade their freedoms because they think it's for the ultimate good are, I think, really making a mistake... it's not difficult to understand that this is one of the first steps of an actual, fundamentally corrupt government... This is easily open to abuse and vast interpretation.
If the American revolution had happened 100 years later it would have looked like the Boer war. That's the war where the British used barbed wire and the machine gun to invent the concentration camp.
> If the American revolution had happened 100 years later it would have looked like the Boer war.
Smart. It would have. That is indeed how you fight an insurgency.
Not sure how new it really is though. Compare and contrast to the Harrying of the North.
> to invent the concentration camp
I hear they borrowed the idea from the Spanish in Cuba.
Compared to alternatives, it was supposed to be more humane: "Get the civilians out of harm's way (and keep them from aiding the guerillas without killing them)".
It wasn't until the Nazi extermination camps of WW2 that the phrase "concentration camp" took on such negative valences.
Granted people did die in concentration camps, but the deaths were of things like cholera -- the same diseases that would run through army camps. Arguably we still have them today, under the new name "refugee camp".
> the Boer war
It shouldn't be left out that the British had a decent case for moral high ground in that war. The Boers were religious hyper-conservatives who believed God had given them a right to Black slaves, whereas the British had banned slavery throughout the Empire and were going to some expense to enforce this.
The bigger problem that I have with the idea that misinformation kills democracy is that it seems to suggest that misinformation is some new phenomenon or that the average person has been well informed throughout the history of western democracy.
Democracy thrived before the printing press. Democracy survived the invention of the printing press, which was mostly in the hands of magnates who could afford it. Democracy survived the invention of television and radio, which was (and still is) in the hands of a select few magnates. We build up terms like "journalistic integrity" and look at the past with rose colored glasses as if these mediums delivered pure objective truth.
If anything, what we're seeing with the internet is a more true democracy with a wider range of opinions, less controlled by small groups of plutocrats. If you don't like to see the death of that plutocracy, or you're happy to see a new group of benevolent plutocrats come in to retake control of the narrative, I hate to be the one to tell you this, but you don't really like democracy.
> suggest that misinformation is some new phenomenon
Misinformation in this shape and form is a new phenomenon. And it is not just the scale:
- the number of agents that push their version of misinformation is at least an order of magnitude higher than ever, depending on the particular topic. So-called culture wars have so many different sides.
- technology not only scales misinformation, it also accelerates it. The objective function of "increased engagement" meshes very well with this. Hard-to-grok, full-fidelity facts don't get shared or recommended as much as rage-baiting or bias-confirming material.
- technology can on-the-fly piece together material to conform to whatever bullshit you want to hear, I want to hear, or the other guy wants to hear. As it is optimized to increase engagement, it can efficiently generate personalized micro-narratives, which are ultimately a reflection of our personal biases.
The problem is that it gets harder and harder for these narratives to converge. More on that below.
> If anything, what we're seeing with the internet is a more true democracy with a wider range of opinions, less controlled by small groups of plutocrats
As mentioned, original thoughts don't have the same propagation speed or reach as junk-infotainment, and you're just as subject to the narrative-shaping powers of those "plutocrats" as ever. They just blend in better.
But the larger issue is that you can't equate mere plurality with a functioning democracy. Ultimately there is a single reality, and even though we are in divergent positions due to having different entry points and framings, we should be converging, however little, in our narratives and understanding of that reality as time progresses.
But the opposite seems to be happening: we are getting dumber at scale, stuff makes less sense, institutional mistrust is at an all-time high. I am not putting this all on tech, but it certainly pours fuel on the fire of the meaning-making crisis.
I wouldn't take it for granted that we could survive this without it creating a larger crisis first.
I want to point out that the biggest reason for this is that those institutions are worthy of mistrust; people just weren't aware of the need to mistrust them. The news media, for instance, has been gradually getting worse due to having to compete with internet sources, and it was plenty shit back in the 60s and before...
Then there are things like the replication crisis damaging our trust in science-as-an-institution and the mask flip-flopping damaging trust in science-as-communicated-by-prestigious-bodies.
And of course there are always departments attempting to justify their existence even when it makes everyone's lives harder for little gain (yes, I work for a megacorp, how did you guess?).
Perhaps, then, we should aim our sights at the institutions and systems that incentivize and profit from the spread of misinformation, rather than trying to treat the symptoms and censor misinformation outright. I'm always baffled in these free speech debates of the last 10-15 years why they end up reducing the landscape to a false dichotomy.
What I’m seeing is a disintegration of the narrative, with a relatively small group of disinformation plutocrats bombarding minds at scale with conflicting positions.
A compassionate view of humanity would say that humans are basically accepting. This openness can then be abused by viral misinformation. We could take the view that humans should just be self protecting and if they got duped that’s on them. But IMO that’s a depressing view of the world, and tends toward something like mutually assured social destruction in the limit. We need to protect our shared narrative.
Also personally I find the view that “democracy prevailed before, it’ll continue to prevail somehow” deeply unsatisfying. Democracy is not built into nature. It has to be proactively maintained and refreshed.
Democracy is struggling with misinformation because governments are losing credibility with citizens; this is not my problem as a regular person.
I'm sick of hearing that it's up to "people" to be fed the right information so they behave the way that works for the Government and large corporations.
Why should I trust the WHO? Why should I trust the FDA? Why would I trust Johnson and Johnson? Why should I trust Pfizer? Why doesn't the Government fund an emergency trial on Ivermectin?
Do you know that these pharmaceutical companies cannot be sued if there is a problem with my health related to the COVID-19 vaccines? Why would I trust a system like that? Why wouldn't I be skeptical and why wouldn't fringe theories appear?
I don't want information to be hidden from me to influence my beliefs; I want all the information made available to me so I can make up my own mind.
As Edward Snowden says, the worst conspiracies are in plain sight. I'm starting to think he might be right.
> Why doesn't the Government fund an emergency trial on Ivermectin?
Because there's no evidence that it would work? Nobody has explained how it's supposed to act on COVID-19, when it's a nerve poison for invertebrates.
There's not infinite time and effort available for patiently trying every theory with no sound basis, and the worst thing is that there have been small trials with poor or inconclusive results which people ignore. https://sciencebasedmedicine.org/ivermectin-is-the-new-hydro...
>Calling ivermectin the same as HCQ is a strawman approach.
Actually, this is the perfect example, as the main compound in HCQ has been proven in several studies unrelated to COVID19 to specifically prevent the 'jelly-lungs' that people with severe critical-case COVID19 died from in high numbers.
You can be skeptical about the vaccine, and fringe theories are normal and some are legitimate. It's fully normal to be skeptical because you don't have all the facts. But look at what Tucker Carlson is doing, as one example. He asserts outright falsehoods cynically. Take the wind turbine debacle in Texas: he asserted lies that frozen turbines caused the blackouts, with the implication that wind tech/greener tech is evil. Governor Abbott came on and added his authority to the disinformation. People believed it and got outraged. I watch political Twitter a lot and I can literally see how a cynical lie begins, produces outrage, and goes viral. What's your opinion on that phenomenon? It seems that has something to do with governments losing credibility.
There's the saying: "don't throw out the baby with the bathwater". What you are saying is true, but we need to hear the voices that would otherwise be unfairly silenced as well.
There's a difference between "skepticism" and raving fucking loonery, and 99.99% of what's proliferating online nowadays is the latter. Shit like that also transforms democracies into dictatorships, at least as much as "blind trust" does.
i hope you're getting paid to do so cause that's a full-time job. we have to trust some. the gatekeepers can be rife with corruption and other influences. most do not have time to validate most things, is mostly what i am saying.
there needs to be a middle ground between control and freedom, without the bullshit.
> Also personally I find the view that “democracy prevailed before, it’ll continue to prevail somehow” deeply unsatisfying.
As somebody from a Western country that has gone from democracy -> dictatorship -> democracy within the last 100 years, I couldn't agree more (I'm German). Democracy is fragile and has been under increasing threat for the last 15 years or so, unfortunately partly accelerated by social media.
If we can't agree that democracy is the foundation for freedom of speech, and that there are actors (foreign and domestic) using misinformation to erode trust in democracy, I'm bearish for the future of democracy (at least in the US).
> If we can't agree that democracy is the foundation for freedom of speech
We won't because freedom of speech is the foundation for a healthy democracy, not the other way round. Socrates was executed at the behest of a tyrannical democracy - the original democracy - for his speech.
How do you propose to achieve freedom of speech, in order to have a democracy on top? What prior-to-democracy system would lead toward that free speech? Or do you believe free speech exists by itself in nature, prior to any system?
Correct me if I'm wrong, you seem to be saying that free speech can't exist without democracy.
Firstly, free speech is not a synonym for democracy, we can clearly see there are democracies with differing levels of free speech, some with very low levels.
Secondly, most of the other systems of political organisation are antithetical to free speech. That does not mean, however, that democracy is a necessary pre-requisite for free speech. I can imagine other systems - as have others - but that they do not exist is either because they cannot or because the conditions for them to hold are not there yet.
Finally
> Or do you believe free speech exists by itself in nature, prior to any system?
I'm not a student and this is not a student debating society, please make your points in a way that an adult and professional would expect and can respect. I don't know you, it's far too early for you to take the piss. Try and get into a conversation first, at the very least.
> Correct me if I'm wrong, you seem to be saying that free speech can't exist without democracy.
No, parent is saying freedom of speech is a consequence of democracy, not that one wouldn't exist without the other or that they are synonymous.
One can, as you have, construct and imagine all kinds of political systems where one exists without the other (e.g. Greece 400 BC, benevolent dictator etc). This leads to rather contrived arguments that miss the point at hand: Freedom of speech has almost always been a consequence of modern democracy.
Your flippant response in the last paragraph makes me believe you aren't convinced in your argumentation either.
Do you have examples of democracies before the printing press besides Athens and arguably the late Roman Republic? Those happened nearly two millennia before the press and didn't last that long. "Thriving"?
Europe had them in the middle ages too: Italy's seafaring republics (Venice, Genoa, Amalfi, Pisa et al.) were (nominal) democracies, practically owned the Mediterranean sea, and lasted a thousand years.
The vast majority of Slavic tribes from 0-1100 AD were effectively democracies... and even Poland post-Christianization (1000-1800 AD) featured an elective monarchy, with the majority of its kings chosen by vote.
There isn't really. You're adopting, I assume, J.S. Mill's view, that the cure for bad speech is more speech, which he famously published in 1859.
However, since then it's been widely accepted that when speech reaches a certain level of harm then the greater good is to prevent/punish it. You can't incite violence under the guise of free speech. You can't advertise that something is safe when it's not. This is because more speech can't undo violence and death after it occurs.
And when it comes to misinformation with regards to provable and intentional lies about voting procedures, election results, etc. that falsely harm the country's institutions and legitimacy, it's entirely consistent for that to fall under the widely-accepted prohibition of speech that rises to a certain threshold of harm. It directly leads to mobs, riots, and revolution based on lies, not based on actual injustices.
This doesn't mean any harmful speech is prohibited -- that's ridiculous. You're generally allowed to insult people, tell lies, etc. But there's a threshold of harm that gets established.
Annoying that you are getting downvoted for what seems to be a very reasonable comment.
I have very few friends "in tech", and this is the view that basically all of them hold, and the view that most of my family holds. Those 100-150 people span the full (European) political spectrum and many different backgrounds and life experiences, from growing up extremely wealthy, to finding wealth through hard work (and luck) and success, to borderline surviving. These are not "SV tech circles".
Basically it's a viewpoint that is able to accept nuance and grey. People who deal in absolutes dominate headlines, so it's all we hear; in reality the majority of people live in the middle.
I don’t know about you, but any social group I’ve been part of has had its boundaries beyond which some speech and behavior becomes unacceptable and has consequences. What kind of social groups are you part of where all speech, including lies, insults, smearing, is accepted?
The United States is basically unique in the strength of its speech protections, and even those bind only the government. I'd challenge you to name a social group, any social group, that wouldn't ostracize you for saying certain things.
Freedom of speech is not violated by ostracism, because freedom of speech (as a right) is the right to say what one wants, to whom one wants, at a time of one's choosing. It implies the listener's corresponding freedom: to listen to whom one wants (or not), at a time of one's choosing, and hence to what one wants.
If someone is ostracised that is an example of the listener exercising their free speech rights, not a violation of them.
Some use this as an example of why companies like Twitter and Facebook may remove someone from their platform, but I would argue, firstly, that they are monopolies exercising monopoly power, hence violating one's right to free speech; and secondly, that they are no longer platforms because of their interventionism in the speech of others (and currently have too much protection under law to do this). The monopoly power is the most important thing to challenge, in my view.
To the contrary, many Silicon Valley elites trend far more libertarian than the average population, so it's actually the opposite (depending on which elites you're talking about, they're not monolithic).
And it absolutely is widely accepted if you look at democratically produced law across the world. Restrictions on harmful speech exist literally everywhere in democracies. It's what people vote for. It's absolutely widely accepted.
> And it absolutely is widely accepted if you look at democratically produced law across the world.
Widely believed would be a better phrasing, as you may compare anywhere with low to no support for free speech and they will be more oppressive and violent than anywhere in say Europe or North America that has greater support for free speech, the correlation will be huge.
Then you could compare somewhere like the US with somewhere like France or Germany, ask which has a greater amount of violence, and see what could be attributed to speech. I doubt you'll be able to produce strong enough evidence for your position to claim it as widely accepted rather than widely believed, or merely advocated for by parties that benefit from less freedom of speech.
> Widely believed would be a better phrasing, as you may compare anywhere with low to no support for free speech and they will be more oppressive and violent than anywhere in say Europe or North America that has greater support for free speech, the correlation will be huge.
Europe is one place with more limitations on "free" speech, and pretty much no parties or political grass-roots movements here are campaigning for any radical changes towards American-style legislation. (Right-wing populist parties and movements in several countries are complaining about hate-speech legislation going too far -- and may even have a point -- but AIUI not even that means they're against the main idea that it's right to ban harmful speech; they just disagree on the definition and limits of what's "harmful" with regards to the kind of speech they want to indulge in.)
> Then you could compare somewhere like the US with somewhere like France or Germany and ask if there is a greater amount of violence in either and see what could be attributed to speech.
Crazy American lies freely spread under the guise of "Free speech!" gave you January 6.
It's clear that a handful of genocides were caused in large part by hate speech, such as the Rwandan genocide and the Holocaust.
What's not clear to me (although I'm open either way) is whether strict hate speech laws would've reduced the odds of these happening. Do we have reason to think that to be true?
The first order effect is to chill that kind of speech. But is there a second order effect of making these people into martyrs and fostering resentment towards the protected group that does more harm than good?
My understanding is that pre-Nazi Germany had hate speech laws, and it didn't seem to work there?
Giving anyone the ability to arbitrate what is "good" speech vs "bad" speech is way too much power. In any era of history there have always been "truths" that were massively popular and eventually overturned. I don't think we are the first era to be an exception. So when you're talking about punishing "bad" speech you are talking about creating super powerful entities just because they agree with you. That intent scares me far more than whatever nonsense you get from q anon or antivaxxers.
> Giving anyone the ability to arbitrate what is "good" speech vs "bad" speech is way too much power.
But that isn't at all what is happening here. Google has decided that they don't want to enable people to distribute certain data using their platform. They're not being crowned the omnipotent oracle of good and bad.
> you are talking about creating super powerful entities just because they agree with you.
This position is bizarre to me -- what do you think an elected government is? I vote to create "super powerful entities that agree with me" every 4 years. Those entities possess the power to destroy all life on earth. Google is not anywhere near as powerful as those entities, and while it is not (directly) democratically accountable, it does derive its power from its users.
No information will be permanently erased just because Google does not spend money and time making it available on Drive.
With the amount of power Google has, they can crown the winner. We can't pretend that it's "just Google's opinion" when they control access to humanity's collective knowledge.
My argument is very humble. Everyone gets to talk, whoever is most convincing gets listened to most. I don't need a paternalistic company protecting me from bad thoughts.
> This position is bizarre to me -- what do you think an elected government is?
A system with an intentional freedom to allow dissent?
> My argument is very humble. Everyone gets to talk, whoever is most convincing gets listened to most. I don't need a paternalistic company protecting me from bad thoughts.
You're talking now, and this isn't (AFAIK) on Google Drive.
If you want Drive to be a democratically guaranteed national resource, nationalize Google.
They are not protecting you from bad thoughts. They are protecting themselves from lawsuits.
Moderating all this content is not free. If Google were not at risk of losing money by not doing this, they would have done nothing.
It's easy to think that the most powerful entity is the one with the biggest stick. Consider this though: imagine a VIP and their bodyguard. The bodyguard is much stronger and carries a gun, but he is not the one with power. The VIP can replace the bodyguard at will.
An entity that gets to tell millions exactly how to vote and which nuke-wielding bodyguard to hire is surely more powerful than said bodyguard.
The government has an explicit limitation on exercising that power. The First Amendment is, well, first; it's not exactly forgettable. The primary law of our land is "government can't fuck around with free speech". I see no ambiguity here. There's a huge difference between shouting fire in a theater and asking whether the vaccines are safe. We can have our disagreements, but it's in nobody's interest to outlaw them.
Well, no, the first article of amendment (usually abbreviated “first amendment”) is the eighth article of the Constitution, following the seven original articles. “Amendment” is revision/change.
Well thank you for your lawyering, but what's your point? The people that invented this country thought it was very important that people can speak freely (including you!), and you're like "nah"?
That’s not a reasonable position. In the limit that implies that you treat everyone as the enemy. Isolate in your own protected world, because you might get duped or harmed any minute now. That to me is a shitty world I don’t want to live in. I’d rather reasonably trust people and have a common baseline and shared narrative.
>You cannot have true democracy if people cannot disagree with their governments; they must be able to disagree with any truth or opinion such a government might consider self-evident, just on the off chance they're right.
Google is not the government. It's a private company owned by private citizens who also have the same constitutional rights. You're not being 'censored'. It's not a violation of your free speech. You're free to petition the government and you're free to just host your files someplace other than Google Drive. Access to Google Drive is not a 'right'. Quit trying to conflate the two. It's a disingenuous argument meant to confuse the issue and push a personal narrative.
> The scale at which Google operates today, anything short of defining some of its popular products as public utilities would be disingenuous.
Look at that from the perspective of a startup founder. You've made a thing. Spent decades working on it. It became popular because many people found it useful. So useful in fact, that they've decided that it would be nice if you didn't own it anymore, and they owned it instead.
Laws should be the same for everyone. If you're not okay with other people deciding that they own your stuff, don't tell others what to do with the things they've created.
By this argument China's censorship of online messaging platforms doesn't exist either, because Weixin and Weibo are operated by private companies, so what they're doing when they block messages with undesirable content isn't censorship.
I'm not even close to convinced by your response. Relying on the public-private divide as the sole basis for your retort is weak. You also assert that the person is pushing a personal narrative, but I suggest you're doing the same.
There's an argument that private corporations that are involved in dissemination of information (search engines and social media) should respect principles of freedom of speech as a democratic principle, regardless of constitutional mandate.
Suppose the government outsources welfare eligibility decision-making to a private company. Does this mean traditional notions of fairness we would expect from such a decision-maker do not apply, because they are a private company?
The public/private divide is a well-known conundrum. And the analogy I gave is a practical example of that, one that has actually faced several nations. I note you've offered no basis for it being 'dishonest', either, which is unfair. Be careful before making statements like that.
The point is, you look at the substance of what is being done or controlled, not the status of the actor as a private or public entity. That is what the analogy is used to explain.
The substance of what is being done, here, is regulation of communication between individuals over a communication platform. Downplaying it as 'not hosting public content' is inaccurate or at least of no moment. What's public content mean anyway?
If a significant amount of private communication goes through privately owned channels, it is reasonable that the private companies operating those channels respect democratic norms. It's unreasonable to dismiss any criticism as 'they are a private company', as that's beside the point.
I’m definitely on the side of the argument that says power is power, and private companies can do just as much harm as governments can. However, there’s a difference between gov’t censorship and censorship by private companies like we’re talking about here.
It’s the difference between not allowing the government to say what you can publish and requiring a company to publish whatever you want.
So then, I should be able to paint your house and car with whatever messages "I" want and you should not be allowed to erase them. Is that the argument you're making?
If "my" house is actually a meeting place for half the country, or even just half the city, and I've decided almost anyone can paint on the walls, I should not be able to say that one specific person cannot paint on the walls because, for instance, I don't like what party they vote for. Any restriction on wall-painting in that space must be independent of the content of the message. If I wanted to limit that message, I should have to go through the proper democratic channels, i.e. pass a city ordinance.
This is the same logic that prevents me from saying that in this meeting hall, gay people are not allowed entry.
I disagree wholeheartedly. If your front yard happens to become a popular meeting space for the town, it doesn't change your rights to your yard. You can still ask anyone to leave for any reason. Google literally built all their infrastructure from the ground up. It's theirs in the most direct sense of the word, and we should take an attitude of humble gratitude for their ongoing contributions to our wellbeing, rather than continue our attempts to punish them for their success.
You've conflated a few things here. Are we banning people or messages? You've said that bans need to be content-neutral, but also that they can't be based on the individual doing the painting. This would mean that, for example, porn would need to be acceptable. Or advertisements, but it seems reasonable for the owner to ban both of those things in pursuit of their desired aesthetic.
The actual case seems to be that anyone can paint anything except certain banned things. But any person can still paint other unbanned things.
That's different than banning particular people. In other words, if the republicans are allowed to discuss everything except vaccine conspiracies, you aren't discriminating against republicans, so this analogy about banning individuals doesn't work. And of course you might ban a particular individual from the premises for repeatedly breaking those rules.
All of this seems perfectly reasonable, and indeed I know real-world spaces that operate more or less like this.
> This would mean that, for example, porn would need to be acceptable. Or advertisements, but it seems reasonable for the owner to ban both of those things in pursuit of their desired aesthetic.
Yes, I stand by this. If it's legal to have porn on your house wall, it should be legal for people to paint porn on your communal wall.
Again, the solution to this should be a city ordinance. The problem to me is not restrictions but accountability.
> The actual case seems to be that anyone can paint anything except certain banned things. But any person can still paint other unbanned things.
This seems akin to saying that the theocracy does not discriminate against gay men, because they can marry women just like hetero men can.
> Yes, I stand by this. If it's legal to have porn on your house wall, it should be legal for people to paint porn on your communal wall
Why? Why is it that if I allow people to paint things, I lose the right to moderate those things? Like it's still my property, right? What causes me to forfeit my property rights?
Or to ask a perhaps different question, could I close the venue entirely?
What if I later reopened it with a list of allowed people and you could only enter if you were on the list? Do I still forfeit those rights? How big does the list have to be for it to be suitably public again?
> This seems akin to saying that the theocracy does not discriminate against gay men, because they can marry women just like hetero men can.
You're going to have to explain this better, because in practice banning gay men from marrying men prevents them from getting married at all. Preventing anyone from painting porn doesn't prevent an artist from painting not-porn. I might be more willing to agree if, for example, it was the government blanket-banning porn. But we're not talking about that; we're talking about one dude with one popular artist's venue banning pornographic art from being painted. It's no different than if I disallowed the sale of pornography in my art gallery.
Keep in mind, today, in the United States no priest is compelled to officiate a same-sex wedding. The state recognizes them, but you or I don't have to.
> Why? Why is it that if I allow people to paint things, I lose the right to moderate those things? Like it's still my property, right? What causes me to forfeit my property rights?
Good question! In my view, the deciding factor is "universality". I think there is a fundamental difference in nature between a friend group and a customer base. When you offer a service to your friends, you may pick and choose how you like on any basis. When you offer a service to the general public, you are in a sense attempting to provide a "plug-in" service to society as a whole, and so the terms of that service should be negotiated with society as a whole, including such things as civil rights. This is exactly where you cross the boundary between being "a private citizen" and "part of the state".
> Or to ask a perhaps different question, could I close the venue entirely?
Yes. Nobody can be compelled to offer a service.
> What if I later reopened it with a list of allowed people and you could only enter if you were on the list? Do I still forfeit those rights? How big does the list have to be for it to be suitably public again?
I think this is a sliding scale. The specific cutoff would always be kind of arbitrary.
> > This seems akin to saying that the theocracy does not discriminate against gay men, because they can marry women just like hetero men can.
> You're going to have to explain this better. Because in practice banning gay men from marrying men prevents them from getting married at all.
No it does not; it merely prevents them from getting married in the way that they like, which is a different way than the societal norm. The right to hetero marriage, as practiced in theocratic societies, inherently normalizes hetero relations and excludes gay relations. However, there is nothing inherently wrong - in the erroneous sense, not the moral sense! - about such a choice. This demonstrates that the constraints you apply to a service, even if they only pertain to the nature of the service and not the persons the service is extended to, can still be discriminatory.
> Keep in mind, today, in the united states no priest is compelled to officiate a same sex wedding.
Likewise, inasmuch as weddings have societal relevance, I think they should be compelled to - or else not officiate any weddings at all.
> When you offer a service to the general public, you are in a sense attempting to provide a "plug-in" service to society as a whole, and so the terms of that service should be negotiated with society as a whole, including such things as civil rights.
Does this apply to all businesses that offer services? Keep in mind here that the first amendment, in addition to protecting our right to speech, protects our right to association. That is, our right to associate with the people, and only the people, we want to is a civil right that our constitution protects just as much as speech.
If I open a store and let people purchase things, I'm offering a service to the general public. But I'm certainly not "part of the state". One of the primary concerns about the state is that it (usually) has a monopoly on the things that it does, so that if it provides a service, it's the only provider of that service.
But "speech" isn't a service that one can monopolize. Preventing speech can be done via force, but "facilitating speech" isn't monopolizable. If someone won't let you do it, you can do it yourself or find it somewhere else.
> Yes. Nobody can be compelled to offer a service.
But you are compelling me to offer a service! I want to offer the service to paint anything except X. And you say no no! You are additionally compelled to offer the service to paint X. This, by the way, gets far more complicated if, for example, my service is...baking cakes. If I offer a universal cake-baking service, when can I refuse to bake a cake? Can I refuse all wedding cakes? Can I refuse all cakes above a certain size? Can I refuse all cakes in red? Can I only bake chocolate cakes? Can I refuse to bake cakes for people who have previously given me bad reviews?
> No it does not; it merely prevents them from getting married in the way that they like, which is a different way than the societal norm.
So let's make this concrete. Let's say I ban painting my name. I don't want people to paint it on my wall. People can paint anything else, but not my name.
With the marriage example, we generally assume that people are attracted to a particular gender, and aren't really able to change that. Are you suggesting that, similarly, there are people who cannot find happiness without painting my name on my wall?
I mean if that's the case, why is it moral for me to ban them as long as I ban everyone else too? These particular people can't be happy either way.
With marriage, the issue is that you're essentially preventing some group from being able to openly mutually associate in the way that they want to. We can quibble on exactly how much of a freedom to associate or a human right that is, but it sure sounds like a lot more of one than your ability to write my name on my wall.
There's another argument by the way, which is that marriage is a service provided to two individuals, and that providing only heterosexual marriages discriminates based on attributes of those individuals, in exactly the same way as only marrying white people would be discriminatory. This same argument doesn't work for the example of banning speech.
Yes, I don't believe in an unrestricted right of business association.
(Neither does the US, when it comes to discrimination on protected categories.)
> If I open a store and let people purchase things, I'm offering a service to the general public. But I'm certainly not "part of the state". One of the primary concerns about the state is that it (usually) has a monopoly on the things that it does, so that if it provides a service, it's the only provider of that service.
> But "speech" isn't a service that one can monopolize. Preventing speech can be done via force, but "facilitating speech" isn't monopolizable.
Sure it is, by controlling the platform. In any case, I have a much more expansive view of monopoly as a spectrum. Network effects, for instance, can also contribute to a monopolizing service. In any case, I believe the primary reason why monopoly is a moral risk is because a monopoly prevents you from switching providers to escape a restrictive corporate environment. My approach is instead to outlaw restrictive corporate environments.
> > Yes. Nobody can be compelled to offer a service.
> But you are compelling me to offer a service!
No, you always have the choice to not offer the service at all. I am not compelling you to offer any specific service, I am preventing you from offering a service with certain restrictive parameters.
> And you say no no! You are additionally compelled to offer the service to paint X.
No, you are compelled to offer the service to paint X, contingent on your decision to offer the service at all. You always have the option to cease offering the service entirely. And you could, I guess, close your company whenever someone requests a service you don't like. However, considering fees, that may be impractical.
> I mean if that's the case, why is it moral for me to ban them as long as I ban everyone else too? These particular people can't be happy either way.
I don't have an opinion on the morality of the matter. Or rather, I don't think my morality should affect the decision. That's why I have focused this conversation specifically on the mechanism by which the morality is arbitrated, which should be the same mechanism by which state decisions are arbitrated, ie. civil rights, representative democracy etc, inasmuch as the service is of the class of "service offering to the general public" shared with some state services.
The idea that companies shouldn't be given the right to business association because their civil rights are less important than the civil rights of others is a moral one.
Civil rights are always in conflict, and which ones you prioritize and how is a moral decision. You can't abdicate that responsibility.
Put another way, why does it violate civil rights to offer a service conditionally, but not to refuse to offer the service at all?
Or in the reverse, why is the government able to regulate my offering of a service conditionally, even though you seem to believe that them compelling me to offer the service in general is a violation of my rights?
Or yet another way: why do you believe that the right of association is less important than the right of speech?
Those are all ultimately moral or ethical questions.
Yes, sorry, I agree. These are all moral questions. My position can be generally summed up as "the less individual an organization is, the less weight its rights have." This is because I consider the individual as the ultimate purpose of society.
That is, the more individuals your organization serves, the more it becomes a "thing whose arbitration between individuals is of societal import". I believe that issues of societal import should be decided by democratic means, whereas issues of individual import are decided by personal choice. Between the two is a sliding scale.
I’ve come to the conclusion that there is probably no universally compelling answer to that. I think you either hold individual liberty sacred as a sort of deontological pillar, or you try to get clever about minmaxing this nebulous “society” thing I’m always hearing about, moving toward a shared utopia one “we should ban…” proposal at a time.
> why would it be bad if a government censors speech?
It's a step in the wrong direction. The next step is to forbid asking that question. Step 3 is war to get our rights back. 2 and 3 may take time. Say 6 years for 2 and 1000 years for 3.
1000 years of blaming the witches, the socialists, the unionists, the Jew, the Muslim, the Chinese, the Russians and of course you!
If you were defending the government's right to censor speech you were useful in installing it. When the censorship is accepted by the masses you become someone known to publicly share an opinion about a forbidden topic. Presenting the question for others to answer the way you just did is much worse than having your own opinion.
If we allow people like that, before we know it, we have people questioning everything!
It does: governments can pass laws; big businesses can't. They may buy politicians to pass the laws for them, but in the end they can't pass the laws themselves.
You really should be. Quite a few of the larger companies have been caught conspiring to manipulate wages and workplace mobility for their own gain, in ways that not only hurt every work-seeking person in the affected area, but were also flat-out illegal.
Then let's not ignore the fact that most countries had to make laws specifically to protect working parents. There are most likely entire warehouses filled with court cases over a business owner realizing that, statistically, hiring an unmarried replacement for the recently married employee will be better.
They won't do it out of racism, they won't do it out of religious fervor. However, they will do whatever they can to make more money.
You should read more science fiction. From the 1960s to 80s-90s, maybe not so much nowadays. Probably because it isn't all that "far out there" any more.
Your comment was correctly downvoted and flagged because of the name-calling. You shouldn't do that, not only because it breaks the site guidelines (https://news.ycombinator.com/newsguidelines.html), but because it discredits the view you're arguing for.
What would you do to combat the deliberate misinformation campaign that is weakening the US and many other countries? While propaganda and polarization are not new, the speed, reach, and aggregation are only possible with modern communication.
An early tidbit that may have been lost with deplatforming was that Euro leaders also disagreed, citing that only the state could be trusted with that power. However, the US Constitution mostly prohibits that route and it's left to private companies to make up their own minds about what content they want to host.
It's a hard problem for democracy, the best countermeasures I know of are transparency and education, but those are mitigations at best, you can't really do much if a majority of people believe an untruth.
You could also elect me as your benevolent dictator, I'll be happy to bring the misinformation to an end, but the lie I'd tackle first would be that this has anything to do with democracy.
> you can't really do much if a majority of people believe an untruth
Treat the propaganda campaign like the war it is - and arrest those who are serving as foreign actors for treason.
The earlier people are finally willing to admit Russia and China are waging a war against the whole Western world, the better.
The big multipliers - #45, Alex Jones, Fox News, Newsmax, OANN - are publicly known, as are most of the people spreading the stuff on FB, and can be taken down. No need to resort to a surveillance state; just public information is enough.
Their motivation is self preservation. Western democracies make it pretty clear that they do not support the existence of brutal dictatorships. Thus brutal dictatorships are motivated to act in ways that can weaken the western democracies.
For China? World domination for Han Chinese, nothing less. They are taking over entire countries by "debt relief" aka loan sharking, and leaders who don't want their "investments" simply have their assets taken away by force like the Philippines.
Both can only follow through with their plans if there is no united opposition against them. Europe is falling apart on a national level with nationalist, often Russia-backed anti-democrats spreading discontent and hate, and the US is collapsing in ethnic and class war.
But if you're going down that route then why should other western governments not treat the US as a threat too? There's extensive spying going on by the US against them and there's plenty of American propaganda too. Should Europe ban Hollywood movies? Arrest American businessmen for spying? Where does it end?
I'd abolish the worst purveyors of misinformation: Namely, all of the people who are trying to censor "misinformation" so that they cannot have their propaganda challenged by an empowered public that no longer needs gatekeepers.
> What would you do to combat the deliberate misinformation campaign that is weakening the US and many other countries?
Which one? Plenty of candidates depending on which part of the political space your views lie in.
Is it the misinformation campaign that Trump won the last election? The one that says third world immigration is good? The one that says Democracy is good? The one that says COVID is harmless?
If there were some Oracle that could tell us what is true or false then this wouldn't be an issue in the first place.
If I can't explore new ideas, evaluate information and make up my own mind, no matter how much bullshit exists, I'm not living in a democratic free society anymore. I'm living in a technological dystopia, and authoritarian state.
I'm afraid skepticism is part of a healthy democracy...no matter how much the government wants me to get a jab.
When did Google feel the need to be the world police?
That’s a better question and moves the discussion forward.
Yes - and No.
To fairly answer your question, there’s different levels of sophistication to misinformation.
It ranges from vaccines will grow you another arm, to the Gates foundation is using this to impoverish nations. (I’m loosely using current examples)
So yes - experts can and will be able to avoid many of the misinformation attempts out there. The ones which they fall victim to - in their own area of expertise, are likely unavoidable.
For Google to remove healthcare misinformation, they will be targeting the obvious content that has been flagged and likely supported by government orgs.
I feel that many people are arguing about the case where the gatekeepers are wrong - which was an issue in a world where traffic through the gates was more orderly and the gatekeepers had historically been mistaken in stopping traffic.
Right now, even if I don’t like gatekeepers, I have no choice.
Someone has figured out how to mass-produce obviously and subtly malicious traffic.
The issue at this juncture in time is the lack of effective filtering.
Since we don’t have a better solution, we get forced to use gatekeeping.
This seems like a “throw the baby out with the bath water” situation in both directions: Censorship, like you note, can be used for evil to suppress anything you disagree with and drive your narrative. But unbridled amplified speech opens the possibility for destructive mass influence of minds, to be evil and destroy the good things those minds agree on.
So to me this is at least nuanced. Insisting on completely unchecked speech, however unscientific and destructive, is not a reasonable position.
But there's a difference between 'disagree' and 'state objectively false things'. The US media especially is poisoned with stuff like that - see all the lies around covid and mask wearing.
Do you feel the same way about free speech zones? Don't like what someone says, just force them to move out of the way so they end up protesting to an empty audience. Naturally, if it's a big enough protest you may have to use violence (tear gas, riot police, etc.) to do this, but hey, they still get their right to free speech, so I guess it's all good and democratic.
There's something fundamentally flawed about trusting people to determine truths themselves as well. Doubly so when those spreading misinformation are well funded propagandists that have been attacking education for years. China, Russia, Germany, Italy, Cuba... these and other nations became communist/authoritarian with the ardent support of the people of those nations (enough people anyway). Tens of millions died as a result. People die today as a result. People are oppressed as a result.
> Companies aren't allowed to advertise rat poison as medicine and neither are you.
You may want a different example :).
> Warfarin first came into large-scale commercial use in 1948 as a rat poison. Warfarin was formally approved for human use by the US FDA to treat blood clots in 1954.
From the wording in the link[1], it seems even if you share the document with just one single person, and that person flags it, Google is then allowed to investigate. So the pre-condition is not sharing publicly, just sharing.
> So no, this isn't applying to your private diary or privately shared documents.
Well this seems to be inaccurate based on the text cited below[1], do you have sources that back your claim? There is nothing saying that privately shared documents can't be reviewed. The only necessary condition seems to be just that someone flagged your content, which could be the one person you shared that content with.
[1] "After we are notified of a potential policy violation, we may review the content and take action, including restricting access to the content, removing the content, and limiting or terminating a user’s access to Google products".
> All our legal protections ultimately depend on a democratic foundation -- undo that with misinformation and you don't have anything anymore. Similarly, your rights don't matter much if you're dead.
This is an insane conclusion to draw, because it blithely ignores that it positions Google as an oracle of what's threatening to democracy and what isn't.
The examples in the current policy are fairly narrow, but this is a categorical line being crossed (for better or worse). Those who are concerned by the increasing encroachment of effective utilities on what can be communicated need to speak up when clear lines are crossed, because it's the only way to avoid frog-boiling.
It's funny to see people freak out about this when the entire foundation of Google is built on the proposition that it can make effective judgments about the relative value of various content out there on the web.
Like, if Google has a hidden agenda here that makes it any more fundamentally compromised as a judge of disinformation than anything else, then whether or not it will be a free CDN for arbitrary content isn't remotely our biggest problem.
And if they're just that bad at sorting information from disinformation in spite of their considerable resources and ostensible value proposition, same thing goes, although that's an opportunity for someone else to the extent that there's a market for understanding reality.
So, yeah. Google by its nature is going to play a role regarding what's a threat to accurate understanding and democracy. It's not alone in this; journalism does it. The academy does it. The courts do it. Businesses do it.
The way you get a well-functioning society where robust discourse turns into better perception and refined ideas isn't that everybody takes a hands off approach, it's that everybody -- all institutions and individuals -- take responsibility.
Not to mention that requiring Google (or anyone) to carry and disseminate information that they consider irresponsible... well, compelled speech isn't exactly freedom of speech.
> Like, if Google has a hidden agenda here that makes it any more fundamentally compromised as a judge of disinformation than anything else, then whether or not it will be a free CDN for arbitrary content isn't remotely our biggest problem.
Indeed, the bias in Google Search is actually a very serious problem.
> It's funny to see people freak out about this when entire foundation of Google is built on the proposition that it can make effective judgments about the relative value of various content out there on the web.
You're thinking of Google Search, their web search product. The thread is talking about Google Drive, their file-storage and -sharing product. If you don't get the difference between these two products and the expectations around them, I don't know what to tell you.
The relevant differences between those two products are:
(a) Google is actually less uniquely powerful or threatening when it comes to the hosting segment
(b) Google has an even greater claim to moral and legal rights to say "yeah, we're gonna decide not to carry/disseminate certain content," except to the extent that your position here involves the idea that compelled speech is the way forward.
The idea that _anyone_ can reliably enough decree what's misinformation is ludicrous. I acknowledge that there are vast masses of incredibly stupid people that need to pretend that Truth is handed down on clay tablets by God in order to function. We're much better off with the scientific establishment wearing this mantle than, say, religious institutions. And it's pointless to try to convince these people that science is an iterative, incremental process that's based in skepticism, not certainty.
But the minority of society that understands and participates in the process of truth-formation (including scientists!) produces a widely disproportionate amount of epistemic value, and society depends on this process for basic functioning.
It's amazing to me that this isn't clear to everyone after the pandemic, of all things. The amount of claims that were banned from social media as "misinformation" that became expert consensus a couple of months later is mind-boggling. Following smart and quantitative people on Twitter was wayyyy more likely to provide you a healthy and safe pandemic experience than following the incoherent and self-contradictory public health recommendations (let alone policy). More important than this "direct-to-consumer" ability to discuss the pandemic is that experts themselves form their opinions through this type of discussion. The notion that there's a "someone else" who has reliably figured out which dissent is out of bounds is laughable.
I'll note again that Google's current policy is limited to fairly simple things, but it's an important Schelling fence being torn down and worthy of commenting on (and pushing back against, if you believe the trend is harmful).
A few years back people were getting messages from Google if their docs contained hate speech.
Google's response was that it was a corporate feature that was accidentally turned on for all accounts.
So they certainly have infrastructure to do deep content scans of all users docs. Realistically they probably do still scan all accounts for internal metrics, it's just notification that has been disabled.
I can't find anything with searching that Google ever algorithmically identified hate speech in Docs & Drive, there's no such corporate feature that can be enabled in G Suite control panel, reliably identifying hate speech is a hard problem that there's no indication of Google having solved, and honestly the entire thing sounds like an urban legend.
But if you have a reputable source I'd love to know.
> A Google spokesperson reached out via email with the following statement saying that the bug has been fixed: "This morning, we made a code push that incorrectly flagged a small percentage of Google Docs as abusive, which caused those documents to be automatically blocked. A fix is in place and all users should have full access to their docs. Protecting users from viruses, malware, and other abusive content is central to user safety. We apologize for the disruption and will put processes in place to prevent this from happening again."
I work at Google. I’m not involved with this change specifically, but suffice it to say there is far more advanced processing than simple hash checks for CSAM being run on every file uploaded to Drive. I would be surprised if all files weren’t subject to these checks, but whether or not Google will take action on them is another matter.
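For readers unfamiliar with the "simple hash checks" that comment refers to, here is a minimal, purely illustrative sketch of the baseline technique: exact hashing of an uploaded file against a blocklist of known-bad hashes. The hash set, function names, and the use of plain SHA-256 are all assumptions for illustration; production systems use far more robust methods (e.g. perceptual hashing, which matches near-duplicates rather than exact bytes).

```python
import hashlib

# Illustrative blocklist of known-bad file hashes. These are placeholders,
# not real signatures; the one entry below happens to be the SHA-256 of
# the empty byte string, so an empty upload will match in the demo.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def flag_upload(data: bytes) -> bool:
    """Return True if the uploaded bytes exactly match a blocklisted hash."""
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_BAD_SHA256

print(flag_upload(b""))       # matches the placeholder entry above
print(flag_upload(b"hello"))  # does not match
```

Note the obvious weakness of exact hashing: changing a single byte of the file yields a completely different digest, which is why real scanning pipelines go well beyond this.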
> undo that with misinformation and you don't have anything anymore.
That presumes that we have come from a period that was somehow free of misinformation. This is obviously false, and all we're doing is trading one corrupt system of control for another.
Democracy also demands that the burden of proof is on the accuser. Don't you feel this same standard should apply to those who, of their own volition, take on the task of fighting this "misinformation"? Shouldn't those deprived have recourse?
> Companies aren't allowed to advertise rat poison as medicine and neither are you.
Advertising is always a commercial activity. If I'm merely sharing my opinion that rat poison, in some dose, might possibly serve as a cure for some particular ailment, how am I advertising? Isn't there a responsibility of the other end user to not accept medical advice from anonymous information published from a free document sharing service?
I'm not sure the trade offs you suggest are gaining us anything important.
You are assuming a professional is reading whatever the content is you are distributing and will make a rational, fair decision.
No.
It's going to be a minimum wage indentured Google servant that doesn't quite understand what they are reading but they have 17.5 seconds per case to make a decision. They will shoot first and ask questions later. What if the document is satire but they couldn't understand it? Oh well there goes one strike against your account, or maybe that's your third strike and now ALL your Google accounts are banned.
We already know what the appeals process is like. Unless you get it publicized on Hacker News et al, you won't get any chance to appeal.
Actually, this is not even about distribution exactly, but about the "Report Abuse" button: what this page lists are categories of things that, if someone with access to the file clicks on "Report Abuse", whoever is acting on those flags may decide is a violation. Note that the page says:
> After we are notified of a potential policy violation, we may review the content and take action…
So (1) it's not about Google proactively scanning all your files (even public ones: though I guess with sufficiently public files, sooner or later someone will click on "Report Abuse", perhaps even by accident), and (2) I imagine it could happen with files you shared with just your friend, if your "friend" decides to "Report Abuse".
(Disclaimer: I work at Google but not on Google Drive or anything related to these policies.)
“Democracy” and “health” are high level man-made abstractions, so they can hardly be placed at the root of anything, that's not just because of “free speech”. See how you didn't even try to use, say, “personal rights” and “personal life”; the panegyric for those who “know what's better for you” needs vague impersonal “democracy” and “health”.
Google has been doing anything it wants with people's data, news like these just mean they are lazily formalizing their power.
Who at Google do you trust to decide what information you and I are allowed to know? What is this person's qualifications?
How can they be held to account when they inevitably get it wrong?
Where will the highly-transparent write-ups detailing moderation decisions be published?
Seems like if Google actually gave a damn about the morality of censorship as some sort of 'necessary evil' you'd be able to answer these questions easily^. Until then, it's a non-starter in my book.
This is the key problem with all censorship, however well-intentioned. A person is needed to censor. People make mistakes. Sometimes by accident, sometimes on purpose. There is a strong dis-incentive to having any transparency or accountability. If there were, you might be held liable for your mistakes and nobody wants that.
To add: Who at Google listens to dissent!? No one is allowed to say anything - there is sort of a chilling effect internally at Google.
It’s debateless policies that are spreading on the world stage. People need to rise up against a small group of individuals located in Menlo Park, CA who are demonstrably and utterly out of touch with the rest of the world, but deciding how and what information flows. These people have no idea how agriculture works or how people live in Indonesia or what conflicts are going on in Namibia.
I don't trust Google to fully filter information for its credibility ... so I don't automatically trust things shared on Google Drive.
But I don't have to trust that Google won't suppress valid positions on Drive, since there are many alternatives for sharing information beyond Google Drive, which isn't primarily meant to host public content in any case.
First, since you've been derailing things from one question to another, I have to mention that a thing shared publicly from a Google Drive account is no more accessible than a thing shared from a website that a person sets up themselves, so Google Drive accessibility is not a particular answer to social network news filtering.
But on the topic of social network news filtering, anyone who uses a social network is implicitly consenting to that network's filtering of information.
Once upon a time, most people got their news from a single newspaper - well-informed people might read several papers as well as newsmagazines, but even this implied a lot of filtering. Those newspapers filtered the news more heavily than any present network.
>anyone who uses a social network is implicitly consenting to that network's filtering of information.
In what way does this make censorship the morally right thing to do? Think of all the evil large corporations have tried to justify with statements like that^ over the years.
"Our billions of users should have known we were gonna pull the wool over their eyes!"
I think the point here is that these are private businesses, and their platforms are private property. The liberty to choose what you do with your property overrides any responsibility to do the "morally right" thing, whatever that actually is.
I don’t want a cloud storage provider with only private storage. If I have a library of book files I want to share it with my spouse, and if Google is trying to filter out misinformation and not let me distribute it to my spouse, that’s bad.
I think we’ll have augmented intelligence through computing soon and imagine how horrific it will be if Google says “you can think misinfo, we just won’t let you think it?”
That’s bad. Storing, creating, and distributing don’t need limits like this.
Asimov’s three laws were possible and they still had issues. Imagine having a law for robots that they couldn’t speak what Google thought is misinfo.
Your message implies that you trust that Google will use this measure in a clean way. They won't; they have a history of using algorithms to detect TOS abuses in a very gung-ho way, without any sort of functional appeal process.
I take rat poison as medicine--warfarin, just in much smaller doses than the rats get.
For that matter, the most toxic substance known to man, botulism toxin, is being promoted for various medical and cosmetic uses.
>> it's not about private files, it's about distributing content
The distinction between private files and distributed content is blurred to nonexistent, and Google Drive is not a neutral player in that process. The whole premise is that these are just sharing settings.
The premise of this service is that controlling your own files is passé, because content distribution, creation, and consumption need to work seamlessly.
Also, this isn't outside of the greater Alphabet complex. Even in social media, where they are a secondary player, they own Youtube... also premised on the sharing and distributing model.
I don't even use google, except I have a youtube login, and many friends I correspond with use gmail. In the present surveillance regime we can experience, through the medium of digital communications, the myth of the Evil Eye and the fear of being watched with bad intention.
> All our legal protections ultimately depend on a democratic foundation
is the wrong way round. I suspect you're speaking from a US perspective. The US constitution limits the rights of government for the benefit of the people; it does not limit the rights of the people for the benefit of government.
>Google isn't spying on your private files, but does scan them when you share them publicly.
What's more likely:
A) That they make a single-pass scan over a file for various purposes
B) They make multiple, separate scans over a file at different times depending on sharing status and other factors
I'd contend that it's more likely to be A, and that it almost certainly happens as soon as the data becomes visible to Google. I'd be very surprised if they didn't scan immediately to de-dupe and detect illegal pornography, for example. Once that's a given, it doesn't make much sense to do a separate kind of scanning later based on a different set of criteria; You scan once and flag for the respective detections immediately.
Mao in 1956: let a hundred flowers bloom ... Mao a year later: but watch out for the weeds. In those days the CPC apparently also had a change in their 'terms and conditions'.
I don't know how anyone could continue using it after this.
I probably have dozens of docs and hundreds of research papers contradicting government health advice on diabetes and heart disease. These would fall under "Misleading content related to harmful health practices" since they promote a health theory which the government considers harmful.
However, I would have cancelled regardless, since the idea of automatic bans and/or content deletion based on ML models is crazy. They are obviously going to find a lot of false positives, and I can't deal with the idea of trying to speak to Google to explain that their algorithm mistakenly flagged my innocent content. In other words, even if you are the perfect citizen, there is a chance you will get flagged anyway.
Yes, this is really bizarre. I get this kind of policy for some kinds of platforms, but not Drive, Docs, Sheets, Slides, and Forms. Those are my documents, and I should be free to put whatever I want in them.
What if I just like to collect and share old conspiracy theory stuff that I know is wrong? For whimsy, historical, whatever purposes...
This policy does not apply to private files. I just want to point out based on your comment that none of what you mentioned matters unless you share the document publicly.
Based on Google's wording[1], it seems you just need to share that document with one single person; if that person flags it, at that point Google is allowed to investigate even if the document is not public.
[1] "After we are notified of a potential policy violation, we may review the content and take action, including restricting access to the content, removing the content, and limiting or terminating a user’s access to Google products."
Looks like Google wants you to enter someone's email explicitly to share something with if you don't want this policy to kick in. I suppose even in theory there isn't a way to know the intent behind whether something is meant to be shared with just a few people or publicly if "share with anyone that has the link" option is used.
Yet. Once people get used to it, it will be extended to private files. Likely they will even build it into Android and create an API to report citizens storing questionable documents.
Some commenter replied it will change nothing. I disagree (but didn't downvote) - it will change the number of people in the market for a competitor. There are competitors out there and the people who are cancelling their Drive subscriptions here are going to support them financially, building a viable rival to Drive with their dollars. More competition is one of the best possible outcomes and I fully support it. Please cancel your Google subscriptions, folks!
"Misinformation" is just another word for "falsehood" or "untruth."
Those of you claiming that "democracy" depends on authorities preventing the spread of misinformation are ipso facto saying that democracy requires the government, or megacorporate cartels with a monopoly on public speech most likely acting as proxies for the government (as Psaki made clear is happening), to define what counts as "truth" (a Ministry Of Truth if you will) and to stamp out what they've defined as "false."
It's insane, and it's amazing to me how many of you have your heads so far up your asses with partisanship that you can't see that the recent media hysteria over "misinformation" is a blatant example of the contrived "emergencies" that all totalitarian regimes in history have used to seize control over free societies.
> "Misinformation" is just another word for "falsehood" or "untruth."
That’s not sufficiently true. In fact, asserting untrue propositions is one of the easiest-to-counter forms of misinformation.
Real pros use humbuggery; of a set of n true propositions, pick a subset m to lead the audience to your conclusions and you haven’t even “lied”.
That’s why “fact checking” is such a popular way of narrative laundering: the truthiness of individual propositions alone never reveals if someone was bullshitting you.
That’s also why the courtroom maxim is “truth, nothing but the truth, and the whole truth”. Only those 3 properties in combination would exclude misinformation. (Not saying courtrooms necessarily live up to this maxim.)
I agree with the spirit of the rest of your argument.
> Real pros use humbuggery; of a set of n true propositions, pick a subset m to lead the audience to your conclusions and you haven’t even “lied”.
I've never heard the word humbuggery before, but I completely agree with the rest. Before social media we used to call that "choosing what to cover". It's also called a "lie of omission", so any censor who suppresses true information can reasonably be accused of lying (or misinformation) themselves.
As others have said, it's not new, but now, for the first time in US history, the media moguls are censoring not only their own broadcasts, but everyone's communications. Could America have ever developed as it has if the postal service or phone company had done that?
People rarely go and read the actual article so the headline must be accurate on its own or you misinform the public. Reversing or strongly altering the statement made in the headline in the actual article doesn't mean it is no longer misinformation, the damage is already done as the masses read the headline and now thinks it actually happened that way. Yet this seems to be completely acceptable even in most reputable news-sources.
The problem is not the existence or propagation of misinformation, disinformation or lies or truth. The problem is the attempt to control any of it by decree. You cannot trust anyone with the power to be the sole arbiter of truth. Everyone is human and everyone is fallible no matter how educated or credentialed. Democracy is the best of all the imperfect forms of government because it allows a plurality of opinions and convictions to exist and for everyone to freely choose among them. Governance can swing from one set of ideas to another peacefully and with the legitimacy that a majority have decided that things should be done a certain way for a limited amount of time after which we all re-evaluate the decision and can make changes if needed.
I think it's less about being factually wrong, and more about leading people to factually wrong conclusions with truthful statements.
Even just saying "X sells stock Y before event Z" imples that X knew about event Z and that it would affect the stock price of Y. People will read headlines like this and walk away assuming there was insider trading, but that may not be the case. Nothing in that example headline has to be false in order for it to spread falsehoods.
A lot of this type of misleading rhetoric often boils down to simply exploiting that many humans mistake correlation for causation, and our education system really hasn’t done enough to hammer in not confusing those two.
Sure, that's a common technique, but would you really call it misinformation? I would call it misleading. Otherwise most of the financial press is misinformation and the word becomes kind of meaningless.
(I confess I replied in a knee-jerk reaction to your set-up, rather than your main point. Sorry for that. Your main point has meat on its bones and seems worthy of further discussion.)
It's actually worse - the "truth" necessarily implies that there is but one, and that everything else is false.
Does this sound familiar? This is exactly what religious loonies say in order to take control.
Science necessarily involves keeping your own ignorance, epistemic and otherwise, in mind while dealing with things, but it's quite worrying that the West is going back on what was won with blood and sweat.
The lab leak theory was discarded as false and conspiracy thinking, now many experts believe this to be the case... What is a conspiracy one day can eventually be the truth, e.g. the Tonkin Incident.
I am not sure about the US, but here in Germany most experts didn't say it was a conspiracy theory, but that those who claim this is the truth lack the data to back it up.
I could also argue that there is an invisible unicorn orbiting the solar system. As long as there is no real proof for it, we have to accept that it is just a theory. And the more facts align with my theory, the more motivation there should be to check my theory by trying to disprove it.
SARS-1/MERS was proven to stem from bats in the same region, so assuming that origin was more in line with known/knowable facts than a lab leak theory. When the facts change, theories change; that is science.
Who wasn't allowed to discuss it, and by whom? Again, I am not from the US, but afaik it wasn't helpful that the "CHINA-VIRUS" faction of your political spectrum (which is quite frankly not known for their truth-seeking behavior) were the first ones who really wished this to be true.
Serious scientists have to remain open to all possibilities no matter who wants them to be true, but it really doesn't help if there is an irrational coloration to it, including people being assaulted in the street because of how they look.
Societies with a calmer political climate can react calmer to things. Which is why the German Fauci equivalent explained when this theory first came up "sure that would be possible — it just doesn't seem plausible by our current information".
And the idea that "you are not allowed to say that" seems just a bit... weird to me, given that this was a hot topic many people discussed. Scientists explained on public television why they don't deem it likely, which is pretty much the opposite of "not being allowed to discuss it". It just happened that the side which believed it had no real evidence that could have been discussed, so at one point the discussion found its natural end without any new data. I remember the situation back then: people wanted this theory to be true really hard, without any supporting facts. Sure, it being true was an option, but people weren't calm about it; they were nearly desperate, as if the only way they could make sense of the pandemic was to blame it on some act of malice by some (evil) actor.
Maybe it is because I live in Germany, but I am quite frankly allergic to this kind of behaviour. It has caused genocides before (and probably will do so in the future). When you want some comforting truth or some story to be true so much, you stop caring whether it is actually true or likely; you are lying to yourself. And when you lie to yourself just hard enough, everybody is able to become a monster.
It was allowed to be discussed publicly, but it was labeled a racist, fringe, debunked conspiracy theory, and that sent all the signals needed to chill scientists from using their platforms and expertise to draw attention to this idea. For example, when the head of the CDC mentioned it during a hearing he was ridiculed. Largely on the basis of published opinions, social media companies were pushed to limit and censor information, which Twitter and Facebook did; for example, the whistleblower that appeared on Tucker Carlson's show.
of course there were a few brave scientists that spoke the truth of the viability and likelihood of the lab leak hypothesis, but that only proves how severe the suppression of discussion was.
People let their political opinions dictate what discourse is acceptable and what is not. If one imagines that one's political opponents "wished it to be true", and seems to think that even labelling the virus as from China is some sort of wrongthink, then one can be happy to pick and choose whichever truth to accept. Scientists both here and abroad were able to dismiss the lab leak as not plausible based on lack of evidence, but the same thing could be said for wildlife-to-human contact. Fauci said that based on history a lab leak is unlikely, in spite of the fact that lab leaks had occurred in the past.
I personally have no doubt that China's messaging was intended to suppress support for the lab leak, and they have succeeded in avoiding any pressure to come forward with the truth about the activities of that lab in Wuhan. I also think that there is an entire field of scientific study that needs much more regulation and discipline, and it has also avoided any significant scrutiny.
I will await the discussion from mainstream virologists on tv and in medical and scientific journals demanding transparency and reform. but I will not hold my breath.
>I will await the discussion from mainstream virologists on tv and in medical and scientific journals demanding transparency and reform.
That exists. It's exactly what people mis-read as scientists supporting the lab leak hypothesis. Instead they support a more open attitude by China regarding investigations, which is neither evidence nor proof of either an artificial origin for the virus, nor for the lab leak theory.
Dude, come on. FOX and Trump were running with this theory as fact, using it as a pejorative against the Chinese government, and kicking the shit out of Asians across the country while we were in the grips of a pandemic. Instead of focusing on things like "What can I as a citizen/political leader do to help the situation", it was just more hate-mongering.
Yes, the truth is important, and if the virus came from a lab leak, it should be known and dealt with. FOX and company weren't journalists looking for answers. They were hatemongers giving their viewers an "other" to blame for something out of their control, while telling them at the same time that the virus was fake and masks are tyrannical.
No one said you couldn't discuss it. No one was jailing you for talking about it. Stop the fake oppression.
Who’s the hate monger here? Maybe some of the people you are referring to understood the simple relationship between location of the lab and the outbreak as being valid evidence as well. And you call them “hate-mongers” for thinking so.
To answer your question, not me. To clear my point up, I am referring to the people putting the "Wuhan Flu" as the headline for months, Trump and Fox and the alt-right. No matter how valid the possibility, the discussion wasn't being had in good faith. Like, who gives a shit while we're dealing with the problem? It's like arguing over whether your stove had a gas leak as your house is burning down, except not only are you having the conversation at the wrong time, you're encouraging your friends to go kill the stove-maker's friends.
Let's just be clear on this: The "opinion" section of Fox "news" is one of the most dangerous organs of communication in society today.
I don't know if you're intentionally misreading my post, or if I didn't make it clear enough - although I did clarify in a reply.
The hate-mongering is from the opinion shows that made themselves the heroic, oppressed "real-fact" people by constantly talking about it, by making it a conspiracy when it wasn't really, it just didn't matter at the time. Read my other reply, I'm not going to type it all out.
No, it isn't hate-mongering to wonder if the virus came from a lab. It's hate-mongering to play the victim, to scream "conspiracy!" as a strawman to millions of people, and have people going batshit crazy about it when it doesn't really matter. In other words, it's not the question that is the problem, it's the framing that Fox and Co. put around their narrative.
Don't even talk about "the left". It's not a monolith. I didn't talk about "conservatives" writ-large, my point is against the Fox op-ed cult - a very specific subset of the right. Please limit your assumptions as much as possible.
And really, it makes zero sense to make sweeping accusations against everyone on one side of the political spectrum. "The left" etc. is something to avoid. Much as I think the federal GOP is actively working against American values for the sake of its own power, I don't blame every conservative voter for their lack of options, nor do I think every Republican voter is literally Mitch McConnell.
What you are describing is just another manipulation of words by media that has caught you and others. Most scientists believe the virus had a lab origin and couldn’t have been natural, but whether that is due to a lab leak is complete speculation based on the fact that it likely has a lab origin.
You need a cite for that, and some definition of 'scientist' that actually lends credibility.
"Couldn't have been natural" is especially a stretch, given that SARS and Bird Flu manifested naturally in the same part of the world a few years prior.
Oh it most likely did jump from animals to humans. Lab outbreak theory does not refer to the virus being artificial, but on that the jump occurred inside the lab due to bad measures and then leaked outside.
GP referred to the virus being unnatural, whereas there is very strong evidence that it is natural.
As to a natural virus being accidentally leaked from a lab, there's no evidence for that scenario, except for the fact that the first major SARS-CoV2 outbreak happened in the same city. For what it's worth, this is not as implausible a co-incidence as might be claimed, since it is not uncommon for a virology lab to study viruses endemic to the region it is located in.
"For what it's worth, this is not as implausible a co-incidence as might be claimed, since it is not uncommon for a virology lab to study viruses endemic to the region it is located in. "
The bat virus that seems to be the progenitor of SARS-Cov-2 was isolated in caves in a different part of China, quite far from Wuhan. Plus, bats hibernate during the time that the spread began.
It's estimated that ~50 years passed since the bat virus (RaTG13) and SARS-CoV-2 diverged.
So it's certainly not a recent transmission, which is why an intermediate host is proposed, which could also be bat, or some other host species. Coronaviruses are endemic in that region of China.
And while it is the sole arbiter of truth, the moment something occurs that truth starts decaying via entropy. Photons fly away at light speed, never to be seen by us again. The energy that remains starts mixing in ways that cannot be reversed. You quickly arrive at scenarios where more than one initial state could lead to the current state we can measure.
And worse we can never exist in a system where we capture and keep this information. You either alter the 'experiment' by measuring it, aka chaos theory. Or, you bring about the premature heat death of the universe.
There are facts and there is the context of the facts and the impact those facts have on people.
One can argue the news should just report the facts, but they add additional context and information to explain why the facts matter.
Verifying the facts / truth is objective and clear (e.g. it rained 2 inches today). Determining whether the impact is properly reported (e.g. “devastating” flooding occurred) is murky. And the flooding could have been devastating - to one family, to a village, to a school. So it’s not untrue, it’s just more subjective as you move from numbers to impact. And the news cares more about reporting impact than facts and will tailor the narrative to explain the impact to their audience.
Look at the news service all sides. You can figure out the facts (e.g. a law was passed) then see what each side is saying about the impact. The impact may be true for both sides, just presented in a vastly different way.
There is a lot of subtlety hidden in "a ^ ~a is a contradiction", because physical reality is more complex than it appears.
For example, one of the stunning consequences of Special Relativity is that there exist situations in which an observer says that event A happens before event B, and another observer says that event A happens after event B, and both are correct. Nature does not appear to be at all bothered by this "contradiction", however, and the world works just fine. Even more puzzling "contradictions" arise in quantum mechanics.
For all we know, it appears that reality is indeed dependent on the observer at a deep level. Maybe there is an even deeper level at which statements such as "a ^ ~a is false" hold, but so far nobody has been able to discover any.
I think the real difficulty is in human language. "a ^ ~a is a contradiction" is still perfectly applicable but apparently requires a "for observer A" clause. To supply all of the clauses necessary to make a completely unambiguous statement would be way too long to be humanly comprehensible. It makes me think of the Carl Sagan quote about needing to invent the universe before you can make anything "from scratch".
I'd argue that "event A happens before event B" is objectively true if and only if event A happens in event B's past light cone, and so there's no actual contradiction. The only weirdness you get is that if events A and B are space-like separated, none of the statements "event A happens before event B", "event A happens after event B", or "event A happens at the same time as event B" are objectively true. But you wouldn't say it's contradictory or paradoxical, just that it's a partial order rather than a total order.
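The light-cone criterion above can be sketched in a few lines of Python (an illustrative toy, not anything from the discussion itself), using units where c = 1 and the metric signature (-,+,+,+): the sign of the Minkowski interval is the same for every inertial observer, so it tells you whether a pair of events has an observer-independent ordering.

```python
def classify(event_a, event_b):
    """Classify the separation between two events (t, x, y, z).

    The invariant interval is s^2 = -(dt)^2 + |dx|^2 (with c = 1).
    Its sign is the same in every inertial frame:
      s^2 < 0  -> time-like:  all observers agree which event came first
      s^2 > 0  -> space-like: observers can disagree on the ordering
      s^2 = 0  -> light-like: the events lie on each other's light cone
    """
    dt = event_b[0] - event_a[0]
    spatial = sum((b - a) ** 2 for a, b in zip(event_a[1:], event_b[1:]))
    s2 = -dt ** 2 + spatial
    if s2 < 0:
        return "time-like"
    if s2 > 0:
        return "space-like"
    return "light-like"

# B one second after A, same place: ordering is absolute.
print(classify((0, 0, 0, 0), (1, 0, 0, 0)))  # time-like
# B too far away for even light to connect: no canonical ordering.
print(classify((0, 0, 0, 0), (1, 5, 0, 0)))  # space-like
```

Only in the time-like (and light-like) cases does "A happens before B" have an observer-independent truth value, which is exactly the partial-order point: space-like separated pairs are simply incomparable, not contradictory.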
You're simply misrepresenting what the actual statement with a truth value is in these cases. The fact that events happening close to each other temporally from fast-moving inertial frames can't be given a canonical temporal ordering doesn't undermine the existence of truth. A happens before B in the inertial reference frame of one observer and B happens before A in the inertial reference frame of a different observer are both true statements, and the converse of each is a false statement. The insight of special relativity is that there exists no God's eye reference frame independent of inertial reference frames. Nothing moves against some eternal static backdrop serving as a coordinate anchor. Things only move with respect to other moving things. That is in and of itself also a true statement, and the converse is false. Non-contradiction still holds everywhere. That you need further details and context to determine the truth of a statement doesn't mean it has no defined truth value.
The Special Relativity example you brought up is not a good example because it's about observation, not the truth of the order of the events.
Reality is not dependent on the observer, but we are observers, so that's why everyone thinks they have their own version of the truth. We are the weak link.
>Nature does not appear to be at all bothered by this "contradiction"
There is no contradiction. Our intuition for what 'A happens before B' means and implies is just bad/incomplete, as special relativity models.
>Even more puzzling "contradictions" arise in quantum mechanics.
I would bet a couple years of wage that what seems like contradictions will eventually be cleared up with some non-intuitive models, just like with special relativity.
I imagine the 'changes based on observer' problems of quantum mechanics will be more understandable once we decide what an observer is ( goddamnit people from physics, you don't add such a highly abstract variable to your model without giving it some good definition x( ), with some better experimental apparatus or with some deeper models of reality.
>Please then tell me what is the one true religion? :)
I dunno. I'm inclined towards none of the ones I know a little about being true since they really like to ask you to 'trust me bro, feel it in your heart' instead of just giving you good reasons to believe them.
Newton gave us far better arguments for universal gravitation than most people do for their religions, and he was ultimately wrong/incomplete.
>Humans are not logical systems.
Yes, that is a bug in the humanity system. Generally we try to diminish its effects when truth-judging (or maybe probable-truth-approximation-judging if you care about your epistemology). Recognizing this bug is useful to try and diminish its effect.
>That is maybe true with science, but not with living breathing things.
Are you saying living things have some irreducible complexity that's unexplainable or undetectable by scientific methods?
>Just one example: What is the best system to live in?
It's a pretty bad example. There's necessarily an answer which, if nothing else, is the best for the largest number of people. People aren't infinitely variable.
>No i said Humans are not logically describable systems.
Arguably they're equivalent statements.
>Having Slaves because it's an easier life for the larger population for example?
Yes. Just because you don't like it doesn't mean it couldn't be the "best". By the way, historically, slave-based economies needed the majority of the population to be slaves. It makes sense, since the slaves are consuming their own production and are expending more energy than the non-slaves.
>Try to mathematically proof that ;)
Humans don't grow arbitrarily large or small, nor do they grow arbitrary numbers of limbs, nor have arbitrary numbers of bones. A person chosen at random from anywhere in the world isn't equally likely to hold any opinion from the infinity of opinions they could conceive of. For example, I could confidently say no person has ever simultaneously believed that Google should be subject to more regulation and that the current pharaoh is a living deity.
If you post like this again we will ban you, regardless of how wrong another commenter is or you feel they are. We've had to warn you about this before. Please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here.
Not sure about the intent of Google (and as they say, the road to hell is paved with good ones), but you’re reframing. It’s not a matter of censoring deviations from orthodoxy, rather one of removing disinformation and demonstrable falsehoods, often used for propaganda and to set up victim scenarios.
Those engaging in widespread censorship create disinformation and falsehood, by omission.
In history, it has always been those engaging in widespread censorship who turn out to be disastrously and/or maliciously wrong.
The science now being censored has become so well-established, that at this point, Google/YT et al, has and will delete and suppress the sharing of peer-reviewed science published in mainstream journals and indexed in PubMed.
That 100% ends their credibility. I deplore anyone expecting an explanation as to why.
Anyone yet standing by such incredulous, irresponsible and/or actively-malicious action, reveals themselves as same, for all to see.
Uh, I’ve read plenty about CIA techniques to overthrow undesired governments and a favorite trick is propping astroturfing campaigns claiming the wildest tripe and alt-truth against the soon-to-be “liberated”.
So please, come up with proof or yours is just another case of trumped up claims
I honestly can't make heads or tails of what you've said here, and am left wondering if you're a bot?
I made a statement, on-topic, about the nature and history of censorship, Google's credibility as arbiter of truth, and how naked they appear now as a result of how extreme they've become in that self-appointed role.
If you're not a bot, maybe try reading it again with that understanding?
In fact, they are and they do. (Ok, not you, unless you run a pharma company.) The best-known rat poison is warfarin, an anticoagulant used worldwide as a medicine under various names.
As to your main point, how do you, or others on SM, define misinformation? Do you believe the shadowy folk (qualifications unstated) who pontificate at FB, Twitter, Google, and Wikipedia? That has to be absurd and shocking.
Anyone who wants to (and wants to seems to be the issue), can see the fatuity of this after searching for overturned consensus views as supported by the SM platforms mentioned.
The search should include the peer-reviewed literature.
A most recent volte-face relates to Covid origin. In April it was undoubtedly of natural origin, as we were authoritatively informed by the Lancet. Now, in July, the previously derided conspiracy theory is taken seriously by people able to make a judgment. Who does one believe: the unknowns at SM, or someone like Peter Palese (https://labs.icahn.mssm.edu/paleselab/), who was among 27 scientists who earlier signed the Lancet letter denouncing as “conspiracy theories” the notion that the coronavirus could have escaped from a lab, or even be man-made? He now disavows that claim, as do many others of similar status.
Absent a specific definition of "misinformation" from Google's lawyers -- who likely authored or at least reviewed this page -- we are left to consult the dictionary.
The dictionary defines "misinformation" as "information that is incorrect". There is no requirement of intent. The information may or may not be misleading.
That is, the term "misinformation" may apply to any incorrect information regardless of intent.
Is it possible to have incorrect information ("misinformation") that is not intended to mislead and does not mislead (~ "misleading information")? Your answer: ___
Is it possible to have correct information (~ "misinformation") that is intended to or does mislead ("misleading information")? Your answer: ___
No need to answer this question. The technodemocratic complex has already answered it. It's tawdry, but the Hunter Biden laptop was "misinformation", because it was intended to mislead people away from Biden.
Doesn't matter that it was correct, the evidence was good, and it was published by major newspapers. The technical elite agreed with the political elite, and it was struck from the internet.
Amazingly, when it was raised at the presidential debate, real-time polling suggested that the majority of the populace had no idea what was being talked about. The suppression was effective.
>It's insane, and it's amazing to me how many of you have your heads so far up your ass that you can't see what is obvious and how are you so dumb bla bla bla
Clearly, since this is the most upvoted comment in the entire thread, you're not the only freethinker in an ocean of sheep.
I don't know, this lecturing tone is quite aggravating.
"Ministry of Truth" references are the opposite of illuminating; they're a reflexive association of any attempt to arbitrate truth with totalitarian oppression.
The fact is there is no such thing as rule of law without lots of institutions and individuals (private and public) taking responsibility for arbitrating truth. And there's no freedom of speech without allowing them to do it.
This is not true at all. The rule of law, within the confines of the judicial system, does NOT define truth. The system weighs evidence and determines whether the preponderance of evidence indicates guilt beyond a reasonable doubt.
Someone could murder someone, but due to lack of evidence they are found “not guilty”. But that’s not the same as “the truth is this person did not commit murder”. It’s simply a finding that the state didn’t prove its case.
> The system weighs evidence and determines whether the preponderance of evidence
A formal process of examining evidence marshaled in the service of arguments supporting/refuting a claim sounds practically equivalent to most forms of determining truth. Because it is.
Not only that, but there are segments of the law where truth is explicitly invoked, ie "truth is an absolute defense against defamation."
If you believe so strongly in the ability of institutions to ferret out untruth, then surely you’d be ok with the application of such knowledge. We have the ability to do real-time analysis of communications such as email, text and cell phone calls, so why not just delete any offending words from those communications in real time so that lies never spread? That way you have perfect freedom of speech; it’s just that your phone calls will be edited in real time so the other person never hears the banned words. Better yet, instead of just deleting them, it could reword them in a simulation of your voice so that they only ever hear the accepted truth.
That depends how you draw the lines around the selfhood of Google. Did 2004 Google have the same intentions as 2021 Google?
Organizing information and making it useful sounds altruistically anti-misinformation. And maybe the search engine of the past was a net good against misinformation. And then maybe the ship attracted folks with less than ideal intentions over time, making your premise more and more true. Maybe 2031 Google will be much worse.
So much of the populace has a digital IV drip of constant connection to algorithmic political slogan propaganda machines. If people consume propaganda all day everyday you can't blame them for being highly propagandized.
What else can we possibly expect than this to happen?
I honestly see no evidence that open society is compatible with modern digital communication networks. An authoritarian contagion almost seems an emergent natural property of the network.
The cyberpunk religion I used to believe in couldn't have been more wrong.
I just take solace that I was born and lived at the perfect time and place in history. I got to be part of this very short-lived religion that held the internet to be the greatest thing for liberalism and freedom ever created, when it is really the complete opposite.
I can even zoom out enough to see the historical beauty in such a historic mistake.
Hard to live in more interesting times than right now.
I think we should be wary of enforcing a particular truth or narrative. I think society should be vigilant against government punishing people for what they think or say - that is anathema to the freedom guaranteed to us in the 1st Amendment.
However, your analysis doesn't acknowledge the very real war on truth being fought daily on a multitude of fronts. There are many actors, and many reasons they are doing it.
Some people just want to watch the world burn (troll armies, 4chan-like people, alt-right, etc). Some people want to keep ignorant people in their place, and pit people against each other (cable "news", especially Fox).
People are being exposed to misinformation (deliberate lies about the state of current affairs) more easily than ever before.
It's like saying we have a freedom to hold whatever rocks we find on the ground, then someone finding a block of uranium. Sure, we have that freedom, but there is a very real danger and we need to figure how to deal with the more dangerous aspects of it. Censorship/government enforcement is a heavy hammer and I don't think it's the answer - but that doesn't mean there isn't a problem.
Indeed. When the founding fathers created the system the US has today, it was entirely intentional that powers be separated and check each other, that the House, Senate, and President were required to pass legislation, etc. The founding fathers recognized that the default is a slide towards authoritarianism; that’s just human nature. The system works when no one person can gather enough power to take over.
That answers the question many ask about historical examples: "How could they not see they were being manipulated by totalitarians? Why didn't they do something?" The same reason our current society thinks it's ok for the government to ban people for "misinformation" (the latest directive from the White House demands internet-wide bans of whoever is accused of "misinformation" - without any due process of course, just on the word of the White House, confirmed and implemented by Facebook's faceless army of underpaid, overworked "moderators"). Yes, there are people who are shocked and appalled by it - and they are speaking up. But society largely either ignores them, or dismisses them as paranoid naysayers or partisan operators. That is going to cost us. And our descendants will wonder: "How could they not see they were being obviously manipulated into ceding control over their speech to the government-big tech oligarchy?!" How indeed.
From the top of the post: "We need to curb abuses that threaten our ability to provide these services."
Google has lawyers to minimize its legal liability. Turning some vapid terms of service clause into a slippery slope argument for the onrush of totalitarianism is taking things too far.
> democracy requires the government, or megacorporate cartels with a monopoly on public speech most likely acting as proxies for the government
Who says that individuals or business institutions making judgments and using their resources to arbitrate truth -- especially institutions like Google whose business is about information -- are acting "as proxies for the government" instead of taking responsibility for honestly arbitrating truth as they understand it?
And the idea that any attempt to arbitrate truth is equal to totalitarian oppression is exactly wrong. Every institution and individual has to do it. Certainly any society based on rule of law does; you can't apply law without a consensus about what the facts to apply it to are.
Some will be giving their honest assessment and some won't. The latter may be a less serious problem, but it's still pretty damn serious.
Imagine a society that censors the disinformation/propaganda that trans women are women out of a sincerely held belief that this is indeed disinformation. Oh wait, Russia already does that.
Why would American attempts at suppression be any better? Or any better than what happened during the McCarthy era?
LOL "McCarthy era?" -- like, what are you actually referring to here? Is this just bad thing word salad? The primary problem with the McCarthy era wasn't censorship, it was actually that McCarthy's BS accusations about communism got such wide play and buy-in when they should have been squashed by responsible people.
"American" attempts would be better... because in most cases this appears to be voluntary and where the state is involved it appears to be officials actually persuading some private actors on the merits of the idea rather than compelling people by force.
If I flag something on Twitter or Facebook as violating site rules, or an FHEO official tells Craigslist they're aware of posts violating housing discrimination rules, that doesn't make any of these sites my actor or state actors. It makes them people who care about either keeping that stuff off their site or keeping the law.
You clearly didn't read the links I posted. Please read the links I posted.
Also, your logic falls apart considering the courts ruled last year that Trump wasn't allowed to block people on Twitter. You "flagging" or "blocking" something is very different from the government doing it or pressuring companies to do it. That does make them state actors, as has been ruled multiple times in the past.
Nor are we talking about stuff which "breaks the law" here. By social media's own proven standards, they kept the lab leak theory off their platforms for 1.5 years. They are censoring women who oppose men competing in their sports or entering their private areas like spas and bathrooms. Palestinians get censored under the guise of "anti-semitism" and Israelis/Christians get censored under the guise of "islamophobia", depending upon which political side has power. If we had such big social media back in the 2000s, they would have censored anyone who spoke out against the war or pointed out there weren't any WMDs. If you spoke out against the Syrian gas attack hoax, you would get censored too.
From a Supreme Court of the United States opinion a couple of months ago:
> "But whatever may be said of other industries, there is clear historical precedent for regulating transportation and communications networks in a similar manner as traditional common carriers. Candeub 398–405. Telegraphs, for example, because they “resemble[d] railroad companies and other common carriers,” were “bound to serve all customers alike, without discrimination." ... "Internet platforms of course have their own First Amendment interests, but regulations that might affect speech are valid if they would have been permissible at the time of the founding. See United States v. Stevens, 559 U. S. 460, 468 (2010). The long history in this country and in England of restricting the exclusion right of common carriers and places of public accommodation may save similar regulations today from triggering heightened scrutiny—especially where a restriction would not prohibit the company from speaking or force the company to endorse the speech." ... "The similarities between some digital platforms and common carriers or places of public accommodation may give legislators strong arguments for similarly regulating digital platforms. [I]t stands to reason that if Congress may demand that telephone companies operate as common carriers, it can ask the same of “digital platforms.”" ... "For example, although a “private entity is not ordinarily constrained by the First Amendment,” Halleck, 587 U. S., at ___, ___ (slip op., at 6, 9), it is if the government coerces or induces it to take action the government itself would not be permitted to do, such as censor expression of a lawful viewpoint. Ibid. Consider government threats. “People do not lightly disregard public officers’ thinly veiled threats to institute criminal proceedings against them if they do not come around.” Bantam Books, Inc. v. Sullivan, 372 U. S. 58, 68 (1963). The government cannot accomplish through threats of adverse government action what the Constitution prohibits it from doing directly. See ibid.; Blum v. Yaretsky, 457 U. S. 991, 1004–1005 (1982). Under this doctrine, plaintiffs might have colorable claims against a digital platform if it took adverse action against them in response to government threats. The Second Circuit feared that then-President Trump cut off speech by using the features that Twitter made available to him. But if the aim is to ensure that speech is not smothered, then the more glaring concern must perforce be the dominant digital platforms themselves."
> "As Twitter made clear, the right to cut off speech lies most powerfully in the hands of private digital platforms. The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions. This petition, unfortunately, affords us no opportunity to confront them."
A good democracy depends on its citizens being able to discuss issues with one another. If we disagree on basic incontrovertible facts, such as where someone was born, or what the measured efficacy of a treatment for a disease is, conversations with each other do not have value and we cannot have a democracy.
That's not accurate at all. Look at the misleading medical information definition from Google:
"Misleading content related to harmful health practices: Misleading health or medical content that promotes or encourages others to engage in practices that may lead to serious physical or emotional harm to individuals, or serious public health harm."
Not only do they use the term misleading to define the term misleading, they clearly state any information that could cause serious public health harm is misleading. When people are screaming about vaccine misinformation, they are often talking about discussing side effects of the vaccine, which causes hesitancy and therefore harm. If pressed for an example, they will pick an absurd claim like "5G caused covid", but not the typical claims that the covid vaccine has reportedly killed thousands and injured many more than that.
The labels disinformation, misinformation, misleading, and potentially harmful are very Orwellian, as many take them to mean false information, but in fact these labels can be placed on perfectly true statements that are simply dissenting.
Ok, I'm not pro-censorship, but I'll bite.
Firstly, this is a content policy, which is separate from actual enforcement. I doubt that Google enforces this proactively; rather, they want the ability to shut down what has now been dubbed 'fake news'.
Secondly, reading this with an open mind, their focus seems to be on dangerous falsehoods: think things that can get people in prison (double voting?), health issues ('bleach enema for 3-year-olds cures autism', 'vitamin D prevents covid', 'covid is an invention of the deep state', ...), or manipulated information that claims to represent another's view ('manipulated media ... that may pose a risk of egregious harm', e.g. I'd imagine a fake NYT article saying that masks are harmful).
Third, this is a private platform, not a government body enforcing its views.
Last, there are some dangerous lies. In Germany it's illegal to state the Holocaust never happened. You can discuss details and question parts of the narrative, but not doubt that it ever happened. You can't have neo-Nazis claiming that it's all a big conspiracy and that others were at fault - which, if you know your history, is one of the very ways the Nazis justified WW2. This law was an essential tool to counter similar tendencies after Germany lost WW2. In contrast, in Poland it's illegal to state that Poles contributed to the Holocaust, which is a true fact that is just politically unwelcome. So the issue here is not whether there is a truth, and also not whether lies can be dangerous (bleach enemas to get rid of your kids' autism!), but rather who arbitrates truth, and how. A government/company shouldn't be able to shut down every dissenting opinion, but I can't believe anyone honestly believes there should be no way to challenge and limit the spread of dangerous lies. You can think and discuss what you want, but if you broadcast your views to a wide audience you should also be held to a higher standard. The real issue is who does the accounting.
I wish there were a way to indicate disagreement without hiding comments. Having "dislike" and "disagree" be the same arrow causes problems like hiding this.
I'm going to take issue with your "dangerous lies" classification as not being a useful one. Every bit of censorship can be argued to be censoring dangerous lies: from Russian LGBT censorship, to China's censorship of things undermining the narrative of Chinese greatness, to older censorship of criticism of the king (in all his chosen-by-God glory).
"Every bit of censorship can be argued to be censoring dangerous lies."
This. The former Czechoslovak Socialist Republic and its secret police (StB) did not say "we are evil and we want to suppress information that contradicts whatever we need you to believe". It was, of course, a necessary struggle against Western "ideo-diversionary centers" that were sowing lies among the naive young population for nefarious purposes.
Every censorship system will cloak itself in righteousness and necessity. It has been tried hundreds of times. If anybody still accepts this argument at face value, they are likely ignorant of history.
> You can think and discuss what you want, but if you broadcast your views to a wide audience you should also be held to a higher standard. The real issue is who does the accounting.
It is exactly because neither you nor anyone else can come up with an acceptable solution to your last point that it is surely a lesser evil to have free speech, however "dangerous" it may turn out to be, rather than have a benevolent accountant with a speech monopoly turn on us.
> Firstly, this is a content policy, this is separate from actual enforcement. I would doubt that google enforces this proactively, rather they want a possibility to shut down what has now been dubbed 'fake news'.
Discretionary enforcement power is part of the problem, not a mitigating factor. The policy itself simply gives them carte blanche to remove content with which they disagree:
> When applying these policies, we may make exceptions based on artistic, educational, documentary, or scientific considerations, or where there are other substantial benefits to the public from not taking action on the content.
Even if we give Google the benefit of the doubt and grant that initial enforcement could be judicious, wise, and a net positive for society (pretending like "a net positive for who?" is an easy question to settle), "substantial benefits to the public" is not a limiting principle.
History has taught us that without real, adversarial constraints, this power will always be mishandled and abused. Eventually, Google will make mistakes. In their zeal to prevent misinformation and harm, they will bury a promising drug therapy and it will cost lives. They will disallow evidence of a crime, and they will make exceptions that happen to benefit their biggest markets.
They have the right to do this, but it is surely wrong for us to delegate our judgement to them.
I'm sorry to see one of the more nuanced comments downvoted. Looks like the hacker news free speech anarchist task force is busy today.
Funny how they demand radical free speech, but downvote/flag anyone who disagrees, while using a forum where comments with too many downvotes are hidden automatically.
Here is a partial list of scientific-consensus "deniers" who were proven right, of the kind this sort of censorship will silence, whether directly by big tech or through self-censorship:
1. Ignaz Semmelweis, who suggested that doctors should wash their hands, and who eliminated puerperal fever as a result, was fired, harassed, forced to move, had his career destroyed, and died in a mental institution at age 47. All this because he went against consensus science.
2. Alfred Wegener, the geophysicist who first proposed continental drift, the basis of plate tectonics, was berated for over 40 years by mainstream geologists who organized to oppose him in favour of a trans-oceanic land bridge. All this because he went against consensus science.
3. Aristarchus of Samos, Copernicus, Kepler, Galileo, brilliant minds and leaders in their field all supported the heliocentric model. They were at some point either ignored, derided, vilified, or jailed for their beliefs.
All this because they went against consensus science.
4. J Harlen Bretz, the geologist who documented the catastrophic Missoula floods, was ridiculed and humiliated by uniformitarian "elders" for 30 years before his ideas were accepted. He first proposed that a giant flood raked Eastern Washington in prehistoric times, and suffered ridicule and skepticism until decades of further research proved his thesis. All this because he went against consensus science. He was eventually awarded the Penrose Medal.
5. Carl Friedrich Gauss, discoverer of non-Euclidean geometry, self-censored his own work for 30 years for fear of ridicule, reprisal, and relegation. It did not become known until after his death. Similar published work was ridiculed. His personal diaries indicate that he had made several important mathematical discoveries years or decades before his contemporaries published them. The Scottish-American mathematician and writer Eric Temple Bell said that if Gauss had published all of his discoveries in a timely manner, he would have advanced mathematics by fifty years.
All this because he went against consensus science.
6. Hannes Alfvén, a Nobel plasma physicist, showed that electric currents operate at large scales in the cosmos. His work was considered unorthodox and is still rejected despite providing answers to many of cosmology's problems.
All this because he went against consensus science.
7. Georg Cantor, creator of set theory in mathematics, was so fiercely attacked that he suffered long bouts of depression. He was called a charlatan and a corrupter of youth and his work was referred to as utter nonsense.
All this because he went against consensus science.
8. Kristian Birkeland, the man who explained the polar aurorae, had his views disputed and ridiculed as a fringe theory by mainstream scientists until fifty years after his death. He is thought by some to have committed suicide.
All this because he went against consensus science.
9. Gregor Mendel, founder of genetics, whose seminal paper was criticized by the scientific community, was ignored for over 35 years. Most of the leading scientists simply failed to understand his obscure and innovative work.
All this because he went against consensus science.
10. Michael Servetus discovered pulmonary circulation. As his work was deemed heretical, the inquisitors confiscated his property, arrested, imprisoned, and tortured him, and burned him at the stake atop a pyre of his own books.
All this because he went against consensus science.
11. Amedeo Avogadro's atomic-molecular theory was ignored by the scientific community, as was similar later work. It was confirmed four years after his death, yet it took fully one hundred years for his theory to be accepted.
All this because he went against consensus science.
Most humans are inherently closed to disruptive ideas that challenge their basic world view.
It may be a survival trait, in fact. Not all of us are brilliant mavericks, so we have to rely on group think and past practices. Some disruptions are indeed harmful and must be suppressed firmly for the good of the species, for example inbreeding with siblings and parents.
What's happening right now with the attempted suppression of "hate speech" and "dangerous misinformation" is a classic imposition of majority consensus on a restless, information-empowered population.
It is an attempt to regain control. Never before has humanity had so much decentralized power to disseminate information; anyone can quickly and easily put their ideas out to a vast audience.
Probably a certain amount of control is necessary to maintain order, but obviously, how much is still up for debate.
The suppression of innovation that you have outlined is likely a small fraction of the total. How much might we have advanced, had these people's ideas not been suppressed? How much human suffering might have been averted?
Or would we have merely developed the tools to destroy ourselves and our world that much sooner?
Exactly. What is misinformation? What you define as misinformation right now could be information in a few months.
Entire discussions on things like ivermectin to treat covid were banned from YouTube as “misinformation”. Now some studies are coming out that suggest it might not be. At least it deserves a discussion, but it has been censored as misinformation. So we lost an entire year of treating people around the world because SOMEONE deemed ivermectin misinformation??
What about the Wuhan lab origination? Last year that was derided as conspiracy theories and misinformation, but now it’s basically accepted as fact.
Who determines what is misinformation? What if Trump wins again, or someone worse like Tom Cotton? And they set up a government panel that decides what is misinformation, and companies like Google need to follow it? Who determines who is spreading misinformation now?
10 years ago a low fat diet was deemed “heart healthy”. Now that is classified as misinformation. Should low fat diets be censored by Google?
This is the problem. What you thought was misinformation is only really a few studies away from being flipped around.
> What you thought was misinformation is only really a few studies away from being flipped around.
To be clear, that matters because by censoring "misinformation" you are killing these studies, therefore we are by definition stuck wherever we are right now.
New science and "misinformation" as defined by our digital overlords intersect.
> This is the problem. What you thought was misinformation is only really a few studies away from being flipped around.
The big tech companies are destroying the value and credibility of our information ecosystems, by treating the first draft of history and science like it's the last. The day after an event [0], elite opinion becomes "information" and anything that contradicts that is "misinformation."
[0] I meant this as a slightly comical overstatement, but in many cases it's quite literal.
"People in the U.S. seem able to recognize that China’s censorship of the internet is bad. They say: “It’s so authoritarian, tyrannical, terrible, a human rights violation.” Everyone sees that, but then when it happens to us, here, we say, “Oh, but it’s a private company doing it. Google is entitled to do what it wants.” What people don’t realize is the majority of censorship in China is being carried out by private companies.
Rebecca MacKinnon, former CNN bureau chief for Beijing and Tokyo, wrote a book called Consent of the Networked that lays all this out. She says, “This is one of the features of Chinese internet censorship and surveillance—that it's actually carried out primarily by private sector companies, by the tech platforms and services, not by the police. And that the companies that run China's internet services and platforms are acting as an extension of state power.”
The people who make that argument don’t realize how close we are to the same model. There are two layers. Everyone’s familiar with “The Great Firewall of China,” where they’re blocking out foreign websites. Well, the US does that too. We just shut down Press TV, which is Iran’s PBS, for instance. We mimic that first layer as well, and now there’s also the second layer, internally, that involves private companies doing most of the censorship."
We aren’t talking about something as basic as truth and untruth.
This is integer and float vs. malformed code.
Misinformation is code. It’s designed to specifically take advantage of gaps in our society and spread in such a way that makes it harder to challenge it.
In a free society, people must be allowed the ability to discern for themselves which information is useful, true, or otherwise. We do not need a state or its proxies to tell us which information is correct.
Google has a monopoly on searching for information, which makes it as different to me as a nuclear bomb is to a butter knife. They need to play by different rules.
Not only does Google not hold a true monopoly on search (they hold top mindshare, but there are several other easily available search engines one of which I use frequently), this particular brouhaha is about hosting, not search, on which they don't hold anything that could even be reasonably confused with a monopoly.
It's even more alarming that the host of my data is actively scanning those 1s and 0s for signs of deviation from truth and sending those 1s and 0s it views as untruths down a memory hole. I mean, why stop there, why not just correct the untruths? Flip a few 1s here that should be 0s. This strikes at the core value proposition of tech, and at who is boss of what data.
One can agree they should not be compelled while ALSO thinking it would be much better for society that they stop censoring. Just because setting up an institution to compel would be worse than the censorship we're trying to fix doesn't mean the censorship is good.
That's a nice fantasy. People don't get enlightened by banging rocks to start fires for 18 years then becoming a philosopher.
The art of thinking critically and using logic takes training, and far too many people don't seem to have those skills, due to an inadequate education system in the United States.
Free people should be "allowed" to listen to, and say, what they want. I don't even think Tucker Carlson should be silenced by the government, no matter how much better I think society would be without him on the air.
I think there is a space for content to be labelled as "bullshit" by the people hosting it. Youtube labeling anti-vax lies as "misinformation", the FCC disallowing Fox from using the term "News" on its marquee during their "Four Hours of Hate" from 5-9pm.
Just as fast food is allowed to be sold to people, but they're required to disclose nutrition data - people should be allowed to listen to and spout bullshit, but it should be labeled.
That's so naïve.. If only sufficient amounts of people were capable of discerning for themselves which information is useful, true, etc.
The rise of batshit-crazy and easily, demonstrably false conspiracy theories like QAnon, flat earthers, 5G, Covid-is-fake/a-Chinese-bioweapon, Pizzagate, what have you, along with clearly empty populist politicians and political movements across the world, shows that way too many people struggle with that. And the choices they make impact all of us.
I honestly don't think there's a solution for this. Any authority on "truth" will be abused, but what we have now doesn't work either.
Well yes, if we had a perfect Oracle of Truth we wouldn't need the freedom to discuss and find out the truth. We don't and governments not only do not have such an Oracle, they cannot have such an Oracle due to the pressures on governments and the people that make them up preventing them from being unbiased (e.g. the tendencies for institutions such as government components to grow larger, defend their power and push to continue to exist).
Wielding the baton of "misinformation", or in general, fear of falsehood/insecurity etc, to whack those who the ruling class disagrees with is also code designed to specifically take advantage of the gaps in a healthy bureaucracy and seize control over it in such a way that makes it hard to challenge it.
"Misinformation is code. It’s designed to specifically take advantage of gaps in our society and spread in such a way that makes it harder to challenge it."
Are you aware that ideas like abolition of slavery, civil rights or acceptance of gays had to spread precisely in the same way, through gaps in the contemporary societies that made harder to challenge them, often because open declaration of such ideas would mean major trouble for the speakers?
Do you think that our current society is perfect in this regard and thus can suppress further functionality of such mechanisms without adverse consequences down the line?
No, this is bafflegab. You're creating what is at best a contrived, vague, and inaccurate metaphor to make a distinction without a difference between misinformation and falsehood, so that you will have a fig leaf to pretend that you are acting in an objective manner with clear delineations when banning information you don't like as "misinformation."
Except you won't be doing the banning, and it's not going to end well for any of us after you help to give the people who will be doing it the power they want with rationalizations like this.
People can figure out the truth, and if they can't, we have bigger problems. I'd like to live in a world of common sense vs "elite" judgements
The amazing thing about these super-political fights is that like 90% of people are on the sidelines going "what's wrong with both of you?" The people who always end up looking like idiots are the ideologues.
Agreed. It's important to note that these modern-day illiberals couch a lot of their phraseology in sophomoric nuance.
"XXXX statement is misinformation, because it lacks context"
"XXXX statement is misinformation, because it is still up for debate"
"XXXX statement is misinformation, because it is offensive to class YYYY"
"XXXX statement is misinformation, because it is unfair to person ZZZZ"
These are logical fallacies presented as civility. Applying them uniformly would be desperately wrong. Having double standards, and applying them based on partisan bias, is nothing short of evil.
Glenn Greenwald said it best. If you are okay with these practices, you are a political authoritarian. No other information is needed to determine this about you.
It's amazing the way the political left has savagely gone after Greenwald. We need more journalists like him, not fewer. He has all the proper credentials (gay, liberal); he just happens to speak the truth, so, apparently, fuck him. If we had more Greenwalds, the world would be a much better place.
Your argument lacks any nuance, to the degree that it makes it flat out wrong.
No one is suggesting regulating truth generally, but rather specific cases that are deemed dangerous [0].
Now, you could make a slippery slope argument starting from there and that would be a valid, but different discussion.
The slippery slope argument sometimes applies and sometimes it doesn't. It depends on the specific situation. There are examples around the world of governments that very selectively suppress narrowly defined types of speech, in order to protect other values believed to be equally fundamental, but defend the freedom of all others.
[0] Within the bounds of the freedoms that a private company has to operate as it sees fit, or by means of democratic decisions.
It's true that the US government has always compelled private platforms to remove speech if it is illegal as determined by the legislative branch or common law (e.g., hosting child porn). What's unique about the current situation is that what's being censored has direct influence from non-legislative bodies such as the executive branch and the CDC, and what is out of bounds lacks specification and is subject to change according to the whims of a small handful of people.
This is a big change because the legislative body is significantly constrained in what it can censor through multiple mechanisms. The slippery slope concern falls flat in that old context, but not in this new context where 1 or 2 people (who we don't see and who may not have even been directly elected) can label something as misinformation and have it scrubbed.
I think I see what you're trying to say even if you're doing so very obliquely - the point is that this never was and never will be an issue that it makes sense to think of in absolutes.
Speech has never been free in this absolute sense anywhere, because, at the very least (!), there are cases where speech has obvious, immediate, terrible consequences.
That means that the difficulty lies in figuring out where exactly to draw a line along a blurry boundary. Hence the slippery slope issue.
Government: can imprison, fine, and enslave you. Corporation: can ban you from online platform.
Companies absolutely and always have enforced their own version of "truth". While Google says this thousands of other companies of similar size are regulating their version of "truth" internally. As we speak lobbyists are paid to make sure senators will vote along their version of "truth". There is no such thing as neutrality, and never has been, only the illusion of inaction.
American society has been slowly convinced, and then the rest of the world by extension, that this was absolutely necessary to combat the next big threat.
First it was to "combat hate", that speech was limited according to what partisan big tech companies decided was fair. It was said it had real-life consequences. Harm. So it should be banned.
Then anything from certain parties or ideologies became hate if labeled so by the tech giants. So out they went.
Now, anything questioning the authoritarian narrative of the pandemic is labeled as such.
The slippery slope that people tried to deny with claims of "private companies" has gotten steeper and keeps going.
Talking about freedom, freedom of speech or human rights gets you mocked in certain spaces. It's mind-boggling.
The U.S. barely has "hate speech" laws, so I don't know what you're talking about there.
"Big tech" decided to ban people for being hateful to others on their platforms. Those platforms don't exist for your protection against government actors. Nor should they.
What you're saying is "a company banning certain forms of speech on its platforms does not constitute censorship", which is true at least in the strict definition of "censorship", but does not contradict what the GP is saying. What the GP is talking about is a shift in American attitudes that leads to the acceptance of such policies, whereas in earlier times they might have been rejected or the companies in question might not have even thought to implement them.
I don't at all believe people would reject it. People have been censoring content for a long time, in far worse ways. The early '90s had plenty of "obscenity law" enforcement, and Reagan-era regulation of content in games and music was a thing. People cheered it on, and that was actual government censorship, not corporate.
People agree with censorship if it's in agreement with their beliefs.
To be clear, I don't necessarily agree with isaacremuant.
Obscenity laws are kind of different, since it's perfectly possible to add or remove obscenities from an utterance without substantially affecting the message. I'm not saying I agree with such laws, just stating the facts. What are examples where specific types of messages were banned? E.g. hate speech laws, blasphemy laws, lese majeste laws, etc. In Western democracies, the only examples that come to mind are recent.
>People agree with censorship if it's in agreement with their belief.
Agreed, generally speaking.
Geez, what did I say that was so disagreeable? This is why I don't have a permanent account here. Whatever.
I think you're arguing a strawman and didn't really address anything I said.
Of course, my point is seemingly unpopular on very partisan websites, where it's seen as a Rep/Dem or left/right issue (US-centric). But the fact that we see more and more of these types of articles (today an EFF one) shows there are more of us concerned by this arbitrating of truth by a certain group of ideologues and the people who agree with them, who leverage their power to prescribe which speech is worth transmitting and which is worth censoring.
It’s also important to understand that the only reason anyone is talking about misinformation right now is because the entire world watched in horror at the events of January 6th, a direct attempt to “seize control over free societies” (admittedly a poor attempt at it, though), which was sparked by wild accusations, unsubstantiated assertions, and a mountain of failed litigation by “the previous guy”.
No amount of whitewashing or misdirection about what happened that day, or the factors that led up to it, will change the fact that certain “free speech”, said by the right mouth and believed by an angry, undereducated group, can lead to actual loss of our free society. Just look at the insane amount of legislation being passed in certain red states, all based on, and cheered on by, the exact same lie.
We can all argue the merits of Google, but that’s just a method to control the conversation by those who seek their own dystopian Handmaid’s Tale version of our society, and a deflection from what put us in this situation. A lie, perpetrated by the president, enabled by his supporters, and a Ministry of Truth-style belief in what that government said, that’s leading to voter disenfranchisement and loss of faith in the very foundation of democracy. Period.
As for the medical information that I disagree with: if people want to believe in lies, I have enough “don’t tread on me” in me to let them. Personally, that’s very much a game of “play stupid games, win stupid prizes”. None of my business; have at it.
You are saying that this is a lie as if that's math or physics instead of a debatable political point and then circling all the wagons around that determination.
The idea that the coronavirus came from a lab used to be one of those circle the wagons points that anyone could get cancelled for. The evidence is so overwhelming that you're now allowed to post that to social media without getting cancelled. How would that information have gotten out there if the cancel network had been perfect?
They can’t share it because they don’t have it. One scientist said it looked like it came from a lab and all the reactionaries latched onto this one statement as if it was the real truth being hidden from them. They promptly ignored when that same scientist admitted he overstated his position because that conflicts with their world view and censorship and persecution complex.
My original post said absolutely nothing about the rona, though yes, it is a physics or math problem. I’ve since amended it.
Trump lied, period. And it lead to and is leading to a lot of negatives.
As for the virus, personally I don’t care where it came from, but people can believe whatever they want. We have a vaccine now; don’t want it, then don’t, it doesn’t affect those that choose to. “The big lie”, however, affects us all, and that’s a problem.
In the case of covid the math changes. Generally speaking if it affects only you, have at it. If it spills out and affects others, problem. Though I’m not gonna argue the pedantry of that, that can devolve quickly
I have never heard any "truth" so sacred in American politics that the other side had to simply refer to the people opposing this "truth" as believing in "the big lie." It's Poe's law level absurdity.
Trumpers I know tell me liberals are all in on a big conspiracy lie to prop up Biden, so, I certainly have seen it on the other side.
It's not surprising; this is a two-sided political-party war on information. One side is misinformed, one side is less misinformed. Which is it? Nobody can agree.
Just because someone shouted an idea without proof that later might turn out to be partially right doesn't make it OK if shouting that idea was dangerous.
If I shout fire in the cinema and 1 minute later a fire starts it was still a bad thing to do.
Note too that for this specific conspiracy, it was maybe 30% about the lab and 70% about the Chinese government spreading it on purpose as a bioweapon or for some other nefarious purpose, the latter of which is still not widely accepted even as a possibility (as opposed to a containment failure).
> the only reason anyone is talking about misinformation right now is because the entire world watched in horror at the events of January 6th
Just as a point of fact, that's false. Usage spiked in late 2019. Nothing special happens on this graph in Jan 2021, and the term is surprisingly in decline right now:
I saw a low-level riot with a few broken windows that was endlessly hyped over and over again ad nauseam. So, no, the rest of the world wasn't horrified; we were wondering what the fuss was about.
I was far more worried about the BLM riots which burned entire districts - since I had family affected.
Using Jan 6th as an excuse for government censorship is utter nonsense. What is misleading today can become fact tomorrow very, very quickly.
Why are you lying about entire districts being burned down by BLM? Surely you can provide some actual evidence of this happening. Right now it just looks like more completely unsupported right-wing hysteria.
This is laughable. There were hundreds of buildings burnt down and you ask for evidence? Please do your own research. I strongly suggest listening to something other than CNN/NBC. I don't listen to either right-wing or left-wing media.
"In their wake, vandals left a trail of smashed doors and windows, covered hundreds of boarded-up businesses with graffiti and set fire to nearly 150 buildings, with dozens burned to the ground. Pharmacies, groceries, liquor stores, tobacco shops and cell phone stores were ransacked, losing thousands of dollars in stolen merchandise. Many were looted repeatedly over consecutive nights"
"Three hundred seventy-some miles south-east of Minneapolis and about sixty miles north of Chicago on Lake Michigan in Kenosha, the family-run car dealership of an Indian immigrant was burned down by the rioters. The owner, trying to hold back his tears, told reporters that BLM rioters burned his lot two nights in a row, destroying all the cars"
> the entire world watched in horror at the events of January 6th
Umm, no. I’d rather say it was a mildly interesting event for the entire world outside of the US. For instance, I don’t remember myself experiencing horror really.
This implies that being misled due to being under-educated on a topic is a choice. Otherwise, the "don't tread on me" argument kind of falls apart here.
No, the world watched in horror as American cities burned for an entire year while politicians and media whores, thoroughly protected by police and private security, did nothing.
Police were ordered to take a knee to the mob.
We were told that this was justice.
But the second those powerful people were even slightly threatened, the gloves came off.
Only one person was murdered on the 6th. A veteran and Trump voter named Ashli Babbitt, killed by the police. And Democrats celebrated her death.
She was trying to force her way into a chamber where she and the rest of the mob intended to do harm to elected politicians, and disrupt US democratic process.
That’s not really murder. And no, the rest of the world did not look on in horror at the BLM protests. They looked in horror at what caused them.
> And no, the rest of the world did not look on in horror at the BLM protests. They looked in horror at what caused them.
Speak for yourself - I am part of the world that looked on in horror at the media's sanction of open violence against the people by a violent minority.
Why are you spreading more lies? You are explicitly part of the problem. What cities were burnt down? Can you name one city that was burnt? Can you name even one neighborhood?
> the only reason anyone is talking about misinformation right now is because the entire world watched in horror at the events of January 6th, a direct attempt to “seize control over free societies”
Regardless of whether the riot on Jan 6 was an attempt to seize the government (it wasn’t), this line of reasoning doesn’t track at all, since the concept of “misinformation” as a public enemy has been brought up ad nauseam since at least 2016.
I disagree. While dubbing things “misinformation” was extremely prevalent coming from democrats and those on the left, things like Trump and his supporters dubbing certain reports “fake news” show everyone has concerns.
Where have you been? Hall monitors at CNN and other outlets like Oliver Darcy have been shrieking about "misinformation," and successfully lobbying to get people censored on that basis, constantly for the last four years.
Recall that the "fake news" scare (and the "post-truth era", etc.) was actually started by the media before Trump appropriated it against them. And in most cases it was used against people for expressing skepticism toward favored conspiracy theories about the 2016 election.
" the entire world watched in horror at the events of January 6th, a direct attempt to “seize control over free societies”. (Admittedly a poor attempt at it though)"
Ask the rest of the world before you speak in their name. From the other side of the puddle, Jan 6th was a poorly executed riot that was mildly interesting mainly because of the obvious kooks (such as the Shaman) and their outlandish clothing. Don't try to repaint it as a surrogate coup just because polarized American society yearns for Big Defining Events. There is nothing to yearn for.
And historical coups, even the unsuccessful-but-plausible ones look very, very different. Usually a lot more bloody, too.
But as far as Ministry of Truth goes, I am with you. Whatever institutions people build, they should imagine them in hands of their worst enemies.
“Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.” -George Orwell, 1984
Please read The Captive Mind for the version based on true events.
Spoiler: the Poles discovered that a political movement with methods and motivations which should sound very familiar didn't turn out to be the good guys after all.
The road to hell is paved with "good intentions". We desperately need to find a globally adoptable alternative to google and the services that it provides. Docs, Sheets, Drive, etc. are fantastic services in that they work really well on a massive scale. However, Google's increasing role as an arbiter of right vs wrong and a steward of information puts too much power into the hands of one corporation, whose best interests are provably not aligned with that of the general population.
I've been working as a SWE at google (in ads...) for over two years and I've really started to loathe it over the past year. The pay is fantastic and it's really hard to walk away from that, but the idea that they are not (or at least no longer) contributing to the better world that I think we need, has started to weigh heavier and heavier on me...
We should be able to implement services like these, that are free of ads, on globally distributed infrastructure, with no central authority, to have truly free-flowing information.
I think your fundamental error is in thinking that a private company (and market competition) can fix these issues. It seems that many people on HN are just waiting for the new savior company that will magically have incentives to fight for them instead of making money. It's like hoping for market competition to create health regulation in the food industry.
Turns out, not even Apple is that messiah, and perhaps the solution isn't in demanding private companies to be your regulators and defenders of good morals and truth. What happened to having specialized agencies regulate and inspect industries?
The other truth is we're all outraged when these companies host some stuff that we don't like ... and get upset when they don't host the stuff we do like.
Consumers aren't rational. Neither are their demands.
I'm probably no more rational than anyone else, but I'm honest that I sure as hell don't want to give money to a service that is happy to host some violent folks content / garbage...
> The other truth is we're all outraged when these companies host some stuff that we don't like
Please, speak for yourself. I think these companies should host absolutely everything[0]. Between 2010-2016 was the golden age for these companies actually being free and open.
Edit: Within the law. To be honest, there is very little I see that should be censored beyond CP.
> Between 2010-2016 was the golden age for these companies actually being free and open.
Were they really though? I'm pretty sure there were plenty of things, especially things deemed to be "Intellectual Property violating", which were strictly banned on all these platforms far before 2010. One example is that Google has been removing search results for as long as I can remember.
In other words, what was in your perspective "free and open" was very restrictive and politically pro-corporate from my perspective --- hardly an "objective stance" but instead a very political stance on what was permitted and what was forbidden.
OK, YouTube for example has had a longstanding policy of removing videos that use music clips without the rights (I assume proactively, not just in response to DMCA notices). This is essentially an automatic mechanism, possibly overly aggressive, to stay within the law: find unlicensed music and silence the video. But stopping me from infringing on a Metallica song is clearly categorically different from actually censoring a video about the coronavirus that doesn’t conform to some standard or is accused of spreading misinformation.
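The mechanism described in that comment can be sketched as a simple fingerprint lookup: compare segments of a video's audio against a database of protected works and mute whatever matches. This is a hypothetical toy illustration only (the fingerprint hashes, function name, and data layout are all invented here); YouTube's real Content ID system is far more sophisticated.

```python
# Toy sketch of automatic copyright matching: mute any audio segment
# whose fingerprint matches a database of licensed works.
# Hypothetical illustration, not YouTube's actual Content ID.

# Hypothetical fingerprint hashes of a protected song.
LICENSED_FINGERPRINTS = {"a1f3", "9bc2", "77de"}

def scan_audio(segments):
    """Return per-segment decisions: keep the audio or mute it.

    `segments` is a list of (timestamp, fingerprint) pairs.
    """
    decisions = []
    for timestamp, fingerprint in segments:
        if fingerprint in LICENSED_FINGERPRINTS:
            decisions.append((timestamp, "mute"))  # unlicensed match found
        else:
            decisions.append((timestamp, "keep"))
    return decisions

video = [("00:00", "c0de"), ("00:10", "9bc2"), ("00:20", "ffff")]
print(scan_audio(video))
# -> [('00:00', 'keep'), ('00:10', 'mute'), ('00:20', 'keep')]
```

The key design point, echoing the comment above, is that the rule is mechanical and content-blind: a segment is silenced because its hash matches, not because of what is being said, which is what makes it categorically different from judging a video's claims.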
I brought up copyright as just one example, these platforms have many reasons (not always given) for deleting stuff, ranging from political content or activism to copyright. Most ToS's are quite arbitrary. People have been talking about this for a long time as well -- "Chilling Effects" turns 20 this year! At the risk of being a bit unkind it seems like it's mostly just conservatives who were "living under a rock" until now, and now suddenly have loads of opinions on it.
I don’t think he was talking about search results. Rather other Google products like Drive. People used to share Hollywood films on Drive, for example.
Huh, I never have tried this, but I'm pretty sure sharing hollywood films on drive has always violated their Terms of Service, although I could see them not being great at enforcement all the time.
If Drive was that permissive, then upload sites that tended to "look the other way", like Rapidshare, MegaUpload, etc, would never have existed.
Isn't the point of the free market that consumers are free to vote with their dollars?
I don't want to give my money to companies that host things that I find morally abhorrent. That doesn't mean other companies can't host that data, simply that I don't want any of my dollars to go to those companies.
If the vast majority of consumers also don't want to give money to companies that host that content, that's the hosting company's problem. The free market is speaking. Nobody is making it illegal for that data to be hosted, but nobody is obliged to pay for it to be hosted. So it seems to me that things are working as intended.
That is the point of the free market. But deplatforming certain kinds of offensive content is regressive.
For instance, in the not-so-distant past many Americans found interracial marriage morally abhorrent. Wikipedia says only 5% of Americans thought interracial marriage should be legal in the 1950s. Apply today’s deplatforming norms to that environment, and you’d be deplatforming the very people who supported marriage equality. This is not something we would desire.
People are full of prejudices. I’m sure our grandchildren will look back in horror at ours. Let’s not deplatform them for that.
It's sarcasm. Sadly, these days the mainstream media and our school districts are all about 'equity'; teachers are all being re-trained to use 'equity' instead of 'equality' right now.
The subtle difference you’re pointing out is significant. Equality under the law is moral and ethical. Forced equality of outcomes is immoral and unethical, and is just collectivist Marxism rebranded.
There's a difference between disagreeing with a position and finding it "abhorrent".
Your evidence with regards to interracial marriages does not support your point. Whether "something can be discussed" is an entirely different question than "do you support position X". The very fact that they were able to take a poll is strong evidence that talking about it was not verboten.
This is simply not true, and a strange argument. Child pornography is discussed because people find it abhorrent. US politics has largely revolved around the prevention of interracial relationships for at least a century and a half, so the idea that people didn't (and don't) find them abhorrent is bizarre and ahistorical.
It’s not “active” support if the algorithm acts in a content neutral fashion, for example based on engagement metrics. In such a situation, changing the algorithm to artificially not let allegedly-false content be discovered is actively supporting the opposing viewpoints. Leaving the algorithm to act without artificial content-specific modification is not active support. Tolerance would be leaving the algorithms alone.
Former Googler here (11 1/2 years, including in Ads)
The idea that algorithms are "neutral" is laughable. There is a loosely organized group of activists out there who are aware of how these algorithms work and actively manipulate them.
"Engagement metrics" are nothing more than these people pushing the buttons.
I don't really understand why tech companies, like Google, go so far out of their way to maintain the image of being neutral. I agree they have a right to censor content they choose for whatever reason, but what I don't understand is why they try to appear neutral about their decisions. It feels like everyone is aware of what is going on; even other commenters who support Google's censorship admit they approve of the bias.
So why do tech companies cling to this line of being neutral when no one really seems to accept it and they themselves have no intention of being neutral? I feel like there wouldn't be any conflict about policies or complaints they have to deal with if they were more honest. Maybe it has to do with section 230. I don't know but I feel like we would be better off if consumers had more information.
Every MITM-as-a-service starts off by being a neutral conduit to attract users, and then slowly adds restrictions to appease advertisers. But users never appreciate additional restrictions, and so Google (et al) have to keep marketing themselves as general hosts lest they lose even more mindshare.
Google is a legal construct. I can’t go have a coffee with Google. I can’t get a high five from Google. Google will do whatever our laws say it has to do in exchange for liability protections for its owners.
If you can find a majority to agree you can change the laws. It seems unlikely though. I’m not even sure what you would change the law to be. Current reading of the US constitution says that Google has the same free speech rights that you do.
They are not there to improve society, they are there to make money, and if you think different you are a fool.
Republicans fought to make companies people, and to keep them unregulated. And now they are crying because it backfired: forcing a company to publish or block something is actually stepping on its First Amendment rights.
You won't really have such platform, unless it is done through a well established non-profit or through government (as long as government is Democratic and checks and balances work correctly).
Not "like life threatening" - Bad for our society.
In that free public conversation is centrally crucial to the sanity of our society.
And having that public conversation controlled by a profit-seeking entity is definitely detrimental to that conversation. And thus detrimental to the sanity of our society.
And an insane society is obviously all kinds of threatening.
Again, Google is not stopping you from using your speech.
If they ban you, they are telling you that they don't want you on their private property.
They don't take away your internet connection.
When did society need Google, or Twitter, or FB to function? They do not. Those are simply three websites. There are literally millions more.
If this site banned me, I lost literally nothing. If Google Drive banned me(assuming I have an account, which I do not) I lost literally nothing. Same for FB. Same for Twitter.
The truth of the matter is that the right-wing userbase of HN is deathly scared of being marginalized in wider society. Seeing things like Parler getting kicked off of AWS, Facebook Fact-Checking moderation and now this makes them scared.
That is a problem except that no one gets booted for being conservative, so it is not really a problem.
The top performers on FB are conservative pages for crying out loud!
They are afraid of much, which is the problem. It is irrational fear and based on ignorance and hatred.
Parler earned their ban 1000 times over. When sites get shut down by their hosts because its users were openly inciting violence that is not a problem.
I have yet to see someone get booted for following the platform's rules and merely being conservative. People from all over the political spectrum get banned every day.
I disagree. I think the algorithms are fundamentally immoral because they promote content that gets "engagement", which includes, and in many cases prioritizes, content that people have engaged with because it causes a negative response. Rather than pushing good* content, it prioritizes lowest-common-denominator, reality-TV, desperate-pundit, fast-food, self-congratulatory, outrage-porn garbage.
*By good, I simply mean thoughtful, high quality, factual, educational, or otherwise uplifting content regardless of politics
I don't think "Good" is unambiguous enough to trust the platforms to promote it. How about simply "related"? Show people the content they've explicitly asked for. If people explicitly ask for outrageous content, then fine, but we needn't force feed it to society.
Algorithms don't work like this though: content that feeds outrage disproportionately outranks content which doesn't.
Algorithms don't discriminate "content" by its actual content: they keyword-match and look for clicks, and build a pretty perfect radicalization pathway more easily than they build a discourse [1].
You have probably experienced this: almost everyone has wanted to watch some particular YouTube video but opened it in an Incognito tab (or just avoided it) explicitly because they knew the topic would prime the YouTube homepage to fill with nothing but things they don't want to see.
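The engagement-ranking dynamic described above can be sketched as a toy model. The weights and post data here are entirely hypothetical, not any real platform's algorithm; the point is only that a ranker which sees raw interaction counts cannot distinguish genuine interest from outrage.

```python
# Toy model of engagement-based ranking. Weights and data are
# hypothetical illustrations, not any real platform's algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int    # raw click count
    comments: int  # replies, including angry ones
    shares: int

def engagement_score(post: Post) -> float:
    # The ranker only sees interaction counts, not whether the
    # interaction was positive or negative. Outrage that drives
    # replies and shares scores as highly as genuine interest.
    return post.clicks + 3 * post.comments + 5 * post.shares

posts = [
    Post("Thoughtful explainer", clicks=1000, comments=20, shares=30),
    Post("Outrage bait", clicks=800, comments=400, shares=250),
]
ranked = sorted(posts, key=engagement_score, reverse=True)
# The outrage post ranks first despite getting fewer clicks,
# because negative engagement counts the same as positive.
```

Under these assumed weights the outrage post scores 3250 against the explainer's 1210, which is the "disproportionately outranks" effect the comment describes.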
I always think I can draw the line in the sand as a very rational and relatively well read person.
But then I remember that the best thinkers the world has ever seen (Plato, Aristotle, Cicero, ad nauseam) were never able to look beyond their noses to see the human suffering of others.
Aka, they were perfectly happy to have a society run on slave labor, to ignore the plight of the poor and sick, etc.
Perhaps these are the "best thinkers that the world has ever seen" because they said stuff that was beneficial for (some) powers that be. E.g. Plato-Aristotle-line/myth is directly linked to Alexander the Great.
Worth noting that there were very influential thinkers and entire schools of thought that did look beyond their noses. A good example are the cynics/Diogenes the Dog, who may well have been more influential than the Platonic line. E.g. (as per the anecdotes we have left) Alexander the Great had great respect for Diogenes, who totally ridiculed Alexander's (and Plato's) position.
Also stoics (e.g. Marcus Aurelius) are quite direct descendants of cynics and not ashamed of this at all.
The more I look into classical philosophy, or the "myth" of academia, the more it seems that it's mostly a fabrication, perhaps by the scholastics.
This is a very important point. Maybe there were some great philosophers in their time that argued against it and were ridiculed or didn't reach us through time.
I'm curious what you mean by its being a fabrication? Their ideas were real, and they've shaped history one way or another.
Fabrication is perhaps too strong a term, but the separation between history and myth has not always been that strong. For example it was common (and accepted) to write stuff in some famous person's name.
I don't think it makes the content itself any worse, but it's difficult to know what was really historical.
I don't formally study this, but such problems become quite apparent when I try to e.g. find out historical sources for some philosophical statements or anecdotes. Probably not that different from how people attribute all sorts of "smart stuff" to Einstein.
Edit: by "scholastic fabrication" I mean that scholastics spent a lot of time "interpreting" especially Aristotle (and tried to make it compatible with the Bible). I'm guessing a lot of what we think is "greek philosophy" may be from these interpretations.
Thank you for the clarification. I'll read more on the subject.
History is god damn hard. That's why it's useful to read the source material whenever possible.
I don't know how many times I've seen the Allegory of the Cave being used, but reading The Republic really makes you understand what Plato meant with that story.
It's hard for most people to read that stuff though. I've only scratched the surface. It's easier to trust others to do it for us and distill the information.
And in each century, the lessons learned from the same material may be different too.
It's usually next to impossible to read the real source material, as it's in a literally ancient language and was written in a context and for purposes that are hard to understand.
For most things it probably doesn't matter that much. For example classical philosophy (or its common translations/interpretations) provides a sort of "shared language" for academia, regardless of how historical it is. That's why I tend to think of it more as a myth unless there's something specifically historically interesting.
Most of the classical stories, e.g. the Cave, have "transcended" the original context anyway, and are in a sense richer nowadays.
You're making the mistake of assuming morality from your current time, place, and culture is universal morality. You find slavery morally objectionable because the current cultural understanding is that slavery is morally objectionable. Future generations might find it equally abhorrent that you, for example, routinely consumed the flesh of sentient animals or openly released carbon into the atmosphere for personal gain, or probably a million other things that will be completely unimaginable in polite society 500 years from now.
That may be true of Plato, Aristotle, etc. But one thing I have learned from history is that there is almost always a contingent of people that do find terrible things like slavery abhorrent and were even outspoken about it. But if you are an elite, and benefit greatly from something, you are probably much less likely to be outspoken against it.
And then from inside your link, there seemed to be even more critical voices:
"Then again, the Stoics were famous for challenging common conceptions, and the founder of the school, Zeno of Citium, had declared slavery an evil in his Republic"
So now I think it is a shame that I barely ever heard of him, despite having heard of all the other great (but slavery-endorsing) Greek minds. And sure, slavery was very common everywhere at that time. One more reason, maybe, to celebrate early free thinkers?
Most philosophers likely come from the elite classes in the past. You don't have time to sit around and think and write if you have to worry where your next meal comes from.
We’re not talking about individual choice but an inherent ir/rationality in censorious behavior.
The vast majority of consumers probably make pragmatic rather than idealistic consumption choices. E.g. when you source a new iPhone, you source certain unethical labor practices. When you make use of the US dollar, you make use of some amount of the atrocities that built its international purchasing power (e.g. any of the petrodollar wars). I bet those rarely bother even the most so-called “idealistic” consumers, because a) it is a hard calculus to compute and b) it is impossible to live when every “impure” thing is removed from use.
The difference with public content hosting is being able to twist arms to make them take down stuff and conform to an image of virtuousness which we narcissistically and pseudo-religiously identify with. It is not about the real damage the content poses, it is our intolerance to being seen as a “person who can use such sites”. The threat is to our confirmation bias, in this case the confirmation of an idea that there is a clear right and wrong and we are definitely right.
I'm confused. I don't see the problem. What's the difference between the image of virtue and just virtue?
If the majority of people believe that images of virtuousness are what they want, then that's just what they want. People aren't computing the outcome, their ethics are based on appearances and always have been. The internet doesn't change that fact. Whether it was in the middle ages or the post-industrial period or today, virtue has always been performative. So I don't really see what you think your argument demonstrates.
The enlightenment concept of free speech is likely a minority viewpoint among the people of the world. However I also think it is the correct view and that corporations or governments looking to censor speech are infringing on human rights, and believe in fighting for it the same as I would fight against racism, slavery, religious discrimination, authoritarianism, and so on. Just because a lot of people believe something is ok does not make it right.
> What's the difference between the image of virtue and just virtue?
What’s the difference between a real car and a perfect cardboard replica? One has functionality and interiority. The other is just exteriority.
In the King Midas story, he wants everything to be mindlessly golden and thus turns it into unusable shiny crap. He got the golden exterior all right, with none of the real goldenness, the goodness it would afford him.
Your example is a bit facile. A cardboard car does not function as a car does. It sounds almost like you are saying something but you never explain exactly what’s wrong with performative virtue.
But you have to define what is virtuous before you can claim that performative virtue is “fake”. The young people of today are no less virtuous than their elders, despite the fact that their elders (like virtually every generation before them) complain that they are immoral.
But "performative virtue" is--definitionally--fake. If it were real, it would just be "virtue". (And similarly, "political correctness" is something not quite correct--otherwise it would just be "correctness"!)
It's an actor "performing" as a character for a few hours, and then reverting back to their original personality.
> But you have to define what is virtuous before you can claim that performative virtue is “fake”.
No, strictly speaking you don't have to define anything. Whatever virtue is, it's consistently that. "Performative virtue" is inconsistent, often hypocritical, and therefore inauthentic.
> The young people of today are no less virtuous than their elders...
I can't speak to virtue in general, but with regards to duplicity it does seem like there's increasingly more of it:
1) young people still have access to whatever methods of duplicity old generations had, and can additionally virtue signal on social media on an unprecedented scale.
2) it seems like it's simply becoming acceptable to lie. Politicians will directly contradict their own video evidence, multiple times a day, and everyone shrugs and moves on. We've given up on norms of discourse and civility. To be clear, I'm not surprised that lies are being told; I'm surprised that there appears to be zero interest or any repercussions.
3) objective truth itself is under attack. It's becoming normalized that anyone can say whatever they feel at that moment and that they have "their truth" and I have "my truth". This is incredibly dangerous.
> A cardboard car does not function as a car does.
That's exactly the point. It is contrasting the appearance of the thing vs the structural & functional organization (logos) of the thing. Without the second, it is not the thing. One cannot establish an identity relationship just based on appearances. That is why you can't equivocate the appearance of a virtue with being virtuous.
> It sounds almost like you are saying something but you never explain exactly what’s wrong with performative virtue.
"Performative" virtue (a more accurate designation would be "demonstrative virtue") is an oxymoron. You cannot be virtuous without conforming to the structural & functional organization of the thing, i.e. without really being virtuous. Real virtue is a participatory endeavor. A display of virtuosity is like the cardboard car: it doesn't function as a virtue. Making people take down content for narcissistic reasons does not make the world a better place, because it is devoid of at least two core properties, namely rationality and proportionality.
> But you have to define what is virtuous before you can claim that performative virtue is “fake”.
This topic has been systematically discussed since Aristotle, and you would appreciate the absurdity of trying to give an exhaustive definition in this forum. But I've given you two properties of it that narcissistic censoring violates.
> Isn't the point of the free market that consumers are free to vote with their dollars?
Can you pinpoint the moral difference between "Google should be allowed to refuse to host content that they dislike" and "Restaurants should be allowed to refuse to serve ethnicities that they dislike"?
It is a feature of free markets that consumers choose where to spend their money; but it is also a feature of liberal societies that the law precludes the majority from driving out an unpopular minority by refusing to do business with them.
> I don't want to give my money to companies that host things that I find morally abhorrent.
Isn't that a sign of a moral panic? Back in the 90s, my parents didn't want to patronize companies that signed deals with pornographers. Even though I was only 10, that sounded ridiculous. Good luck finding any large company that doesn't.
I don’t see what’s changed. Youtube doesn’t host pornographic content either. The Apple App store doesn’t allow adult content of any kind. If you’re saying maybe Youtube and the Apple store should grow up a little and allow adult content I’m sympathetic but the horse has long left the barn on that one.
Can you explain this? Are you referring to tech only? Maybe I’m way too oblivious but of the large companies I pay money to I find it hard to imagine that most are signing deals with pornographers.
If people won't do business with people who do business with pornographers, what right do the pornographers or porn aficionados have to object? It wouldn't matter if the subject were avocados, if that makes it clearer. It's not a moral judgement on the worthiness of porn; it's about consumer choice in aggregate.
No, because just a handful of companies control the entire ecosystem. There is no alternative, you cannot escape their influence and you cannot function without interacting with them.
If Google and Apple ban you from their products and platforms, and Visa shuts down your payment processing, there is nowhere to go. You're done. This and much more has happened many times.
You're making the "just build your own internet bro" argument.
Sounds like they just need to pull themselves up by their bootstraps and work harder.
They could also accept checks via the mail. Or cash via the mail. Or cryptocurrencies. Or gift cards. Or barter. There are multiple options available to them.
Those things are all really inconvenient. Really, who wants to go out and mail something and wait for it to arrive? And gift cards risk the same kind of banning by credit card companies. This is going to limit any business to the point of inviability.
The only thing you mention there that is a serious alternative is cryptocurrencies. And those are constantly fluctuating, needing very complex hedging against sudden value changes. And they're still something that most consumers will really struggle with. Most will not have a clue how to obtain crypto or how to deal with it (safely).
It might work for a really highly educated niche, but not for 99% of consumers. They just want to put in their paypal or credit card details and click buy now.
>> I can do anything I want without Apple, Visa, Google and even Amazon and Microsoft.
Really? How can you even know which services or products depend in some way on GCP, Azure, or AWS (not to mention product and services that are built using other services that depend on those)?
But I think we need to examine both sides of this.
When the American concept of free speech was coined, it was valuable because you could go stand in a government owned square and communicate your message for free. It was a good balance between not forcing private companies to accept speech but still allowing the speech to happen.
Online we don't have the concept of a government run square, and so your speech can be totally stifled by private companies.
But the difference is that when you're standing in the town square shouting nonsense, your reach is constrained, your ability to reproduce your speech is low (you have to just stand there and keep shouting) and everyone knows who you are. Damaging speech just can't be that damaging. Online is totally different.
I think the argument of "Google can't censor you, only the government can" is not great because there's no gov't equivalent of the town square. But I don't think "make Google accept all speech" or "create a gov't equivalent of the town square" is necessarily the answer either. I think we should be starting from first principles, understanding what free speech is trying to accomplish, and coming up with a framework that helps us accomplish it.
Pretending free speech was only about the town square is ahistorical.
Free speech has always been about distribution as well - publishing a book or a newspaper, distributing pamphlets - those had similar reach to a random FB post or YT video today (in terms of percentage of the population).
Of course newspapers had no obligation to carry anyone's message, but, far more importantly, a newspaper couldn't be censored by government for printing stuff the government didn't like.
It's also important to remember that there used to exist many more newspapers - factories would have newspapers, most towns would have one or two, many clubs and similar organizations would have one.
Historically there were two modes of distribution, "publishers" and "common carriers".
Publishers (like newspapers) had full control over their content, and also had full responsibility for it (e.g., if they printed something libelous, they could be sued).
Common carriers (like the phone company) had no control over the content, and no responsibility for it, either (you couldn't sue the phone company if someone used the phone to plan a crime, for instance).
Google and their ilk want to have it both ways. They want the full control of publishers, and the zero responsibility of common carriers.
Historically, power without responsibility has invariably been a recipe for abuse.
I think they should have to choose one or the other.
That’s the way it was until the passage of the Communications Decency Act. Early social networks found themselves in a bind: if they tried to moderate for, say, spam, or porn, or copyright infringement, they were then liable for everything their users published. The CDA was an attempt to solve that problem.
The impact of that is asymmetrical, though. A company that's making 1% profit (thinking of a rural TV station here, not necessarily Google) can't afford to lose 10% of its viewers, so they dial down their programming to the least objectionable possible. That's how we get into a situation where most people want to watch Breaking Bad or Game of Thrones but what they get are Brady Bunch reruns.
There's little free market for the consumers in tech in general. The barriers to entry are extreme - between inherent complexity of the products, enormous capital investments required, and scale-related benefits (economies of scale, network effects), few can afford to start a real competitor. You can't just make a new Facebook or a new smartphone or even a new coffee machine - not without making a faustian bargain with investors, a deal which is usually the root problem behind why technology sucks.
Consumers only get to choose out of what's available, and in this market, it's hard for an entrepreneurial consumer to make some things available when the market isn't providing them.
The analogy to voting is flawed in that actual voting is much too powerful, because it involves politics and the use of force. When you "vote" with your dollars your choices are to either support something or ignore it and spend your money somewhere else. The amount of support you can provide is limited by the economic surplus you've created, and no matter how much money you amass you can't just shut down something you dislike as long as other people continue to support it. Voting in the political sense lacks these safeguards: a majority (or vocal minority) can subsidize their pet projects with the opposition's resources, or prohibit harmless activities merely because they find them distasteful.
What? So because someone or something has more money than us, we should just say screw it and buy products we are morally against? That makes no sense.
> Edit: Within the law. To be honest, there is very little I see that should be censored beyond CP.
The thing is, people are never going to agree where the line is drawn. So I’d rather let individual companies decide where they draw the line, and if that happens to be not where you’d agree, then you can go support an alternative, of which there are many, that draws the line elsewhere. And for those who don’t want a private entity owning the infrastructure, there are always solutions like IPFS.
> The thing is, people are never going to agree where the line is drawn.
True, but I currently trust the line drawn by the law orders of magnitude more than I trust the one drawn by a bunch of faceless reactionaries at a handful of megacorps.
Also, theoretically, I can participate in changing the law when it's either insufficient or overbearing. I have zero say in the corporate policies of the day.
If you can determine that a particular piece of information belongs to some unpleasant or dangerous category, mark it as such, like a spam filter does, or vaguely how FB does with posts. Importantly, do not remove it unless the law forces you to.
Then let users switch between the filtered and unfiltered view, or look at the analog of the "Spam" folder.
This allows people to study pieces of information that are deemed "controversial" or even "malevolent" and form their own opinion, if they're so inclined.
Even more, it could apply several different cultural filters, like "content likely offensive for X", where X is a major religious or cultural group, much the same way as many providers currently mark content as inappropriate for children.
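The "label, don't remove" scheme above can be sketched in a few lines. Everything here is a hypothetical illustration (the label names, the `Item`/`visible` helpers); the point is that filtering becomes a per-user view over labeled content, with nothing deleted, like a spam folder.

```python
# Sketch of a "label, don't remove" content filter.
# Label names and helpers are hypothetical illustrations.
from typing import NamedTuple, FrozenSet, List

class Item(NamedTuple):
    text: str
    labels: FrozenSet[str]  # categories assigned by a classifier

def visible(items: List[Item], hidden_labels: frozenset) -> List[Item]:
    """Return items whose labels don't intersect the user's hidden set.
    Nothing is deleted: filtered items remain available in an
    'unfiltered' view, the way a spam folder keeps spam."""
    return [i for i in items if not (i.labels & hidden_labels)]

feed = [
    Item("Vote on Tuesday at your usual polling place", frozenset()),
    Item("Miracle cure, doctors hate it",
         frozenset({"disputed-health-claim"})),
]

# Each user picks their own filter set; the platform only labels.
filtered_view = visible(feed, frozenset({"disputed-health-claim"}))
unfiltered_view = visible(feed, frozenset())
```

The per-user `hidden_labels` set is also how the "cultural filters" idea would work: different groups simply subscribe to different label sets over the same underlying content.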
and then there will be a million "Top 10 things the government doesn't want you to know" videos that amplify whatever it was they were trying to censor
The Govt pontificates about big tech issues without putting in the effort of making laws. Big tech is forced to anticipate what might defame them or be used by the Govt as ammo to garner public sympathy. It is a delicate balancing act between censorship and freedom.
Zuck tried to put the onus on the Govt to define right and wrong, true and false, but to no avail. It is hard to please everybody unfortunately.
I really don’t see the hard part. Just because we consider something to be harmful and negative (like propaganda) doesn’t mean making it illegal is desirable.
Even for the things that are illegal, it can be illegal to publish something but legal to provide connectivity services to the offender.
I apologize if I missed your point, but in the case of fake news for instance, iiuc, you're arguing that fake news on FB is legal and that FB isn't responsible for such content on their platform.
Information warfare is fairly real and potent in how it has weakened US democracy and vaccination efforts recently. Ultimately, arguably, this has real consequences for the economy, national security etc.
I am not saying that FB is not responsible. I am saying that FB should not have their services blocked by ISPs or government because of fake news on their platform.
You're correct in your observation but state-sanctioned censorship is not the solution.
In case we're talking around each other; what kind of laws would you like to see put in place?
> So user generated fake news is ok, and so is foreign propaganda influencing elections?
I mean... yes? People are ultimately responsible for their own worldview and vote. They are also perfectly entitled to consume any information from any source they please, aren't they?
A side note: foreign propaganda is not worse than domestic propaganda. Swarms of dedicated activists using rioting and other violence to achieve political goals (the dictionary definition of terrorism), in collusion with a sympathetic tech industry that suppresses any dissenting thought, is propaganda and it does influence elections. When these companies do things like ban Trump, they are illegally making a campaign contribution without abiding by the laws of campaign finance. When they do things like censor discussions of the lab leak theory, they propagandize the entire world by freeing the CCP of the bare minimum for accountability.
I am not a fan of any approach here and largely agree with you.
Media censorship very much existed in pre-tech journalism as well, and it was more blatant - e.g. the Iraq war WMD claims. Big tech censorship mirrors this but in a very limited manner, since we know it is happening at any point in time and such information is available outside of the major platforms, which wasn't the case pre-mainstream-internet.
Ultimately, big tech regulation of content is similar to how a WSJ or NYT would manage what they publish. They've been forced into this position with all the criticism over the last few years - it isn't something they wanted to invest in. It looks more like censorship because of the stepping back from the previously laissez faire approach to content, whereas NYT's baseline was self regulation (so it wasn't as apparent). Big tech media is privately held like a bar or a restaurant and in my opinion have every right to control who is on the platform and what is not ok to say - if one doesn't like it - they can leave.
If you live in a more populous state like California, you have much less voting power in the Senate and somewhat less voting power in the house and the executive office.
I can much more easily leave a company that I don’t like than the government. A company does not have the power of the state to impose its will on me.
This is the same government that less than a year ago wanted to come down on Saturday Night Live for being mean and is passing laws at the state level right now to forbid teachers from teaching history that doesn’t conform to its worldview.
It never ceases to amaze me that people want to give the same entity more power that can and will actually take your freedom away.
> A company does not have the power of the state to impose its will on me.
No, but it may have the economic power of a (near or total) monopoly to do that which - depending on the time and the place - may be more powerful than the state.
I don't think it's hard to extrapolate from current conditions. Corporations are currently quite powerful - they control the main lever of politics, which is money. They are already using those levers to bend governments to their will, utilizing pet governments to produce favorable tax conditions in certain countries to reduce their contribution to others. Eventually there will be small governments that are almost entirely puppet states of large corporations. Those corporate states will grow and consolidate, and you'll live in a world where corporations have as much or more power than some small legitimate countries, then some medium ones, then eventually some large ones, and finally more power than anyone else.
Corporations like amazon and google might not be self-perpetuating due to the entropy concentration inherent in companies that get that large but imagine if you had a organization of that size controlled by an AI officer team. It'll happen, just a matter of how long it takes.
So I listed all the things a powerful government can do - control the media to enhance its worldview and threaten to take away private property, arrest and kill innocent people with impunity, take away freedom over petty crimes, etc.
And I should be more worried about corporations evading taxes?
I can tell you I’m much less worried about walking into any of the Big Tech companies and being treated unfairly than I am driving down the street and being harassed by instruments of the government.
As far as using AI to discriminate. Law enforcement and the government has been discriminating for hundreds of years. They don’t need AI to decide to harass me.
The easiest answer for all these problems is decentralization and choice. Google and other tech companies are effectively governments. They hold power and influence over billions, are insulated from competition by network effects, and also regularly act in monopolistic ways. They need to be broken up and regulated.
Can any of these companies arrest me because I “fit the description” or take away my life and liberty? It’s a fact that because of the way that the US is setup between the electoral college, two senators per state and gerrymandering it’s not “majority rule”. This isn’t political. It’s the Constitution as designed.
Until tech companies can take away property via eminent domain, money via civil forfeiture, or put me in jail, they are not anything like the government.
A. Private companies cannot force you to do a single thing! Governments have entire agencies and departments dedicated to keeping people they deem dangerous in line, using a list of powers up to and including the right to take their citizens' lives, if they so choose. No private company has this kind of power! Not even close!
B. People are not empty vessels, free to be molded by wily corporate villains. People possess values and opinions, and (much to the chagrin of both corporations and politicians) it is very difficult to change people's minds.
I’m a European so I’m usually one of the first to defend legislation of companies. But when it comes to censorship that should entirely be down to the platform. Some platforms might have a no nudity policy, some have a zero trolling rule. Others call fair game to all of the above. Different people like to engage on different levels: some are into high brow arts, others just want to trade stories about getting smashed on drugs. And there’s a whole plethora of interests in between from mums talking about the different shades of brown their kids have squeezed out this week to teenagers talking about nipple slips. None of them have any less right to talk in a safe space than the other. So why should there be a law disclosing what people can talk about?
Ok, I agree that QAnon, anti-vax chat and so on and so forth cause more harm than good. But is it really the government's place to dictate that? What if the government were the one setting these narratives (not that hard to imagine, given Trump and Johnson are two of the biggest serial liars in western political history)? Remember Trump threatened to shut Twitter down because Twitter fact-checked Trump's post about election rigging.
For free speech to work, you absolutely need companies to be responsible for what they consider acceptable content rather than the government to dictate it.
" Some platforms might have a no nudity policy, some have a zero trolling rule. Others call fair game to all of the above. "
If this all came with a duty of interoperability, I would be fine with that. But walled gardens that do their utmost to make leaving them really unpleasant and possibly expensive to the users ... that sounds a lot like coercion. And European law usually recognizes that individuals coerced, even softly, by large businesses, need protection.
That’s a separate problem though and should be handled like so. Dictating in law what content independent platforms can and cannot disallow doesn’t solve the walled garden problem. It just places greater challenges on new entrants / would-be competitors while still allowing walled gardens to exist. Plus you also then lose the diversity of TOS that might attract new users to new platforms. It’s a lose/lose outcome that doesn’t even attempt to solve the problem you’re identifying.
The problem is that these companies are open and free until the point where they gain a significant network effect. Once they are in a position to aggressively eliminate competition they start appointing themselves as the arbiters of truth.
I agree that they should be able to change their policies but if they are going to edit content then they should be treated like any other publication that has editors and be held legally responsible for the content that they publish.
A better approach would be to give the user an option to select between filtered and unfiltered content on install with the default being unfiltered and the filtering being provided by the users preferred third party entity.
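As a rough illustration of that proposal, here is a minimal sketch of user-selected, third-party filtering with an unfiltered default. All names here (`FilterProvider`, `Platform`, the tag-based blocklist) are hypothetical and purely illustrative; no real platform API is being described.

```python
class FilterProvider:
    """A third-party filter the user opts into; a hypothetical, tag-based example."""
    def __init__(self, name, blocked_tags):
        self.name = name
        self.blocked_tags = set(blocked_tags)

    def allows(self, post):
        # Hide a post only if it carries a tag this provider blocks.
        return not (self.blocked_tags & set(post["tags"]))


class Platform:
    def __init__(self):
        self.posts = []
        # user -> chosen FilterProvider; absent means the unfiltered default
        self.user_filter = {}

    def feed(self, user):
        chosen = self.user_filter.get(user)
        if chosen is None:  # default: unfiltered, per the proposal above
            return self.posts
        return [p for p in self.posts if chosen.allows(p)]
```

The key design point the comment is making is that the platform only routes content; which filter applies (if any) is the user's choice, and the filtering logic itself can live with an independent third party.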
The great thing about the internet is if you don’t agree with Facebook, Twitter or Google’s TOS you can use another platform or even publish your own platform.
People might talk about FAANG as having “monopolies” but the web was built on independent Joes hosting personal websites, and if the content is good then people will eventually find their way there. And we’ve seen the rise of other social platforms precisely because people have felt they wanted to communicate under different TOS, so this isn’t even a theoretical point.
No, they should host everything until legally obligated otherwise. If CP was legal they should host it, and if you have moral qualms with that notion you should write your local representative and let them know what content should be regulated.
People join specific social groups because of conversational bias. It might be a mums group where the content is mainly baby related. Or a theatre group with talks about fine arts. Or a retro gaming group that talks about old computer games. They will have policies in place to ensure the content of their group stays focused. Some groups will be aimed at families or being safe for work so might have a no nudity rule. Some might have a no trolling rule because they want a friendly atmosphere. Some might have a no sales rule because they don’t want their group to turn into yet another market place. These are all TOS, censorship rules if you will, that are placed in specialist groups.
Your reasoning would say I could join a kids’ cartoon group and post extreme pornography because it’s legal, and if any parents object then they should have the law changed to ban porn for everyone.
Quite obviously that’s a dumb way of managing online content. Let the platforms manage what content they deem appropriate for their specific audiences and if you really want a zero-censorship community then you personally should join one rather than forcing every man, woman and child into wading through the same content you personally enjoy.
The same way we regulate any business and mandate they abide by regulation. Or the same way we force phone companies to be regulated as common carriers. The same way your power utility cannot cut you off based on your political opinion. All it takes is political will.
I'm not entirely sure that's the best model for today, but that's the historic model for a private business that's compelled to allow all legal speech. That's why UPS and FedEx still have to deliver to the New Order (the current incarnation of the American Nazi party).
I partly agree with you. But who gets to make those decisions at Google and what happens to your data when they do? These massive corporations have found that they don't need to value customer support and that includes immediately blocking you from everything you have at their discretion.
Does it really matter who? The point is it is a corporate term of service and if anyone doesn’t like it then they’re free to use another platform (of which there are many).
> what happens to your data when they do?
That’s a more interesting question. In an ideal world everyone would have offline backups of anything posted online but clearly that’s an unrealistic expectation. I’d hope the platform would offer its users a path to migrate off, even if their account has been publicly banned. Sadly history has demonstrated that’s almost never the case.
Maybe that is where the government legislation needs to be? Stating that banned accounts have a grace period to back up their content?
> These massive corporations have found that they don't need to value customer support and that includes immediately blocking you from everything you have at their discretion.
I agree, but that’s a tangential point, and legislating that Google et al host any and all legal content wouldn’t fix the customer service problem.
Well then 00-10 was the platinum age. Few behemoths, lots of smaller companies, less focus on marketing and money, fewer idiots online, a focus on hosting your own stuff from 0.
But I agree, platforms should be neutral. And people should still strive to control as much of their online properties as possible. Not just give it to Amazon and Google.
> Well then 00-10 was the platinum age. Few behemoths, lots of smaller companies, less focus on marketing and money, fewer idiots online, a focus on hosting your own stuff from 0.
Eternal September was 1993. Maybe this golden age of the internet where everyone was civil and educated and almost nothing was subject to censorship didn't really exist.
To you it's a golden age, to others this is a period where the internet was thoroughly weaponized by misinformation agents to undermine democracy and civil societies around the world.
> to others this is a period where the internet was thoroughly weaponized by misinformation agents to undermine democracy and civil societies around the world.
If you said something like this in the 1940s, you'd get a bunch of people nodding their heads in agreement and using it to justify the McCarthy hearings.
Maybe I'm naive, but I still believe the lesser of the two evils is to err on the side against censorship.
Also, if you've spent any time with crazy conspiracy theorists, you'd know that letting the government/corporations/media act as censors just adds fuel to that fire.
Conspiracy theorists have spent the last few years literally using the term "the good guys" to describe the federal government, and were clamoring for a military coup up until earlier this year.
The conspiracy theorists I know, and most of r/Conspiracy, went from distrusting the government to generally converging on Qanon conspiracy theories, and Qanon adjacent conspiracy theories like #SaveTheChildren.
They use the term "the good guys", literally those words, to describe those in the federal government who are supposedly fighting a secret war against a pedophile cabal that will culminate in a military coup.
Interestingly enough, even those conspiracy theorists who haven't fallen down the far right conspiracy rabbithole are also assuming the federal government and military are on their side. A popular conspiracy theory is that the government is gearing up to disclose that UAP reports are actually evidence of alien contact. Among those that hold this view, there's a widespread reverence for military members' testimony, and an almost implicit assumption that they are nearly infallible because of their military training. Instead of distrusting the government, they're eagerly waiting for it to agree with them and confirm their personal feelings towards reports of UAPs.
But isn't the flip side of all this censorship a reduced ability for people to tell bullshit from reality?
Why is information intelligence not taught in schools?
Much like an immune system, we need to be exposed to nonsense so that we're constantly vigilant. There will always be a group of nutters who believe in flat earth, that vaccines cause autism, etc... Trying to cut that off at the source just throws these people into underground cults. A more scalable, sustainable solution is to
1) teach people to do their research - properly, as in, don't go on Facebook and join "Flat Earth Society Boston" to find The Truth
2) teach people it's okay to change their minds - part of this spreading cultism is that political opinions are now core identities
3) teach people to tolerate opposing viewpoints, even the silly ones - point and laugh, but don't try to cancel and destroy their lives
Another thing: every time a tweet or document is censored, the replies are generally cut off as well. How can people learn to distinguish true from false if they don't get to see examples of people being wrong and corrected?
>But isn't the flip side of all this censorship a reduced ability for people to tell bullshit from reality?
No. People have limited processing cycles in their heads, and you never notice the bullshit you fall for, so you can't correct for errors you're making. Critical thinking skills are great, but they don't provide you expert-level knowledge in every field, nor can they. Sometimes your educated heuristics are just plain wrong and someone else has better information you don't have access to. I see this all the time here with content in my field - developers just get law wrong all the time.
You might remember an era when email inboxes were FLOODED with a deluge of dick pills, get rich quick schemes, Nigerian prince scams, and other low-effort, low-value content. Sure you might be able to avoid clicking on garbage, but the general health of your inbox declined dramatically.
Does that mean society is going to end because we've trampled upon the rights of the latest Cialis replacement to spam my inboxes? Probably not.
You have Johnny, saying you should wear a mask when you go to the store, because it helps against covid. And Bobby says you shouldn't, because there's no evidence it helps, and it might even be worse for you if you wear it wrong.
So you're suggesting we should remove Johnny's fearmongering, conspiracy post, because we should listen to the experts?
Experts clearly say that "There is no specific evidence to suggest that the wearing of masks by the mass population has any potential benefit. In fact, there's some evidence to suggest the opposite in the misuse of wearing a mask properly or fitting it properly,"
Expert knowledge by WHO.
...and a month later, should we delete Bobby's post too? Do we undelete Johnny's post?
This is actually pretty interesting because your post is premised upon a massive misreading of what actually happened. It shows you can spread disinformation while thinking you did your research.
The expert knowledge was that mass usage of masks early on in the pandemic could prevent frontline healthcare workers from accessing needed supplies while logistics spun up to meet demand.
"There also is the issue that we have a massive global shortage," Ryan said about masks and other medical supplies. "Right now the people most at risk from this virus are frontline health workers who are exposed to the virus every second of every day. The thought of them not having masks is horrific."
This is literally the third paragraph.
When community spread began to drive the bulk of new infections and we've had months to spin up production on masks, obviously mass adoption changes in value.
I literally quoted the paragraph, where they said that there's no evidence to suggest that the wearing of masks by the mass population has any potential benefit.
They didn't say "it helps, but we're unable to call Walmart and buy all their stock, so we're asking you not to buy them, so we can"; they said that there's no evidence to suggest that the wearing of masks by the mass population has any potential benefit... those are two different things.
The very next statement by Dr. Ryan: "There also is the issue that there is a massive global shortage, and where should these masks be, and where is the best benefit? Because one can argue there's a benefit in anything, and where does a given tool have its best benefit? And right now the people who are most at risk are frontline health workers [...]"
The follow up statement is also very emphatic that this is about mask allocation due to constrained supply. I don't get why you're trying to ignore the very clear context of the statement.
There was evidence of the effectiveness of masks. They chose to diminish/ignore that evidence because it was inconvenient to protecting the supply of masks.
There were two studies circulating around that time. One of passengers of a bus and another of a restaurant. The bus one found that the passengers wearing masks did not catch the virus and many of those that did not wear masks did catch it. The restaurant one found that people in the flow of AC air caught it and those not in it did not. That meant that it was airborne and there was some evidence suggesting mask effectiveness.
Why should you listen to the CDC, WHO, etc… when there is a better predictor of reality?
You seem to be making an argument from authority by leaning on experts, and I don’t fully disagree with that approach either. But trusted authorities regularly betray trust, and use their label of expertise to push their own agendas. A recent example is found in the false attribution of the PNW heat wave to climate change (https://cliffmass.blogspot.com/2021/07/flawed-heatwave-repor...). They also can be wrong solely due to making a mistake (COVID had many examples of this with rapidly changing guidance). So you aren’t free from the need for critical thinking skills, because in the most important matters you have to still challenge them. To be able to do so, you need to have trained that muscle beforehand.
>You seem to be making an argument from authority by leaning on experts
I don't think that's the core of what I'm saying. I'm saying people have limited mental time, so devoting an unlimited volume of time to sorting through bullshit is not feasible.
Everyone is going to need to make choices, but if statistically the options presented before them are better, we'd expect better outcomes in general.
'Critical thinking' is one of those things that people keep raising as the catch all solution. This line of reasoning states that it doesn't matter what options are on offer because people will calculate the best ones! Unfortunately, they don't.
Most of our language fluency tests rate people's skills in this area; take the ACTFL scale for instance. The sad reality is that when people are provided with language based reasoning testing, many people perform fairly poorly due to common errors, even in test-based situations. People misread statements, misunderstand their meaning, have trouble moving from specific to general or vice versa, have difficulties tailoring their message to their audience, etc.
In general, the very HIGHEST level of linguistic ability in specifically tested scenarios is what we assume out of everyone as the baseline when having these discussions about social discourse. This is an unreasonable starting point.
I tried reading the article you linked to. I honestly can't get past the intro:
As noted above, the first bullet of the main findings states that the heatwave was "virtually impossible without human-caused climate change." Sounds very certain, doesn't it? Virtually impossible.
Then read their next bullet:
"The observed temperatures were so extreme that they lie outside the range of historically observed temperatures. This makes it hard to quantify how rare the event was"
On one hand, they say it is hard to quantify how rare or unusual the event was, but on the other, they claim the event was virtually impossible without human-caused climate change.
Both statements cannot be true. You can't be uncertain and certain at the same time.
What? The writer seems intent on purposefully misunderstanding the study. "This makes it hard to quantify how rare the event was" is equivalent to the situation of not being able to speak about a "100 year storm" because there haven't been any storms that strong in recorded history. In other words the data is so different from historical data that there's only one reason why: human-caused climate change.
If you can’t get past the intro, I would say respectfully, you’re not giving it a fair chance. You should read the full post and the underlying study being critiqued before judging it.
> In other words the data is so different from historical data that there's only one reason why: human-caused climate change.
No that’s not the case. This PNW event would have happened with or without climate change. The study being critiqued used a hyperbolic claim that the event was “virtually impossible without climate change” even though their own data shows it was virtually impossible (highly improbable) either way, and that it was more due to a rare coincidence of many factors. The Professor who wrote this post I linked also has prior posts analyzing this event and showing that really climate change contributed a few degrees to the peak temperatures, but that it would have been a record breaking event either way.
As you can see here, the purpose of climate change denialism isn't to convince anyone. It's just to delay serious action for as long as possible using handwaving and appeals to authority. Here, the fact that the author is a Professor [sic] is used to add weight to his arguments, even though a vast majority of "Professors" acknowledge that climate change is real.
Their next claim is that the June heatwave was enhanced by 2°C by global warming, which is not out of the realm of possibility.
But think about it. Considering that they state that the heatwave had maximum temperatures 16-20°C warmer than normal, by their OWN ADMISSION only about 10% of the heatwave was the result of global warming. Thus, a record-breaking, unique heat wave would have occurred without global warming.
Imagine if they had stated that. You would not have seen many headlines: Global warming contributed 10% of the heatwave!
This guy is frankly so wrong and misunderstands what he's talking about so badly that he should be completely ignored and you should not cite him any more.
Imagine that global warming dries out a forested area to the point where it catches fire due to being so dry. This guy is saying the equivalent of "the fire burned at 800°C, and global warming only accounts for 0.25% of that!"
I gave the article a fair chance, and facepalmed repeatedly at his inane arguments. He's a crank and you should ignore him.
Sure, but in the meantime the misinformation voters get to pick the textbooks. This is a bit like educating people in fire prevention when the forest is already ablaze.
> Much like an immune system, we need to be exposed to nonsense so that we're constantly vigilant.
No: much like an immune system, we need to be exposed to vaccines (i.e. education on how to spot deception, knowledge of what scams are currently going on). Enough people trying to deceive you, and eventually someone will succeed.
Indeed, and that was also the period where Reddit happily hosted a whole bunch of extremely tasteless and borderline illegal communities centered around things like pictures of overweight people and sexualized children.
A quick googling suggests that the first wave of closures was in 2015 [1] after a crushing wave of negative publicity and advertiser pullouts. The other really high-profile one was r/The_Donald, which wasn't closed until 2020 and even has its own Wikipedia article. [2]
> But certainly prior to June 2015, there was much, much broader tolerance for racist, sexist, transphobic content on the site.
And now it has moved to other, more hidden platforms, where there is no one to write counterarguments, and where it is sometimes (tor, freenet, ...) impossible to identify someone who writes actual threats and not just "yo momma so fat..." jokes.
I mean, that's literally what most of the posts are arguing for in this thread— that this needs to be a watershed moment to get serious about distributed, uncensorable alternatives to products like Google Drive.
But in any case, "people will find alternatives" has never been a valid reason not to act (either here or in other popular cases such as guns/suicide). There is real value in having standards of conduct that go above and beyond the bare minimum of "not illegal." Moral and ethical value, of course, but also economic— in the reddit case, ultimately being a place that was viable for ad spends by mainline brands who wouldn't want to be associated with a site whose public image was that of being a safe space for hate speech.
Google is a little different since there's no r/all page for GDrive that can be gamed to show this content, and nor is Google likely as worried about the safety of its reputation as an advertiser.
I think this is one of those things that sounds nice on the surface but isn't really tenable in reality.
Should they only host, or also surface, "everything"? Is deranking something censoring? What about not promoting something? If the idea is that everything is given equal weight, then things pretty quickly become a cesspool.
+1 I have never been “outraged” by something posted on the Internet. Disgusted, disappointed or shocked, sure. But in no case did I think the hosting company was somehow responsible for it.
> Between 2010-2016 was the golden age for these companies actually being free and open.
It was in the early part of the 2010's that Google, Twitter and others started censoring Islamic content in the name of antiterrorism and stopping the spread of extremism on their platforms.
A straightforward reading of the 1st Amendment indicates that there can be no such thing as illegal content under US law, including whatever you are right now considering proposing as an exception. An act involving speech may be illegal; if you make a credible threat of imminent, irreversible harm others are free to take you at your word and defend themselves—the justification here is the harm which is reasonably expected to follow, not the content of your speech. If you lie to someone to obtain their property under false pretenses, knowing that the lie precludes any "meeting of the minds" and thus renders the contract invalid such that the property still belongs to them, then you are committing theft. Your punishment derives from the act of taking property which did not belong to you, not the fact that you lied in the process. And of course what you say may be used as evidence against you without the speech itself being illegal.
There are some more problematic areas where the Constitution itself is inconsistent. Copyright should not exist, for one; the core concept is utterly incompatible with freedom of speech. The Supreme Court even recognized this at one point—it's why we have the concept of "fair use" in the first place—but "fair use" is a poor compromise which does not fully negate the infringement of the freedom of speech. When you have one clause saying that Congress has the power (but not the obligation) to do something which would infringe on the freedom of speech, and another clause later passed as an amendment saying "Congress shall make no law... abridging the freedom of speech", the obvious reconciliation of these clauses is that Congress is barred from exercising that enumerated power because it would violate the later amendment. The Court tried to strike a balance instead… but it still amounts to Congress passing a law which abridges the freedom of speech, despite the limitations imposed by the Court.
There was one other notable exception to free speech that undermines your point: obscenity is not, or has not been, considered speech under the terms of the free speech amendment. So, at least historically, there is some precedent for considering that some forms of human expression can be censored on their own merits.
I should note that I am anti-censorship and consider such laws absurd, but my point is that we can't rely on readings of the constitution to self-evidently protect us from such things.
Many poor precedents are grounded in strong emotional reactions at odds with basic principles, and in my opinion obscenity cases are a good example of that. Still, the classification as obscenity is more about the form of the speech than its content. In fact the more content there is, in the form of either expression or artistic value, and the more the form contributes to accurately conveying the content of the speech, the more likely it is that the speech is considered protected, even if the form would otherwise be considered obscene. I'd rather the courts didn't get involved in trying to decide whether a controversial turn of phrase has enough merit to warrant protection, but in any case I don't think you can extend the principles underlying prohibition of obscenity to exercising control over which information can be conveyed.
> obscenity is not, or has not been, considered speech under the terms of the free speech amendment.
What part of "shall make no law" isn't clear? It's true that obscenity is an exception to 1A, but that's something that some people made up after the fact, in direct contravention to the framers' stated intention.
The founders weren't short on ink. If they had wanted to equivocate, they were certainly free to do so. The fun really starts when some people decide that their (least) favorite amendments are subject to less (more) interpretation than others.
The law where? That is part of the problem. Every piece of content would need to have hundreds of flags, one for each jurisdiction in which the content could be viewed. That may not be enough for some countries given how easy it is to fake your origin address. It's solvable but not an easy problem.
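To make the "flags per jurisdiction" point concrete, here is a minimal sketch of that kind of lookup. The country codes and flag names are illustrative placeholders, not statements of actual law, and a real system would need far more jurisdictions and far messier rules (plus the origin-spoofing problem noted above).

```python
# Hypothetical, illustrative restriction table: jurisdiction -> flags it restricts.
# These entries are placeholders, not legal claims.
LEGAL_RULES = {
    "DE": {"holocaust_denial"},
    "US": {"copyright_infringing"},
    "CN": {"anti_ccp"},
}

def visible_in(content_flags, jurisdiction):
    """A piece of content is hidden if any of its flags is restricted
    in the viewer's jurisdiction; unknown jurisdictions restrict nothing."""
    blocked = LEGAL_RULES.get(jurisdiction, set())
    return not (set(content_flags) & blocked)
```

Even in this toy form, the same content gets a different answer per viewer location, which is the comment's point: "within the law" is not one flag but a matrix of them.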
I say allow even CP. Take the whole system of control right out of the equation. Make the service 100% free, like sunshine, warming saint and sinner alike.
Then, after we have that working, and we want to remove the CP, find another way. Do it on the client side or whatever.
> Edit: Within the law. To be honest, there is very little I see that should be censored beyond CP.
Self harm, terrorism, revenge porn, fabricated news. Just to name a few.
The Internet is very different from what we experienced in the 90s. The ingress barrier guaranteed good content (or at least entertaining content). Now the barrier to content submission has been lowered so much that really anything makes it onto the Web, and this is not good. There are reasons, after all, for having locks on entrance doors, right?
I am quite happy that Google "got the message" from regulators that misinformation is a real danger and we should apply zero tolerance to web polluters.
> To be honest, there is very little I see that should be censored beyond CP.
And, just to make the point of how hard it is to get consensus here, I'll disagree with you about the CP under some conditions.
I think others will too.
For example some would say that voluntarily made CP (e.g. 17 year old's nude pics) should be uncensored, others might just include an exception for simulated images, still others would say allow everything as long as there's a very low likelihood of victim or victim's connections stumbling across it and no money is changing hands.
It seems like consumers can't agree on why they are upset with these companies. I don't even think we can agree that a private company shouldn't be making decisions about what information should be allowed or removed.
In America it shifts it to the US constitution, which provides a more principled approach to speech than the biases of tech companies. Is it a perfect solution? No. But it is closer to it than the present situation.
Very little beyond CP? Seriously? Why do you think the suffering of an abused child is somehow the only horror we should censor?
Religious domination, violent intimidation, subtle suggestive manipulation, outright marketing lies, even government propaganda should all be censored to some extent just to be within the law.
These platforms must be editors, and yes, it means less trash per second being published.
>Within the law. To be honest, there is very little I see that should be censored beyond CP.
Within the law of which country?
Should copyrighted content be blocked in the US? What about in the Netherlands?
Should Holocaust denying content be blocked in Germany? What about in the US?
Should anti-CCP content be allowed in China? What about outside China?
If you want to do the bare minimum according to the law, you are going to need a different implementation for every country. And even then people in countries with more lax laws are going to think you are acting unethically by censoring content in more restrictive countries and vice versa.
EDIT: I have no idea why a comment that amounts to "different countries have different laws" is being downvoted.
If you want to operate within a country you should follow the laws there. If the laws are immoral, you shouldn't operate there, or you should accept them and make money without being moral. That choice is up to the company. It's why google building AIs for china should be controversial.
This is reasonable as an ideal, but could be harmful in practice. A significant percentage of the U.S. population believes inarguably wrong and demonstrably dangerous things at this point. It is possible that the only effective way to fix that is corporate censorship. That wouldn’t make me happy, but I’m not going to agree with letting a crazy person steer the Titanic into an iceberg just because it’s their right to do so.
Ah, but how do you know you're not the one believing crazy wrong and dangerous things? All those terms are highly relative and if your answer is argument by authority, well ...
But none of those areas are the ones where people get worked up about misinformation. Unless you have been told there is a "consensus" about things like COVID, vaccines and climate change, where there most certainly isn't?
Not commenting on covid or vaccines, not my field, but climate change for one has pretty much been established to a great deal of accuracy (that climate change is man made)
Even someone like Senate leader Mitch McConnell isn’t denying it anymore. Research is still ongoing as to what extent we are going to be impacted.
So, even if someone is denying climate change or reading misinformation, it doesn’t change that man made climate change is here, and what its causes are.
So yeah, there is scientific consensus in that area on the broad perimeters.
Now if you believe something different, that just goes against what we already established!
"Man is changing the climate" is a very weak statement if taken literally, and not really what people mean by the term climate change. Of course man has some sort of impact on the atmosphere, as we do on all aspects of our environment.
Once you get into questions like, by how much is it changing, are those changes a big deal or not that serious, by how much does it really affect the weather, even what the actual history of global temperature is, there is a lot of disagreement even amongst scientists, although of course given the tiny size and cliquey nature of many academic fields, criticism from outside the field must always be considered as well.
Those are pretty much already answered, and by many people. The Wikipedia link goes into that :) Data suggests we are currently looking at 2 degrees temperature change as a global average.
While everyone is free to come up with different answers, there isn’t anything credible at the moment.
A friend of mine programs climate models based on latest mathematical insights and data. For 10 years he would put his hand in a fire that it is happening and it will be bad.
Point by point :
How much is it changing? 2 degrees hotter on average.
How does it affect weather? More outliers such as the recent heat wave in the NW of USA.
Is it a big deal? Yes, because it unbalances a lot of eco systems and our ability to cope with it.
History of global temperature? Has been measured for hundreds of years now and we can deduce temperatures before that.
Is there a lot of disagreement between scientists? No, there isn’t (97% agree climate change is man-made).
Clique nature of academic fields? That’s an entirely different topic and doesn’t change the data.
I guess education and honest information would be too radical an approach.
Governments and leaders are concerned that people don't trust them, yet the truth is, they don't deserve to be trusted. When the system is designed to create a placated populace instead of critical thinkers, those in charge are routinely lying and blatantly misleading instead of informing, then it's no surprise people will believe in all kinds of fringe ideas.
Hiding and shunning information can be a temporary band-aid, but the inevitable effect is that people will trust official sources even less.
>A significant percentage of the U.S. population believes inarguably wrong and demonstrably dangerous things at this point
Yes. Both sides can agree that they think the other believes in falsehoods. Since one person has one vote, there is effectively nothing you can do about it.
If I host a document from a certified not-a-doctor telling you that you should treat your children's autism by feeding them bleach (which would certainly constitute abusing all of them, and perhaps killing some), do you think online marketplaces of ideas should ignore the fact that half the population is dumber than dirt, ignore the dead kids, and keep serving up poison?
The case I gave is not in any way hypothetical: there were many popular, actual self-published ebooks on Amazon instructing you to abuse and possibly kill your kids with bleach to cure their autism.
I'm not going too far; I'm speaking empirically. Almost a quarter of the population has an IQ below 90 and is empirically challenged, and a substantial portion of the remainder, including those with reasonable or even high IQs, are completely dysfunctional, because regardless of how functional a brain they were born with, they have basically ruined it by training it only to consume and create trash.
One has only to talk to a large enough number of one's fellow humans to realize half of them are in fact dumber than dirt. If that weren't so, literature about bleaching your child's insides to cure their autism, or other insanity of the same grade, would find no takers.
If you find the number of bleach swillers insufficient, consider that nearly 40% of us in America believe that a genie created the earth less than 10k years ago. Overwhelmingly this is because they do not possess the intellectual aptitude to dismiss this theory; if their brains were highly functioning they would do so despite conditioning. Plenty of people will live 70-100 years and pass away in their hospital bed without ever having turned their brain to the on position.
On the flip side others aggressively question the reality they are given but because they lack the inherent intelligence or have spent their entire intellectual life consuming the equivalent of junk food they are utterly incapable of discerning the difference between insane fantasy and truth.
The comment provokes a thought, however, ugliness aside.
Most of us here are presuming (I presume) that we are immune from misinformation, disinformation, etc. Why is that? What quality distinguishes us, the observers, from them, the victims?
It seems obvious that education might be the decider. But I'd like to know: what quality of the HN reader distinguishes him from the victim of misinformation?
There is plenty of room to have a nuanced conversation about different viewpoints while also taking down obvious lies and crazy. The choice is not between moderating everything and nothing.
We are all vulnerable to misinformation that affirms our existing biases or that comes from individuals/organizations that have either previously been reliable sources or we have incorrectly regarded as reliable.
If the Washington Post ran an article that stated that a former NASA scientist believed the rate of climate change was vastly higher than previously anticipated I would probably buy it.
If it later turned out his specialty was Chemistry, he had been fired for using his expertise to make meth, and his research was bunkum I would have to eat crow and watch that publication far more carefully.
On a more realistic note I believed to my chagrin that Iraq had weapons of mass destruction and reading of some of the awful crimes of Saddam I thought not going in would be a great act of moral evil as it would mean abandoning the citizens to be victimized by a monster. 21 year old me didn't realize there was no good proof of WMDs, that we would kill half a million of them, and that we might well leave them no better off if our efforts collapsed shortly after we left.
Insofar as what separates the reasonable from the rubes:
- A modest amount of accumulated understanding of how history, science, math, stats, etc. work, sufficient to reject obviously untrue statements
- The understanding that everything you understand or think you know ought to be criticized and revised over time in response to new evidence. Valuing truth over authority and conformity.
- An understanding of the common failure modes of logic and reason so that you can recognize bullshit when you see it
- A reasonable strategy to use all of the above to continually evaluate sources, to see if they are and remain trustworthy.
What we really all need is the right degree of epistemological humility. I think I'm better than average at discerning misinformation, yes, but I know I have been wrong in the past, so I never weight my conclusions at 100%. It's mostly people who live only in the political/social sphere, and never have to bang their heads against the hard truth of physics/nature, who have complete certainty in the correctness of their positions.
Host absolutely everything? Instructions on creating explosives, chemical and biological weapons? Some future doomsday weapon? Would you say everything should be available up to and including methods for any unbalanced individual to single-handedly kill thousands or millions? How about your personal details, ID, address, employer, medical history, surfing, shopping habits? Would you even be ok with hackernews piercing the veil of your "throwaways885" username and publishing it?
Platforms are not publishers. The publisher of these things should face legal action (including the removal of their content). It's not for the platform to pick and choose.
Can you define "platform" in this context? Is this a legal term?
It's interesting to see this dichotomy between platforms and publishers in these types of threads. I assume it stems from a reading of Section 230 somehow, but the word "platform" never appears in that text.
This is what I think people often miss in those discussions. There is an assumption that everything shared/'hosted' online is someone's opinion and thus should be allowed to be shared.
That was never the case on the internet; don't trust what you read online. What should be regulated is what can be advertised, because most of what you read is indistinguishable from an ad. "Smoothies cure cancer": an ad, or someone who believes some antivax-type bullshit? What if it's an ad to vote for a particular party because the other will do unbelievably bad things? Fanatic or serious opinion? Should you allow those kinds of posts on your platform?
In my opinion you should, just because it is the only way to make sure that your platform is not taken seriously and will prompt people to read something a bit more serious. It will stir controversies on Twitter, but who IRL seriously considers the opinion of a Twitter person?
Edit: I went for a smoke and thought about it a bit more; take a look at Voat vs Reddit. Voat was created as a response to censorship on Reddit, and look at the cesspool of a place it is. Companies do not censor content because they have a moral stand or a political agenda; they moderate the content because otherwise it will turn into a shit you wouldn't believe (HN does it too). There are many more trolls on the internet responding to everything they can than legitimate people trying to have a discussion.
The new Eternal September started with social media, and those not hardened by the earlier internet have a hard time just dismissing what they read as 'a troll'.
> we're all outraged when these companies host some stuff that we don't like
I'd argue that there is already a (fairly) tried and tested process in place to deal with this, it's the legal system.
There are plenty of media outlets that publish stuff I don't particularly like, but almost none of it is illegal, so - to be blunt - I just have to suck it up.
Some of my friends have opinions that I - at times - violently disagree with, but I file that under one of the side effects of life, and I deal with it.
I'm rarely "outraged" by companies hosting stuff. If it's illegal, knock yourself out and get it taken down.
However if it's just really, really annoying or you find it against your own worldview, perhaps take a deep breath / drink a cup of tea* / go to the gym / hug your OH, and move on to something more important?
There are two more ways to deal with content we don't like. We've largely abandoned these to our great harm:
1) Get to know people in your community who hold differing opinions. We all need to be doing this more - fostering friendship over the things we have in common. This isn't easy, but the Western world used to be far better at it.
2) Engage in healthy debate, which means advocating a specific opinion in a public way -- either speech, column in your local newspaper, etc. -- with carefully researched references/sources, no ad-hominem attacks, assuming good faith and intent on the part of those who disagree, and respect for the differing opinions of others.
Imagine if every local community did (1) and (2) -- people would be a lot happier and would be less likely to hold unsupportable opinions, since even cursory research (i.e. prior to publicly arguing in favor of them) would show those opinions have no basis in reality.
The idea that these megaphone institutions can and must act as arbiters of what's "ok" speech and what's not is socially untenable.
The best marker of sweeping political radicalism is when no one is allowed to be neutral anymore--not even news outlets or public forums. When everything is political and everyone is forced out of political neutrality, we're in big trouble as a society.
"If you're not for us, you're against us" is an ominous statement in any context, and when that becomes a mainstream political cry, it signals that freedom itself is coming to an end (not to mention freedom of speech).
You could never walk into a private bar and demand the right to hand out Nazi propaganda, solely on the merits that "well, that's where all the people are!!"
As for neutrality, there are plenty of actively neutral companies, in action, today. They're just not very popular. Because they're filled with horrible people who demand the right to say horrible things. And no one wants to hang out with those people.
Now that the market has decided horrible beings aren't entitled to anyone else's space, the horrible human beings are insisting that the big mean bullies be forced, through threat of violence, to tolerate them.
That doesn't seem to square with the progress civilization has made over the last several centuries. We no longer torture animals, treat humans as property, believe in the 'evil eye', etc.
I think rather than "violently" you meant to say "vehemently." If not then ignore this comment. If so then you should probably edit as the two have important differences in meaning.
The legal system isn't great for this as it tends to listen to the one with the most expensive lawyers, especially in the US. And companies like Google have a lot of expensive lawyers.
> The other truth is we're all outraged when these companies host some stuff that we don't like ... and get upset when they don't host the stuff we do like.
Is this actually true? I only think certain fringe Twitter groups are mad that companies host controversial things.
There is a long history of people in the USA and elsewhere being mad that companies host certain things - pornography was illegal to distribute for decades because of such beliefs, and is still segregated from non-pornographic content, and shunned by all regular advertisers (you won't see Coca-Cola ads or Beats headphones on pornographic sites) for precisely this reason.
Complacence might be a better descriptor than mad/outraged. For example, nobody really gets up in arms over companies choosing not to host (what they deem to be) pornography, and various bans of risqué content, and of the people who produce it, from major platforms tend to get a lukewarm response from people who are otherwise vocal about free speech (see: the USA's FOSTA/SESTA)
> The other truth is we're all outraged when these companies host some stuff that we don't like
If by "we" you mean US politicians and mass media, then sure. But I'm not sure that's true for the general populace in the US, nor for other countries.
On another note - I don't like that they host things, at all. That is, I don't like that the entity which runs a search engine is also the one which hosts a large part of the videos available for free on the Internet. Or that a company making popular computing hardware like Apple is also the host and gatekeeper for mobile apps, podcasts etc.
Isn't US Civil Rights movement notable for its nonviolence overall? MLK emphasized asserting basic human rights, so that the violence of the state should be seen more clearly by contrast.
The US revolutionary movement might be a more clear example where violent action was decisive.
In a free society sites like liveleaks and wikileaks absolutely have a right to exist. As well as all the fringe conspiracy sites.
The problem with censorship by Facebook, Youtube, Twitter and Google is different. Here the government is putting pressure on the tech companies to censor content they don't like. Censor the bad people or risk antitrust action.
It's so gross and so clearly in violation of the first amendment. Even elected officials and professors are not exempted. It doesn't matter if you're elected by the people or if you're an expert in your field, if you say something that is considered 'misinformation' by the Ministry of Truth you get censored. It's outrageous, and if big tech doesn't change course we need to start building alternative platforms. But it might already be too late.
It doesn’t matter if competing platforms are built. Normal people don’t care one way or another, especially outside of how those other platforms are popularly characterized, so those other platforms will never take off. Only a minority of people are conscious of their liberties and subsequently any potential infringement to them.
As far as I can tell, no shift of the Overton window begins with 'normal people'; rather, it always ends with 'normal people'. Any campaign of this nature is a long and sustained effort over months and years. Requiring that normal people precede it on the wave of change is a guarantee that it will never begin to move in the first place.
All of the major platforms of today will surely be replaced eventually. Snapchat and TikTok came seemingly out of nowhere in roughly the same FAANG environment as today.
This is the problem, isn't it? You seem to be posing an easy question to answer. But it's not, is it? Who is Google to judge what anyone would "consider violent or garbage?"
I don't remember electing them to control this aspect of life. And where does it stop? What is the line? Who is actually defining these things?
When a small group of people control the definition of "wrong think" then we're gonna have a problem regardless of which side of the argument those people are on.
While your question is innocent enough, I get the feeling you already knew the answer.
> This is the problem, isn't it? You seem to be posing an easy question to answer. But it's not, is it? Who is Google to judge what anyone would "consider violent or garbage?"
You aren't even touching on the complexities. The original article was about "misleading content". Google is asserting that they will take action on "content that deceives, misleads, or confuses users".
Speech is violence nowadays. Silence is also violence.
One of these days, I'm going to go live in the woods and no internet. People clearly do not want a free society anymore so I may as well just check out.
Or just migrate to a country that aligns more with your views, like many immigrants have done and do today. John Locke never said anything about needing to form new ones.
The "Won't somebody please think of my children" excuse is a little selfish, considering there are people with differing opinions (who may or may not have children too).
would be great if fbk and twitter did the same thing!
any group like 'occupy democrats' 'vets against trump' - would be gone.
If they would do this to the search results, most of the news sites would no longer have top positions! I'm liking this now.
"content that deceives, misleads, or confuses users"
- funny that I used to ask people on fbook some years ago when they posted some things, 'do you believe the thing you are re-posting is true or fake? Is it funny or serious? Do you think your 'followers' think it's true when they see you post it?
Trying to determine the understanding of the re-poster, but also the 'intent' of their re-sharing. Sadly I think most of the time it was to 'deceive', aka a virtue-signal tribe thing: even when they admitted things might not be true, they still wanted the thing posted and shared, knowing others might look at it and not know it's not true.
Need to think on this longer. Wait: when G/F/T thought the Hunter Biden laptop story was fake, they affected our national elections and discourse; when they did not care whether the golden-shower oppo research was true or fake, that affected real-world stuff.
Not so sure these folks can be trusted with deciding what should be shared as true/false actually. The reach and effects of these decisions are large and serious.
>Who is Google to judge what anyone would "consider violent or garbage?"
They own and operate a service called Google Drive. They offer that service under whatever terms they decide. And the decisions they make are relevant to their own service. They likely also don't allow you to use their service to distribute illegal material.
>I don't remember electing them to control this aspect of life. And where does it stop? What is the line? Who is actually defining these things?
Google has the right to make this decision on their own platform. They don't have the right to make this decision outside of their platform, and are not attempting to do so. They're not a government. They cannot control what other sites do. They don't have an army. They're not burning textbooks or jailing teachers. They're not controlling the definition of "wrong think."
If you don't like what they're doing, you're welcome not to use their service. Google Drive isn't the only cloud-based document backup service by a long shot.
> The other truth is we're all outraged when these companies host some stuff that we don't like ... and get upset when they don't host the stuff we do like.
Yet it is rational to oppose distribution of what you think is wrong and promote distribution of what you think is right. You must have a hidden premise in there somewhere.
Is it rational, though? Maybe for a certain type of politically active person. But I believe in free expression (the principle kind, in addition to the 1st amendment kind), so it seems to me it is not rational for me to oppose the distribution of anything legal. Or is it irrational of me to have principles, rather than being maximally self-interested or social-utopia-utilitarian?
The thing is you are "promoting distribution of what you think is right" literally right now. Like, with this very comment right here, you are being politically active!
So, assuming you are acting rationally, you are right now promoting what you think is "right" ("anything legal", "1st amendment" eg the United States's Constitution), while countering what you think is "wrong".
If you didn't believe in the promotion of what you think is right, then you wouldn't be posting to argue against what you think is wrong! You would never upvote (bias) or downvote (censorship) and so on. Sure, you could argue that your style of promotion (comments on HN), or promoting your worldview in general, is better for certain outcomes, but ultimately you're still just arguing for "freedom" in your particular definition of "freedom" (still promoting or opposing the distribution of right/wrong)
I think you are equivocating. I don’t oppose the distribution of any other opinion. I don’t like those opinions, but I’m not trying to make it harder for anyone to say them. And trying to change someone’s mind about it is not at all the same as “opposing distribution” regardless of whether it has the same intent or potential effect.
I'm not trying to equivocate here (or be combative, I hope this is an interesting discussion for both of us!). I'm being serious: I consider downvoting to be "opposing distribution" of a statement by definition, since it limits the distribution, although perhaps not very effective if done by yourself.
> regardless of whether it has the same intent or potential effect.
I disagree, and I think I'm in the majority in having more outcomes-based ethics [1]
What I'm trying to get across is that you are "politically active", whether you think you are or not. "Activism" can literally involve just a bunch of friends on an online platform upvoting and downvoting. Even a small group of people doing this can be effective censorship in certain contexts, such as local elections. Sure, Google may have more cost-effective means of censorship --- larger political campaigns have to pay firms LOADS to bury stories or control online discourse without access to the power Google has --- but it's still the same result, just a matter of who calls the shots and cost-efficacy.
You might argue that controlling discourse like this is not censorship or unethical based on your definition, but as you said it can have the same intent and has the same potential effect, so to another perspective, perhaps one that places less value on the USA constitution, it most certainly is.
How would my being mildly politically active on HN contradict my belief that it is rational for me to encourage the distribution of legal speech, without regard for agreement with the content? I feel like you’re trying to chide me by reminding me that I’m engaging in politics. Yes, I’m engaging in politics. I just don’t see what that has to do with the thrust of my objection.
Your point about downvotes being censorship is troubling and gives me pause. I think it’s only censorship because of how HN fades the comment towards illegibility. But I have always thought that is a user-hostile design choice. I’d much rather you could see the score, but still be able to read them easily. I have spent enormous amounts of time carefully reading people I disagree with.
(FWIW, I did not downvote you, and I do so rarely.)
> I’d much rather you could see the score, but still be able to read them easily.
I agree. Actually I'd almost forgotten they did that since I use the StyleBot extension and have ".commtext { color: black; }" in the CSS for this site, which overrides the fading. I also enabled the setting to show "dead" comments; they aren't always worth reading, but it happens more often than you might think.
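The override described above can be sketched as a small user stylesheet (the `.commtext` selector is the one given in the comment; the `!important` flag is a guess about what an extension like StyleBot may need so the rule wins over HN's own styles):

```css
/* User style for news.ycombinator.com, e.g. via StyleBot:
   render all comment text in solid black, instead of the
   progressively lighter grey HN applies to downvoted comments. */
.commtext {
  color: black !important;
}
```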
Actually that's another aspect in its own right—HN doesn't just fade downvoted comments but removes them altogether if they're downvoted enough. And sometimes comments are actually deleted by the administrators and not just marked as "dead" and hidden from the page by default. It's their site, and I would uphold their right to not be forced to host comments they object to against their will, but there is some actual censorship going on and not just convenient "curation" of what shows up clearly in the default view.
I don't see downvotes as censorship when the downvoted comment is still available for those who care to read it, and not actually deleted. To me it's more of an indication that, in the reader's opinion, the comment was perceived as not contributing to the discussion. In general I prefer to upvote the good comments and save the downvotes for trolling, flamebait, etc. which would be likely to derail the thread. To put it in words, an upvote is like saying "check this out" while a downvote is either "don't waste your time" or "this thread belongs somewhere else". But you can still see the downvoted comments if you want.
> How would my being mildly politically active on HN contradict my belief that it is rational for me to encourage the distribution of legal speech, without regard for agreement with the content? [...] I just don’t see what that has to do with the thrust of my objection.
Yeah I'm not really making my point clear here, sorry.
My point is that a "values-neutral" platform doesn't exist, and every attempt to build such a thing is usually only "values-neutral" from the perspective of its creator. For example, I'd argue there's a contradiction even in the way you phrased it here: "legal speech" implies you are indeed giving "regard for agreement with the content", since it implies suppressing content that does not agree with some legal framework you have in mind.
> I think it’s only censorship because of how HN fades the comment towards illegibility.
It's not just that. Upvotes promote one position over another, so when one considers the statistical properties of how far people scroll down, or how likely people are to expand low-voted comments or go to another page (on platforms like Reddit, HN, etc.), the effect can be the same.
It's interesting to see that as online platforms gradually replace "traditional" journalism for how people get information, we're rehashing some of the same old arguments about what "objective" journalism is. Publishing ANYTHING, whether physical documents (e.g. newspapers) or HTML documents (e.g. HN, Facebook), will always promote some worldview and censor another, based on what is included in the publication and the ordering of the topics.
Sometimes this censorship is explicit (e.g. nixing a story, Google taking down a search result); other times it's done statistically (e.g. putting stuff "below the fold", a search result being on page 10). But our informational world is perpetually being shaped like this. Pretending that it's even logically possible to have unbiased platforms "without regard for agreement with the content" (as an example, not you, but elsewhere here it was claimed 2010-2016 was mostly censorship-free) is starting off on a wrong premise. If we start on a wrong premise, any further discussion is meaningless at best, and actively manipulative at worst (Fox News' "Fair and Balanced" slogan comes to mind)
> (FWIW, I did not downvote you, and I do so rarely.)
It's rational only if you don't think those powers will ever be turned against you. Once you realize what you're creating is a mechanism to censor rather than a particular instance of censorship, self interest should force you to realize that the censorship mechanism that you're supporting can be turned against you.
That is rational, yes. I think pretty much everyone thinks this way, with differing definitions of both "promote" and "oppose", and "wrong" and "right".
Though I may be misunderstanding what you are getting to here.
I think most religion is wrong, yet I'm not opposing that. I also think genocide is wrong, and yes, I'll oppose that. Same with anti-vax rhetoric.
While religion can be a real negative, it isn't generally the goal and often, there are good intentions. Genocide hurts people, though, through its nature. Anti-vaccine propaganda hurts folks as well. You simply cannot have these movements without hurting folks.
i-robot protect us all with truth! (except hunter laptop, and lab leak theory - hide those haha)
Actually, I am kind of okay with this new kindergarten-gloves way of treating the people: let's give them the sharing ability they deserve. California knows best what's good for everyone; just don't talk bad about beef. Well, you may have to censor that in some other parts of the world.
Different kindergartens for different countries? Different states?
Think how much better and safer this internet world is going to be without all these bad things!
I think another error is assuming that having all content within a few hyper-scale hyper-global ad-supported commercial repositories of everything is a natural or healthy state of affairs. Many small websites dedicated to particular things is IMO generally better both from a free speech and a moderation standpoint than these giants that have to thread an impossible needle. In other words, web 1.0 was better.
They were, just no longer the same under Tim Cook.
I don't want to derail the discussion into another political debate, but my thesis is that a certain ideology spread like a plague in Silicon Valley: Good vs Evil. As the OP said, Google started off being good, but somewhere along the line the definition of Good got twisted a little bit. They kept thinking they were so righteous that they literally started a crusade, or witch-hunt (so to speak).
And it is in some way interesting because it rhymes with many historical events.
“Power tends to corrupt, and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority, still more when you superadd the tendency or the certainty of corruption by authority. There is no worse heresy than that the office sanctifies the holder of it.”
> They were, just no longer the same under Tim Cook.
Steve "we have a moral responsibility to keep porn off the iPhone" Jobs liked making devices for everyone but definitely not for everything they might want to do on them.
What makes you think the government is that Messiah? Is it likely, in your view, that the government will go out of their way to "encourage the spread of misinformation"? I'm not seeing that happening.
In the EU, governments are actually trying very hard to discourage the spread of misinformation, as well as passing legislation that the tech sector has always claimed was not needed. So yes, it's very likely, as it's already happening.
I mean, I don’t really think this holds up. I have personally dropped all Google products I use in favor of privacy-focused alternatives. For example, GA>Fathom, Gmail>Protonmail, etc. So I’m sure privacy-focused competitors will pop up soon, or perhaps they already exist and I simply don’t know of them because I’m not a big user of the OP’s listed Goog products.
The market has most certainly spoken: we prefer privacy, and we no longer want to be a product. Competition will come to fill those needs. No need for more government regulation. The free market works.
This laughable arrogance. The market has most certainly spoken, but not in the way you and your microcosm think it has. The market doesn't give a fuck about privacy. People want convenience and want to pay as little as they can for it. Privacy might seem to be on people's tongues at this moment, and it will continue to be, but only a core few will actually make decisions that inconvenience them in order to achieve it. Only a few million people are leaving services like WhatsApp en masse; 99% of their customers do not give a fuck. They aren't doing drug deals (well, some are), they aren't dissidents, they aren't terrorists; they're just everyday people going about their business with likely zero real repercussions, other than finding it a bit creepy and continuing to scroll. There are probably more people leaving because things have gotten a bit boring and uncool than people taking a principled stand for privacy.
We are, at least currently, outliers. Do not forget that.
I’m referring to the segment of the market I’m in. I didn’t mean ‘the market’ as in 100% market share. I should have said “the market is speaking.” No need to come off as rude.
That, and Google has ~ 20 years of history where they were perfectly fine and they've really only started to act up recently in ways that are, at worst, objectionable and inconvenient without being particularly harmful.
The risks of letting market competition sort it out are much smaller than the risks of making it a political football. As a political topic it is basically going to be two parties fighting over which set of lies get to be true. That won't be better.
I'm really sorry for the off-topic, but I've been waiting for a chance to ask because I see it here very, very often. What are people intending to denote with the "private" in "private company", especially where the company is so obviously and well-known to be public[ly traded]? What are you trying to distinguish it from by calling it private? Are you just pointing out it's not the government or quasi-governmental? Are you emphasizing that it has its own prerogatives?
Sorry, this has just been becoming a peeve for me on this site. I just want to know what you're trying to express by calling a company that is not privately owned, "private". Thanks.
The term "private company" has no relation to whether or not it is traded publicly. It simply means the company is not owned or controlled (in some sense) by the government, but by "private" individuals.
I guessed. I still don't understand why it is necessary to differentiate a category with 1.7 million members from a category with 17 members. Nobody here is ever, ever, ever (almost) discussing government corporations. I don't understand why people don't just say the 1st amendment doesn't apply to corporations. Everyone would know what they mean. Adding "private" suggests they are distinguishing it from something else. And maybe they are, but it always sounds like they're either misspeaking, confused, or repeating a meme.
Aren't most universities public companies even in the USA? I know some of them are private, but there must be way more than 17 public ones. Add all the public schools, police departments, etc., and you get quite a lot of them.
The key is in your last paragraph: the companies in question are privately owned in the sense of "private property". In the context of (American) public policy, due to factors such as the First Amendment, the distinction of a company being governmental/public vs non-governmental/private is much more likely to be relevant than it being publicly/privately traded.
My point is 0.0000006% of companies are governmental. In the context of the discussion, nobody is even thinking of those. So, “private” as a qualifier is hot air. It sounds ignorant to me.
Ignorant of what? Publicly (as in government) owned base infrastructure companies (like telecoms, to which Google et al. are pretty similar in these ethics cases) are not rare in the world at all.
I feel the exact same way. I suspect "Private Company" is being used in relation to Public/Private Sector and not in relation to whether or not the company is publicly traded.
I feel like your sense of it might be right. But even then, you can count the number of public sector "companies" (like Fannie and Freddie) on one hand, so it just strikes me as such a bizarre distinction.
It's not a bizarre distinction. The discussion is about free speech, and a common talking point has to do with comparing the actions of companies to the protections laid out for free speech in the American first amendment.
However, the first amendment is a restriction on government, not a restriction on private individuals or corporations.
Private equity groups are private. My uncle’s construction company is private. Google isn’t private in any sense that isn’t confusing. Why not just say that the 1st amendment doesn’t apply to corporations? That seems clear to me.
And as an aside, how many of us really need to be reminded, several times per day, of the scope of the 1st amendment? Really? Isn’t it more likely it’s a tired debate stopper?
>"And as an aside, how many of us really need to be reminded, several times per day, of the scope of the 1st amendment? Really? Isn’t it more likely it’s a tired debate stopper? "
My feelings exactly. It doesn't accomplish anything and literally adds nothing to the debate. Does the Eighth Amendment's prohibition on excessive fines and cruel and unusual punishment only apply to the government too? Clearly, we can talk about the spirit of the Bill of Rights and apply that to things that aren't literally the government.
> It doesn't accomplish anything and literally adds nothing to the debate.
It's important because to many people (myself included) the spirit of the freedom of speech referred to in the 1st Amendment is that force (i.e. the exercise of government power) is not an appropriate or legitimate response to speech. Censorship in the broad sense involving hosting decisions by these organizations owned and run by private individuals is not a violation of the freedom of speech, because it does not involve the use of force.
> Does the Eighth Amendment's prohibition on excessive fines and cruel and unusual punishment only apply to the government too?
Essentially, yes. The general principle applies to any organization which would take it upon itself to impose fines or punishment… which basically means the government, because the government doesn't let anyone else do that in its territory. For any organization other than the government this rule would go without saying, but since the government claims to be able to impose fines and punishment according to its own rules, unfettered by the rules of proportional response and natural law, this limitation must be made explicit.
>And as an aside, how many of us really need to be reminded, several times per day, of the scope of the 1st amendment? Really?
Yes, that's why people are pre-emptively raising the status of the corporations in question as private - to head off the discussion you're tired of hearing.
Despite that, instead of discussing what the proper ambit of content review should be, 80% of the thread is still debating whether or not editorial control should exist in the first place; the exact same type of boring, rehashed discussion that adds nothing of value.
Now we're here having a meta discussion about the discussion that ALSO adds nothing of value, so it looks like no matter where we go there's no shortage of ink that leads nowhere :(.
The reason a distinction is drawn is because the government has more rules it must follow when interacting with the public than Google does. Non-governmental entities in the United States are less regulated than the government.
I would much rather live with censorship that can be removed by withholding dollars vs one that requires votes.
Just look at the puritanical rules concerning language and sexuality forced on broadcasters. Those rules are decades old and outdated and will likely remain forever.
When companies fuck up on censorship, the results seem to last only a few years. When governments fuck up, the consequences echo for multiple lifetimes.
Private my ass. I understand the terminology, but they are part of the de facto public domain, and are a publicly owned company. They built their company utilizing the resources and technological prowess of the United States, then piss in its face to cater to woke leftist silicon valley politics (which doesn't represent 90%+ of America).
If that MIT study about the end of the world turns out to be true, THIS type of woke bullshit will be the cause. And when society falls, I have a feeling who's gonna be gunned for first.
What do you suggest? The government takes over Google? Prevents them from controlling what they store on their servers?
If society fails (which is little more than a prepper wet dream) everyone's going to go gunning for whoever has food and fuel. No one will care about Google or political parties.
I'd suggest we protect freedom of speech similar to how we protect civil rights.
Before civil rights, "private businesses" used the same exact excuse as big tech while refusing customers based on race.
In the current setup, big tech thrives risk-free via governmental protections separating them from their customers (or what their customers post on the platform). The underlying concept of social media has become a public utility, and should be protected as such. Once big tech began altering visibility of posts based on politics (in any way/shape/form), they crossed the Rubicon.
I agree with everything except "I think your fundamental error is in the fact that you think" - I don't think this, and I don't think my post implied that I think that either.
To clarify my statement: "We should be able to implement services like these, that are free of ads, on globally distributed infrastructure, with no central authority, to have truly free-flowing information." - I don't think that the structure provided by a company is sufficient to do this properly. Rather than companies/governments, we need protocols and standards. For instance, what if we had a decentralized app (dapp) built on something like cardano, that allowed one to edit docs that lived on an IPFS? We might have to sacrifice some micro-conveniences (e.g. google docs saves automatically for you as often as every few keystrokes) to make it tenable on something like a blockchain, but it seems feasible.
Honestly a lot of folks should assess the value that big tech provides. We survived the 90’s running MSDOS backing stuff up to disks. And storage is cheap. Cloud this-and-that is great in theory but in practice it makes things too complicated. For instance, whenever I use a cloud-centric application I’m thinking okay… so where’s my files? And the answer is who knows!
More specifically, I used to use OneNote when it used the local file system and I could get to my data. Then MSFT put everything behind a cryptic ten-layer hashed URI…
The tech industry is like an insurance company now, selling people on fear of losing files or productivity, but I'd wager more often than not technology gets in the way of folks' productivity. By technology I mean fluffy clouds, because obviously tech CAN help.
We need a better balance and a lot of that starts by keeping it simple.
> What happened to having specialized agencies regulate and inspect industries?
A lot of censorship has traditionally been governments threatening to regulate industries if they don't self regulate, and that's pretty much what's been going on.
we could get a lot closer if markets were well-regulated for fairness and competitiveness. instead, we get all sorts of distortions, some well-meaning but most not. one sign of a good market dynamic is a lot of medium-sized companies, rather than very large or very small, because that means the companies have taken advantage of economies of scale but no one firm has outsized advantage, so all must still compete.
to that end, google, facebook, et al. should each probably be split up into dozens of companies to start, with strong privacy, portability and interoperability standards/mandates at minimum.
Heard? By whom? Monopoly-capitalism directly benefits rulers, complaining to the very people who have incentives to keep the system as it is will not change anything. You and your fellow citizens have to actually do something.
Government regulation is no substitute for competition. The ability for consumers to walk away to a suitable alternative maintains a continual accountability that a single agency is unlikely to deliver.
This actually highlights the point well, I think. The cost of walking away from Facebook is too high for most people even though they know it's bad for them.
Take myself for example. I only ever get angry when I browse my facebook feed and for some reason I've actually taken steps to ensure I get angry when I browse my own feed (chalk it up to silicon magic). I would like to leave facebook, but if I do I'd lose messenger which is the easiest and most consistent way for me to keep in contact with dozens of people (who mostly don't make me angry).
If facebook protocols were open source, then by now I would likely have dozens of different options to take my friend's list to a messenger only app that does not include a feed for me to angrily browse.
Lowering the cost to leave is a benefit to everyone since it reduces the reach of that anger inducing feed.
Isn't the Messenger app a "messenger only app that does not include a feed for me to angrily browse."? They also seem to have standalone web and desktop versions at messenger.com
We still need there to be suitable alternatives, so there needs to be some regulation at least, to encourage competition and/or to prevent companies from reaching a point where there are no longer alternatives.
There is a category of companies that doesn't try to be a content arbiter. Common carriers, utilities, other services that have an obligation to contract.
Of course the "contract" with SNPs is that you "pay" with your eyeballs falling on adverts, and the ads in turn don't want to be anything like common carriers. So turning any web service into something like a common carrier would have interesting knock-on effects.
I think your fundamental error is the fact that you think that a regulatory agency can fix these issues. It seems that many people on HN are just waiting for the new Savior government body, that will magically do something ethical and correct for all of the citizens, and not just some politicians like every other government body in history.
Did it occur to you that maybe the GP wasn’t suggesting a company solve the problem and was hinting at open source + open protocol?
1. The GP said nothing about any savior company. They said "We should be able to implement services like these, that are free of ads, on globally distributed infrastructure, with no central authority".
2. You seem to be saying here that you want a specialized agency to do censorship right, as one centralized decision maker. Am I misunderstanding? A Ministry of Truth?
I'd say that this decision by Google stems from regulators and public opinion bashing against misinformation.
Private companies follow the wind of the policies that are there to coerce them. In this case, the policy that is being put in place is that misinformation is dangerous and should be countered.
In 5 years, misinformation could be relegated to Tor networks, where it belongs.
> It's like hoping for market competition create health regulation in the food industry.
Why is it like that and not like gas stations not allowing smoking so their property doesn't go up in flames? It's really strange to think market forces must always be against the consumer; there can be mutual benefit if the right incentives are put in place.
There are absolutely lots of regulations around fire near gas stations, why did you think that gas station owners made those rules? They have to follow them or they will lose their permit and get shut down.
The fundamental issue is companies dealing with our data when they should just be making hardware or software.
In the old days, computer companies made the hardware and government institutions used that to run the internet. This is how it should be. Companies should not be touching our data directly. They simply cannot handle the responsibility.
I would much rather have even just a handful of private companies be the regulators rather than a single entity.
If our founding fathers had one guiding principle, it was to distribute power as much as possible as power has a strong tendency to concentrate and corrupt.
I often wonder whether the solution is setting the expectation with companies that if they act in bad faith, their leadership will be om nom nommed? Apologies for the metaphor, it's the best I could come up with on short notice.
As people remind us all the time, it’s not a violation of the right to freedom of expression if a private company stops letting you use their services. It is a violation of that right if the government does so.
competition definitely can make things better. But we aren't seeing enough competition, and these companies are engaging in anti-competitive behavior without much consequence.
executive agencies should inspect and enforce, but not regulate. That is the legislature's job, and they cannot delegate it beyond "implementation details". We have been too lenient with this, so we get the FCC, FDA, EPA, ATF, etc. flip-flopping on what amount to laws instead of details with every change of administration. That isn't how it's supposed to work. Congress decides laws, no one else.
Well by all means regulate, but realistically, for example, if we had a Google Drive run by the 'other side' of the political aisle (kind of like Gab v Twitter), then if Google banned certain content, it's unlikely the other would, and vice versa.
Unfortunately, the issue here is companies responding to something other than the market.
The issue is that we know from experience, after 20+ years of the modern Internet, that if you make a 'free speech' drive/repository place that's widely available, it will host the absolute worst of the human race. Then, let's say you personally were in charge of said Free Speech Drive: every day you'd get up and hear about people using it for (legal) jailbait photos, Islamic State recruiting, collaboration between extremist militia groups in various countries (including your own), actual illegal content, and so on. Pretty soon the FBI and CIA start contacting you about some of the actual or borderline illegal content being hosted on Free Speech Drive. Do you want to deal with that?
For one thing, it's easy to say 'well, we'd only take down illegal content'. But in practice there isn't such a bright line; there's lots of borderline stuff, and authorities could rule something posted on your site illegal after the fact. Lots of these situations are up to a prosecutor's judgement call. Would you risk jail to push the boundaries? Coordinating 1/6 wasn't necessarily illegal, until it was.
If Islamic State is recruiting on Free Speech Drive, posting manifestos, encouraging Western residents to actual jihad, you wouldn't take that down? You'd leave it up if it hewed to the line of being legal? Really? Jailbait or non-nude pics of someone's teenage daughter, hosted in the thousands, you wouldn't take that down? It's easy to be an absolutist in an Internet argument; it's much harder when you face the sort of everyday content moderation issues you see in the real world.
Another wrinkle in all of this is that you can use free speech as a form of censorship.
For example, if someone says something you don't like, you can intimidate them into shutting up by, say, threatening to reveal their personal information, such as their legal identity, address of residence, and so on. On certain corners of the Internet, merely dropping dox is good enough to get randos (who won't even be affiliated with you, so +1 to plausible deniability) to harass someone you want to shut up.
A more technical variant of this is DDoS attacks. Instead of trying to intimidate someone into shutting up with threats of stochastic terrorism, you shout over them by sending a bunch of traffic to their site until the server crashes or they run out of money.
So even if you're a hardcore free speech extremist, you still need to embrace some level of "censoring the censors" if you want the Internet to actually be usable.
Agreed. That's not even getting into just pure spam, which from people like Alex Stamos I've heard is 100-1000x the issue that culture war content moderation is. Once you've accepted that a platform can remove the kind of spam that killed MySpace, or doxing or a DDoS attack, as you say, you're already on the (common sense IMO) road to content moderation. Which again, from 25+ years of the modern Internet, we know is just mandatory to have a usable site.
That's not censorship though. Threats are what people are forced to do when they cannot censor you, as censorship is much more direct. And DDoS attacks aren't speech.
You've seemingly distinguished DDoS attacks from legitimate traffic. If someone is "flooding the zone" with disinformation with the purpose of making it impossible to discern the truth (i.e. it's not legitimate, good faith discourse), is it not reasonable to draw a parallel with DDoSing?
DDoS traffic doesn't contain any form of "speech" and cannot lead to any. If you insist on drawing dodgy analogies, the correct parallel would be someone taking a truck to a political rally and then playing incredibly loud white noise at volumes that prevent people hearing each other.
As for the idea that people are deliberately flooding the zone with disinformation, I'm afraid I've only seen that coming from the sort of people who use it as an excuse to engage in censorship. There is certainly a massive disinformation problem, but it's not the one they mean when they say that. Consider the experience of this guy, who just "woke up" to the fact that the BBC has been manipulating him:
I myself watched a deceptive BBC news report last year in which they presented a social worker as a "dental specialist", a man whose brother died of COVID except the report admitted the cause of death was unconfirmed, and a supposedly flooded ECMO unit that turned out to have spare beds available.
> threatening to reveal their personal information, such as their legal identity
What's the problem with that? Bad things on the internet happen more often than not because of the lack of responsibility.
Doxxing has become the primary sin in the Internet religion but it would solve all kind of problems. I am going to commit that sin and say that Doxxing is the solution, you can downvote me and make my comment greyed out and censor me when you argue against censorship.
Instead of deleting content, simply make sure that it's linked to someone who can pay for it if it turns out to be something to be paid for.
The Anonymity argument is only good when you are actively persecuted by a state actor. I don't agree that you deserve anonymity because the public will demonise you. If you hold strong beliefs that can be met harshly by the general public, you better be ready for the pushback and think of ways to make them accepted. That's how it has always been done.
Therefore, when content is questionable maybe the users should simply be KYC'ed and left alone until a legal takedown order is issued. If it's illegal (like illegal porn, copyrighted content, terroristic activities, etc.), go to prison for it. If it's BS, get your reputation tarnished.
Who the hell gets to judge what is 'to be paid for' in this world you're talking about? The mob? No thanks!
In fact the internet actually went to shit the minute it pivoted from 'Dont share your personal info publicly' to 'please give us every last drop of your personal information and share it publicly'
>Who the hell gets to judge what is 'to be paid for' in this world you're talking about
Those who demand the payment, obviously.
Denying the existence of God or being gay could be something to be paid for in some places, and obviously that is a horrible thing, but anonymity doesn't solve that.
Fighting for a change or leaving that place solves something. Alan Turing himself was subjected to these things in the United Kingdom. A few decades later things changed in the UK and had nothing to do with the anonymity.
Now those who think that gays deserve equal rights demand payment. Again, anonymity isn't helping the anti-gay folks but simply creates low-quality discussion and stress and nothing more.
Stopped people getting doxxed, hunted down and lynched in countries where you're executed for being a homosexual, while still being able to communicate safely online with other people in the same situation?
How exactly does anonymity not solve that problem?
> If you hold strong beliefs that can be met harshly by the general public, you better be ready for the pushback and think of ways to make them accepted. That's how it has always been done.
The problem is that we're not talking about the general public. Let's say I'm Jewish. Someone on the internet may "doxx" me by finding a group of neonazis and spreading my information there, resulting in me getting threats and hate.
The internet seems to specialize in this sort of "doxxing". Why? My theory is that the internet, even if you have a real name for a handle, still distances and dehumanizes others to the point where it's hard to understand the pain you're causing.
It's hard to walk up and slap someone because you feel the slap and see them wince in pain. It's easy to DM someone on twitter something far more hurtful than a slap, laugh about it with your tribe of neonazis, and forget about it the next day.
> The internet seems to specialize in this sort of "doxxing". Why? My theory is that the internet, even if you have a real name for a handle, still distances and dehumanizes others to the point where it's hard to understand the pain you're causing.
That's a good theory. My personal theory is there are a lot of psychopaths and sociopaths in the world, and the internet lets them find each other and form communities that revel in causing misery.
I think the solution to your problem is physically securing you against neo-nazis instead of hiding your identity. Unless of course you are writing this from the 1940s and you are in Central Europe. If that's the case, you have a case.
I am writing this on the internet. Physically securing myself does nothing to prevent hatemail, DoSs, and slander.
The ideal that "lies can't hurt you, the truth is stronger than lies" has never seemed to actually work. There are countless fictions far more prominent than facts, and there are countless people whose online experience has been damaged by a small contingent of dedicated attackers.
The response to "harboring free speech to the extreme results in neonazis digitally harassing Jews" should not be "okay, fine, lock your door at night, free speech is more important than you being harassed".
>If people believe that it’s wrong for you to be subjected to that, those who do this to you will pay for it.
Are you referring to vigilante justice or just trusting the system? (What Americans refer to as the "democratic process" and the "justice system")
Because if it's the former, it takes a lot for people to raise a hand against others outside self-defense.
If it's the latter...the system usually fails. And saying "yeah well eventually, once there's enough political pressure, it won't fail" isn't any consolation to those who are now having to spend thousands of dollars on a therapist to recover from the trauma they've suffered (because American health insurance typically has crap mental health coverage).
>People are much nicer when their reputation is at stake.
I think a more accurate statement would be "People are much nicer when their money is at stake." And I don't mean "nicer" as in "genuinely better," it's more "I'll paint on a smile and not say anything bad" (just ask a waiter and they'll have plenty of tales where they had to do this for a tip).
Today, there are plenty of online hangouts for people with all sorts of ideas. This is great since people can easily form communities around a TV show, hobby, etc, but it also enables flat earthers and anti-vaxxers, whose views are often rooted in bigotry (see: All Gas No Brakes video on the flat earth convention [0]). Those communities tend to encourage an "us vs them" mentality and to cut out those who seek to "hold them back" (basically modern cults - alienate yourself from your friends and loved ones, we will provide all the community you need).
In the past, joining groups like the Klan was a much more difficult endeavor (they tended to operate much more in the shadows), and the groups tended to be on the smaller side. Today, it's just a matter of joining a Facebook group about "race realism", "the truth about George Floyd", or whatever, and bam, you have access to thousands of like-minded individuals to build your own personal echo chamber. The traditional tactics of people avoiding those they dislike IRL don't work so well as a form of collective shaming when you've got someone who is terminally online and has tons of people to tell them just how right they are and linking various garbage to reinforce the worldview.
See, I deeply dislike having my speech moderated. Here on HN I am not allowed to advocate certain opinions like "the US was wrong to even attempt to ban TikTok"; I already got in trouble two times for it and I hope this won't be the 3rd one (I am supposed to pick my words carefully so as not to inflict strong feelings, or my account gets rate limited or my comments hidden). Also, whenever I express an unpopular opinion or controversial proposition (like the one in this thread) I get my writings grayed out or collapsed instead of rebutted. That is censorship by a community. When it comes to online communities, they are often heavily moderated to push certain agendas, which creates bubbles.
Free speech is nonexistent these days. The places that had it were invaded by trolls and bad actors, and get shut down one by one after each incident that causes outrage.
My proposition attempts to solve these issues. Don't censor, never delete or ban anything or anyone unless legally required to do so(copyrighted content or illegal porn), hold responsible instead.
I recognise that there's value in anonymity; what I propose is to limit it to the occasions when there's value.
Oh, BTW, when I say non-anonymous I don't necessarily mean a connection to the government-issued legal identity. Using a pseudonym that is the same everywhere but not connected to a government-recognised identity should be good enough most of the time. Throwaway accounts are fine when relevant. One person pushing an agenda through multiple accounts is not fine. There can be mechanisms to allow anonymous posting attached to a real identity, where doxxing is an option when the person is determined to be a bad actor.
> That if you make a 'free speech' drive/repository place that's widely available, it will host the absolute worst of the human race.
That's only due to selection effects. If being open were the default then they'd be diluted among all the other people. ISPs themselves, (older) reddit, 4chan all serve as examples that the people you don't want to talk to can be mostly siloed off to some corner and you can have your own corner where you can have fun. Things only get problematic once you add amplification mechanisms like twitter and facebook feeds or reddit's frontpage.
> For one thing, it's easy to say 'well we'd only take down illegal content'. But in practice there isn't such a bright line, there's lots of borderline stuff, authorities could rule something posted on your site illegal after the fact- lots of these situations are up to a prosecutor's judgement call. Would you risk jail to push the boundaries?
I don't see how that's an issue? "They send a court order, you take down the content" is a perfectly reasonable default procedure. For some categories of content there already exist specific laws which require takedown on notification without a court order; which exactly depends on jurisdiction of course, but in most places that would at least cover copyright takedowns and child porn.
> Pretty soon the FBI & CIA start contacting you about some of the actual or borderline illegal content being hosted on Free Speech Drive. Do you want to deal with that?
That's pretty much what telcos have to deal with for example. Supposedly 4chan also gets requests from the FBI every now and then. It may be a nuisance, but not some insurmountable obstacle. For big players this shouldn't be an issue and smaller ones will fly under the radar most of the time anyway.
Also, having stricter policies doesn't make those problems go away. People will still post illegal content, but now in addition to dealing with the FBI you also need to deal with moderation policies, psychiatrists for your traumatized moderators (which you're making see that content) and endusers complaining about your policy covering X but not Y or your policy being inconsistently enforced or whatever.
>ISPs themselves, (older) reddit, 4chan all serve as examples that the people you don't want to talk to can be mostly siloed off to some corner and you can have your own corner where you can have fun. Things only get problematic once you add amplification mechanisms like twitter and facebook feeds or reddit's frontpage.
This isn't true at all, and the reddit report following their ban wave is pretty clear about it; once areas that actively established a standard of violent or racist discourse as acceptable were banned, the volume of objectionable material across the site dropped.
4ch had a similar situation, where the culture on /b/, which was intentionally left as an explicitly unmoderated segment of the site, a silo, actively invaded other boards with violent, racist content.
It isn't that people sit in silos and do nothing otherwise - it's that the silos themselves cause people to believe their content is acceptable, then spread that shit everywhere.
I wrote "mostly siloed" not "perfectly siloed". This is no different from real life where your social sphere is not perfectly insulated from other social spheres. Perfectly siloed also means filter bubbles.
I think this is a really good point, and I think that anyone really committed to promoting a free-speech-maximalist approach to the web should be focused on building tools that make it easier for people to host and distribute their own content without relying on a centralized service.
Any business with the technical ability to censor what they host is going to be tempted (and likely pressured by other actors) to take down content that people find objectionable. Removing these "chokepoints" where a small number of people have the ability to engage in mass censorship is key if you want to promote more diverse speech on the web. (Not everyone has this goal!)
a lot of people who say they want an absolute free speech drive/free speech host have never actually worked for a colocation/dedicated server/hosting ISP and seen how the sausage is made.
It's a real problem. It's easier to suppress such content, but the problem is that it just goes elsewhere, where it's almost completely unchecked; it proliferates in much darker circles as a result, and we have even less visibility into its true volume.
Maybe there should be more of an effort to reduce peoples' incentive to engage in that sort of behavior in the first place. Why do people join violent extremist groups? Why do people engage with CP? Why do terrorist groups exist? Is it just human nature? Is it a fact that with 7+ billion people we are destined to have millions of people engage in this behavior?
De-platforming horrible material is better than nothing, but it feels like whack-a-mole
No I wouldn't. Google is not necessarily wrong here. The issue is that you cannot easily 'own' a part of the internet, despite much of our life playing out there.
In the real world, if no one wants to host you and your group, the standard answer is to acquire money and buy your own land, your own broadcasting, etc. On the internet, this is much harder for 'normal' people to do, requiring them to use services like Google.
Look, once parler was taken down, I purchased several servers off ebay, rented colocation space, and set up my own services. But I have the technical know-how to actually 'own' part of the internet without depending on anyone else. Most people can't really do this. Thus they depend on Google, et al, who are not selling them something akin to a land title, which is what people feel they ought to have, but rather a service.
The reason people got mad at AWS for taking down Parler is that, to the common man's mind, when Parler pays for its "website" (because let's be honest, that's as deep as most people go), it 'owns' it, and it ought to be able to hold title to that thing in perpetuity, like land. People felt upset because they felt that Amazon simply seized what they perceived to be the equivalent of land or personal property.
Of course websites are different because they require active serving by a computer, and parler was paying AWS to do that and Amazon decided not to. But that's not how people view it.
To top it off, people are scared because they don't know how to own anything on the internet. Even tech savvy people have no idea how to purchase or lease IP space, set up servers, routes, etc. It's all very confusing.
> once parler was taken down, I purchased several servers off ebay, rented colocation space, and set up my own services. But I have the technical know-how to actually 'own' part of the internet without depending on anyone else.
You said you rented colocation space. You (I assume) are paying a monthly fee for an Internet connection for your servers. You are absolutely depending on others who can be pressured just like AWS was with Parler. Don't kid yourself.
Sure, but the difference is that... if that happens, I own the computers. They can take down my internet connection (although I have multiple colocation centers owned by different people... I guess I could go international if I really want to add extra redundancy), but the computer is mine. The data on the drive is mine. They cannot touch this stuff. If they did, I can accuse them of larceny, and sue for damages.
It's written in the contract that they can take me offline, but they cannot touch my stuff. They can take it off the shelf for non-payment, but there's a period in which they have to retain it and offer it for pick-up.
This is wholly different than Amazon not only taking parler down, but also deleting the data, forcing them to download terabytes in three days, over a weekend.
And you're still right, though: I don't actually own any IP space. In fact, IP space 'ownership' is handled by an NGO with little regulation. That's terrible. It ought to be governmental, because owning parts of the internet is an extremely important part of society. Too important to be left in any non-governmental organization's hands.
There are blockchain like systems that could solve this problem in a distributed fashion. There's also urbit. Or we could have proper governmental authority.
> In the real world, if no one wants to host you and your group, the standard answer is to acquire money and buy your own land, your own broadcasting, etc. On the internet, this is much harder for 'normal' people to do, requiring them to use services like Google.
If anything, it's easier since you can own a small plot of land and host a few servers in a building you throw together, instead of having to feed a bunch of people, buy large tracts, build a temple, etc.
Get a metro-e connection, and it doesn't matter how repulsive your legal content is - this isn't some residential connection bringing in <$100/month, they won't drop you until you become a liability (which is basically when the FBI comes knocking) because they make so much money on dedicated lines. (Plus, there's early termination clauses, and the last thing Comcast is going to do is pay that to appease a Twitter mob - what are you gonna do, switch ISPs? This is America, if you're using Comcast, it's probably because the next best thing is DSL.)
But it's not hard. Look at the Mormon exodus. Look at how minority religions have pooled money to buy large tracts of land and large temples. It's fairly easy for a group of people to do this.
>The issue is that we know from experience, after 20+ years of the modern Internet, that if you make a 'free speech' drive/repository place that's widely available, it will host the absolute worst of the human race.
And that's just fine. People have the right to be assholes.
And it's your right to think that. But it's my right to think that containing the degree that people can be assholes makes for a better internet and a better world.
In the physical world, if people act badly enough, there will tend to be physical consequences. If someone goes up to a grandmother or little girl or whoever, and starts berating them with vile or threatening speech, they do so at the risk of finding themselves bloodied or worse. Everyone knows this can happen, and unless they are mentally ill, people tend to learn at an early age to curtail such behavior before something bad happens to them.
All this has had thousands of years to evolve to the current state, where in most places (at least in the first world), interacting with people in public places is generally pleasant and non-confrontational.
But online, it isn't that way. When there are zero repercussions, things become very unpleasant for all but... well, those assholes.
I get that you prefer such a world, but I don't. How do we work that out? Do we all adopt your way, simply because.... I dunno, I guess because that's what you want?
Oddly enough, there is not a strong business case in allowing the dregs of society to post their garbage all over your servers.
If Twitter adopted 4chan policies, it would be destroyed financially in short order.
Twitter is pretty loose compared to many of the big social sites. They allow nudity and porn, for example, but of course have rules such as marking it as sensitive. Try that on FB.
It is almost like they adopt policies that help their business. Just like brick-and-mortar businesses have policies on customer conduct and will ban troublemakers who impact their bottom line.
People have a right to be assholes but everyone else has the right to shun them from polite society.
There is no right to be heard, just to speak and not have the government stomp on you for it. No one else is required to listen or host it.
If I were hosting said Free Speech Drive, I would be oblivious to the contents of what my users host on their drives. I would not violate their privacy by spying on them. Their files, not mine.
If your site is widely used and you make it technically impossible for you to see or moderate content in any way whatsoever, your site will become a host for real illegal content, not just the borderline examples I gave. As the comment above me notes, even 4chan removes CP. You place yourself in serious legal jeopardy with this decision.
There is little legal distinction between a 'publisher' and a 'host', which I understand is part of the pseudolegal gibberish that's part of the 'content moderation is censorship' belief system.
If your physical storage unit is consistently used for illegal drug dealing despite several arrests there and several warnings from law enforcement, then yes, you'd probably be liable for that. If you had illegal content at significant scale on Free Speech Drive, then yes, you absolutely would face liability. I guess if you think you have a clever legal argument otherwise, you're free to spend $100-500k on a defense attorney to make that argument after your public arrest, while your name comes up online as 'Illegal Content Provider' for all time.
A private business is a private business and they can set their rules as they please as long as it doesn't run afoul of laws (ADA in the states for example).
That you think there is some important distinction where Google loses rights that Ma and Pa's store has shows that you have nothing reasonable to offer.
If you want to know what a "free speech zone" is on the internet go spend time on 4chan. I can't imagine anyone spending long periods of time there without harming their mental health... and that's even with some moderation.
I'm convinced the only way to effectively create free speech on the internet is to tie whatever you say online with personal identification. Not because it will prevent people from saying bad things, but because you could use it to ban people from the internet. (for the record, I think that's a horrible idea... but so is free speech on privately owned servers)
> Jailbait or non-nude pics of someone's teenage daughter, hosted in the thousands
Even worse, imagine there was a technology that allowed you to print those jailbait photos by the thousands and drop them in Times Square. That would be far too dangerous.
Imagine if that technology was used to spread misinformation about the Catholic church.
Ban the printing press! It's too dangerous. It should be controlled by the church and the state!
Even better, get rid of religion and intertwine religious righteousness with politics. Then the state can control it all!
> The issue is that we know from experience, after 20+ years of the modern Internet, that if you make a 'free speech' drive/repository place that's widely available, it will host the absolute worst of the human race. Then, let's say you personally were in charge of said Free Speech Drive- every day you'd get up and hear about people using it for (legal) jailbait photos, Islamic State recruiting, collaboration between extremist militia groups in various countries (including your own), actual illegal content, and so on. Pretty soon the FBI & CIA start contacting you about some of the actual or borderline illegal content being hosted on Free Speech Drive. Do you want to deal with that?
Imagine if there was some kind of website or network that existed for years with barely any rules or enforcement, like image boards that only remove CP, or a decentralized anonymizing network with even decentralized payment systems.
That would be the end of the world.
Right, and I left this out of my already-lengthy comment just so it wasn't a total wall of text. If you can easily host your 'censored' ideas on some other corner of the Internet- what exactly is the problem? Why are you entitled to Google's private property, specifically? You've been asked to leave one establishment, and are free to simply go elsewhere.
We've entered a Golden Age for radical/controversial content: totally unthinkable freedom to say or read anything, which would've been technically impossible even in the 80s. It's actually the opposite of censorship; never have people been so free to express any view, thanks to the Internet. I'm not really clear on why there's this level of hysteria over Google Drive's policies specifically. 4chan, or another site just like it, will always be there.
> Google's increasing role as an arbiter of right vs wrong
The problem is that Google is (I suspect) not really doing this out of some misguided attempt at "protecting" people, but rather as a reaction to what they perceive to be what the people want. When America was a very religious (Christian) country, media distributors stayed away from anything that appeared "blasphemous". They didn't necessarily do it because there was a law against it (there were some odd laws here and there, but the media didn't start actually challenging them until religion really fell out of favor), but because they were afraid of consumer reactions. Google (and every other tech company) is doing essentially the same thing here: speaking ill of certain topics is modern-day heresy, and they just don't want to be attached to it because they ultimately fear the consumer.
Even if you found a globally adoptable alternative to google, the same people who pushed Google to ban distribution of "misleading content" would start looking for ways to ban your globally adoptable alternative - at the network level if necessary (look what happened to Parler before they agreed to follow the unwritten rules). At the end of the day, we won't have truly free speech because far too few of us really want truly free speech.
>speaking ill on certain topics is modern-day heresy
The AUP would be more transparent if it simply banned "modern-day heresy". Folks would then be tagged as tech-heretics, and many would wear that badge with honor.
Calling questionable, unproven, unpopular, or ambiguous information "misleading" is doublespeak. Worse, having my cloud drive spontaneously dump or block my data because some algorithm or faceless reviewer disagrees with the content is totally unacceptable as a consumer proposition. Is my Android phone next, simply because I'm posting an HN comment Google might disagree with? Seriously, it's completely unworkable from a consumer position for Google to arrogate that power to themselves.
That's not how heresy works. People don't point fingers at you and hiss "heretic!", instead they judge you and think you're a terrible human being who does terrible things so they shouldn't help or associate with you. You can't "simply ban" heresy.
If you're genuinely interested in convincing people across the aisle to stop trying to ban stuff like this, simply yelling that it's "totally unacceptable as a consumer proposition" and "completely unworkable" and "doublespeak" is barely an argument. Evidently, many consumers are accepting it and will continue to accept it.
> That's not how heresy works. People don't point fingers at you and hiss "heretic!"
I mean...yes that's exactly what they did. Excommunication, run out of town, branded, marked, labeled in public, put in stocks, jailed, killed, yelled at, or just straight ostracized. These are all tactics that have been used in the past to label and punish heretical beliefs. They could absolutely still be used and, if you look at "cancel culture" in the right/wrong light, that's exactly what's still happening.
My point is not that people don't ostracize heretics, it's that "heretics" aren't a real category that people identify explicitly. Heresy is not a thing, people don't think "you were a heretic" as the reason they're ostracizing you (look, maybe 12th century peasants did, that's beside my point, we're talking about modern politics). People who want to ban these things don't think of this as "banning heresy", they think of it as "banning a bunch of terrible things to benefit society". If you're just going to dismiss these people's perspective as banning "heresy", you're just talking past them. You're not earnestly engaging them in argument.
In a sense that may be true, but how is that different from a bar owner asking people to leave if they are causing a disturbance by being confrontational with other patrons? The bar owner doesn't want a fight that can damage property. He doesn't want people to avoid the bar because someone is picking fights, when most of his clientele just want to kick back and socialize with their friends.
Maybe the bar owner isn't trying to "protect" the rest of his customers, he's just trying to maintain a profitable business. Protecting his pockets.
You could apply the same logic to dang, who helps keep HN a pleasant place by doing the same things the bar owner is doing. Yes I imagine he is paid a salary, by the management of YCombinator who see HN as one part of their strategy to make a profit, and therefore his motives are equally cynical.
Ok. Honestly, you can probably reduce all human behavior to such simplistic motives if you want. What I see as someone being kind, you might see as a purely Darwinian strategy to get their genes in future generations.
I'm not all that sure that is a helpful perspective, at least not most of the time.
> America was a very religious (Christian) country
America is still a very religious country. It's not as bad as it used to be but it's still pretty bad.
Don't forget that these companies are global. Your example is still happening with pictures of Muhammad. Many companies refuse to host or show them for fear of offending Islamic extremists.
Oh, thank you. That's an interesting take I hadn't thought about.
For those of us not from the US: does this "reaction to what they perceive to be what the people want"
really represent the majority, as in your example of when America was very religious?
Because it seems to me (and I know zip about the US) that this action only pleases one half and angers the other half?
Given the amount of information Google has about its users I feel certain that they know exactly what percentage will be angered by this and what percentage will either applaud it or just not care.
It's not only right wingers but left wing organizations like the Atlantic Council. They assist multiple companies in determining which content is deemed permissible.
I think there's a vacuum here in that society wants someone to intervene when Bad Things Happen, but we either can't agree who that should be or (more likely IMO) the right choice of person/organization just doesn't exist. So you end up with some people/organizations/governments stepping up to increase their power and/or protect their own interests.
I think this is why Zuckerberg and some other big players have called for laws to regulate these things, which seems counterintuitive, but then FB can pass the buck and is more likely to maintain the status quo where they're on top. But until they are more insulated from the risks, they're going to be forced to defend themselves.
This is also probably why Google is advocating for the privacy sandbox and banning third-party-cookies, and staying ahead of the law tech-wise. Such that when the inevitable regulation of the playing field does come, they are sharing drinks and chuckling with the referees, while the other players are still struggling to figure out what their game plan is.
It's also easier for FB/etc. to push for laws to be written when they can pour millions into a PAC to get their ideal language into those bills, if not straight-up write sections themselves. They can lobby for fines that are lower than the profit from acting in bad faith or anti-competitively (who even knows how much money FB saved by buying out Instagram etc.), they can run their own disinformation or targeted campaigns to sway public opinion, or simply bury anything on their platform to hide it from users. There's a massive power imbalance there between a regular voter and Zuckerberg et al., even an imbalance between regular voters who can or cannot vote early or by mail.
I support regulating these groups, but that must be done within the rights assigned via the constitution, with existing precedent where available, and with in-depth knowledge of how these companies operate and how the tech influences consumers. It's complicated.
Google docs etc aren't even Google's inventions, Google just bought them. I think it's important to emphasize that, to dispel the notion that you need to be a big company to make a product like that.
I agree that you do not have to be a big company to make a product like that, but it seems like you have to be a big company in order to host and deliver it.
> I agree that you do not have to be a big company to make a product like that, but it seems like you have to be a big company in order to host and deliver it.
The tragedy isn't in what you've said, but rather in what you haven't said. The implication is that a product of GDocs/GSheets quality should be free, as part of a large company's moat, rather than a paid product that people would pay for.
The tragic reality is that these large companies have turned what would otherwise be successful standalone businesses into free additional features.
I've used that word because Steve Jobs famously described Dropbox as a feature. Google has effectively made MS Office a feature. Apple effectively made operating systems a feature, by giving away macOS and iOS for free with their hardware sales.
Increasingly, everything becomes a feature, in search of what? For big tech, it's to sell users attention.
Meanwhile, on the other side, big media is charging us to give them our attention...
Nah. Hosting is cheaper than it's ever been at any time in history, the costs only become a concern if you have lots of users in which case you should be generating lots of revenue to pay for the increased hosting costs.
Have you ever been on-call for any project larger than a toy? If you have, you likely noticed that keeping the whole thing up sometimes takes effort, more effort than meets the eye.
Hosting as in having some code deployed to some machines is indeed cheap. Keeping a large app like g.docs up and running, especially without breaking the bank, is a bit more tricky.
> Have you ever been on-call for any project larger than a toy? If you have, you likely noticed that keeping the whole thing up sometimes takes effort, more effort than meets the eye.
Of course; that applies to literally every piece of production software ever. But keeping a webapp running really isn't that hard; it's honestly the bare minimum of competent software development. If you have a team of SREs up at 3am triaging the site every night, you're doing something wrong. Of course, when you get to Google scale you will encounter unique problems, but if you're at Google scale your business has more than enough revenue to pay for the costs.
My experience as well. Web apps - if made slightly streamlined and lightweight - with thousands of visitors a month are easy peasy on cheap webhosting.
Google is another scale, of course. That's like comparing elephants with mosquitos.
Of course there's more to hosting. But that's what the hosting company does! The webhosting landscape - at least here in Europe - is perfect: world-class technology, local service. I can be on the phone with these companies if there's a problem.
Hosting stuff that hosts content some people find "problematic" has its own additional layers of difficulty. Amazon is completely willing to dump you if they disagree with you.
No you have to be a big company to resist the urge of getting bought out.
Or you have to either have aspirations of becoming a big platform company or a plan to survive and be happy watching big companies push you to fifth place in a category you once dominated.
The question isn't whether you need to be a big company or not, it's where you're gonna get the money.
What you'd need to host/deliver something like Docs/Sheets is: a product team (2 QE, 8 SWEng, 3 SRE, 1 product owner), the product, some cloud infrastructure, and the capital to pay for it. You could go larger than that to build it, but that is plenty of people to run/maintain/support it. Assuming "large scale" is between 1M and 100M users, figure between $750K and $3M for infra. Combine that with median salaries for employees, and you're lookin' at between $1.75M and $4M (before taxes/business fees).
If you use the cheapest infrastructure and labor, you could do it for $500K. (it is mind-blowing how cheap offshore labor is. Google engineers get paid almost 8 times what some of our contractors get paid, for about the same work)
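As a sanity check on those figures, the estimate above can be sketched as a tiny back-of-envelope model. The salary and infra numbers below are illustrative assumptions (not data from the comment), just to show how the low and high ends of such a range get computed:

```python
# Back-of-envelope yearly cost model for running a Docs-like product.
# Salary and infra figures are illustrative assumptions only.

def annual_cost(headcount: int, median_salary: int, infra_cost: int) -> int:
    """Total yearly run cost: payroll plus infrastructure."""
    return headcount * median_salary + infra_cost

# Team from the comment: 2 QE + 8 SWEng + 3 SRE + 1 product owner.
team = 2 + 8 + 3 + 1  # 14 people

# Hypothetical offshore vs. US-median salaries; infra range from the comment.
low = annual_cost(team, 30_000, 750_000)      # cheap labor, ~1M users
high = annual_cost(team, 120_000, 3_000_000)  # US median, ~100M users

print(f"low estimate:  ${low:,}")   # $1,170,000
print(f"high estimate: ${high:,}")  # $4,680,000
```

Swap in your own salary assumptions and the range moves accordingly; the point is just that a mid-teens headcount plus cloud infra lands in the low single-digit millions per year.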
VCs throw that much cash around for a weekend trip to French vineyards. If you can actually get paying customers, even better.
> Google docs etc aren't even Google's inventions, Google just bought them.
That's the same thing as saying that macOS Monterey isn't Apple's invention, they just copied Xerox.
There are years of development between what Google bought and what the Docs suite is now, and any engineer who has developed a product for years shouldn't say silly things like that.
> We should be able to implement services like these, that are free of ads, on globally distributed infrastructure, with no central authority, to have truly free-flowing information.
How naive. The reality is terrible, probably much more so than you can imagine. Let's assume someone tries to find teenage victims for cybersex trafficking at massive scale on your proposed infrastructure with "free-flowing information". How will you stop them? Is this just a hypothetical scenario? No. It actually happened on Telegram, which refused an official government order, made at a victim's request, to shut down the chat room.
Please don't underestimate how far average people's malice can go. Both private and governmental intervention exist not just because of "good intentions"; they come from real demands from society.
Google could be sued if they helped distribute dangerous misleading health-information.
It's really not about their "good intentions" but about their self-interest.
It's not about freedom of speech either. Everybody has the right to say whatever they want, and Google has the right not to distribute it. Free markets 101.
The fact that Google has de facto monopoly in some areas is a problem that government needs to deal with.
Google (and all other providers) can not be sued for their users distributing "dangerous" or whatever content. This is literally the purpose of section 230. The only exception is the sex trafficking stuff covered under SESTA/FOSTA.
Whatever pressure they're getting to do this comes from somewhere besides the threat of private civil lawsuits.
They can only be sued now because they are trying to filter out content. Therefore anything remaining is intentional.
I think it would be better to not take proactive filtering based on what they think is misleading and to instead rely on legal judgements to remove info.
Misleading people isn't illegal (in the US at least, currently). There are specific situations where it is, and that would be a good line for Google to take.
Now I want them to be sued enough times to make them stop doing this.
Why do we not allow telephone companies to censor our conversations? Why can't the power company choose not to provide power to offices for political campaigns they disagree with?
We afford monopolies certain privileges, but we also require monopolies to have certain restraints.
Many of these large tech companies have become natural monopolies. I think it's reasonable to expect similar restraints.
I agree with you on restraints on natural monopolies. Those should come in the form of limiting their anti-competitive behavior, and probably also their harvesting of user data without consent for profit.
But regarding limiting these companies from being able to decide which speech they don't want to host: I don't think you've thought this out fully.
What speech should they be forced to host? Art? Disinformation? Propaganda? Porn? Spam? Terrorism? For illegal speech: Whose laws should they be forced to obey?
With respect, no speech should be illegal, regardless of how absurd, abhorrent, or inaccurate the speech is to anyone. Free speech is fundamental to a truly free society, full stop.
You disagree with your own statement and you don't even know it!
"No speech should be illegal" - should I be able to threaten people then? That's speech. Should I be able to detail my plans for how I'm going to commit a crime? That's speech.
Should I be able to scream at the top of my lungs in a public space?
Should I be able to use a loudspeaker to broadcast my voice (or an advertisement) to drown out all other sound in a public space?
It sounds nice what you're saying, but it's not what you actually believe, so I kindly ask you to argue less in bad faith.
You're right. We should make exceptions to free speech only for recklessly, knowingly, or intentionally:
-making unreasonable noise and continuing to do so after being asked to stop
-disrupting a lawful assembly of persons
-speech made for the principal purpose of creating panic
-if the speaker intends to incite a violation of the law that is both imminent and likely.
My solution to the original problem would be to make it technologically unfeasible to solve. The architecture of the communication platform should be constrained in such a way that operators & owners are unable to make choices about which speech to include. This is because nobody is equipped to solve this problem, and nobody should be forced to do anything to try and solve it.
Disinformation, and propaganda are just specific types of protected free speech. I disagree with socialism, but communist propaganda is protected speech which should be allowed. Disinformation can be dangerous too, I understand that. But i'm not willing to allow Google or my government to make a decision on what is disinformation.
As for everything else. You should clearly follow the laws of the country you are doing business in.
> Please tell me why a private company should be forced to host content that they fundamentally disagree with.
Prior to the civil rights act, companies didn’t serve certain people based on their race because they disagreed with that race.
It’s not that Google should be forced to carry stuff, they should be forced to not discriminate because they don’t like it. ISIS propaganda is illegal and should be taken down for that reason.
> It’s not that Google should be forced to carry stuff, they should be forced to not discriminate because they don’t like it. ISIS propaganda is illegal and should be taken down for that reason.
Bright-line rules like that worry me. That is, a lot of stuff is subjective -- the line between "legal" and "illegal" isn't as clear or immutable as one might naively guess. So if something more binary -- like host-or-remove -- is tied to such a fuzzy, dynamic determinant, it'd seem to give rise to all sorts of problems.
For example, say we forced big companies to host all legal content, but remove all illegal content, and then we want to know if something controversial is legal (e.g., taxes on Bitcoin back when it was newer). Then someone could post two images: one telling people to pay taxes on Bitcoin, and another telling people to not pay taxes on Bitcoin. Then the hosting-company would have to remove exactly one of those. By contrast, a hosting-company could normally just remove stuff they're unsure about because they're not required to host legal content, sparing them the burden of having to properly determine the legality of everything.
Basically, the problem is that we'd be stripping hosting-companies of their freedom to operate in safe-waters, forcing them into murky areas and then opening them up to punishment whenever they fail to correctly navigate those murky waters.
If hosting-companies become responsible for determining what's legal/illegal, then they'll have reasonable cause to become an authority on the topic.
It'd probably make them more influential and powerful rather than less, because their judgements would carry the implication of legal-determination, and in popular perception, be law.
They'd essentially be elevated to the status of being lower-courts.
I wasn’t thinking they would be responsible for determining legal or illegal. That would be determined by courts and the legal system. For example, libel/slander would require a judgement, not the provider saying they think it’s libel/slander.
It sounds like you're proposing that they have to remove illegal-content, but host legal-content, right?
For example: say someone's advertising a new drug of questionable legality (say, Δ-8-THC). Presumably a hosting-company, unsure of legality, would just take that down for violating their policies -- without necessarily asserting that it's illegal.
But if you're proposing that they can't do that, then presumably they'd be forced into making a clear determination on its legality (as they must host it if legal, and must remove it if not). Right?
---
Actually, just to mix crypto in:
Say someone posts a time-locked encrypted-file (a file encrypted such that it'll open after a few hours/days/weeks/months/whatever), and there's reasonable suspicion that it may contain illegal content -- but a hosting-company isn't sure yet, because it was just posted and no one's completed unlocking it yet. Should they be forced to host it?
Now say that an entire community springs up around this: many of the files end up being perfectly legal, while others turn out to be very illegal. How should a hosting-company react?
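For concreteness, the kind of time-locked file described above can be built from a sequential-squaring puzzle (the Rivest-Shamir-Wagner construction). Below is a minimal sketch, with toy parameters far too small for real use:

```python
# Toy time-lock puzzle (Rivest-Shamir-Wagner style), illustrative only.
# The creator, who knows the factorization of n, can compute a^(2^t) mod n
# quickly via phi(n); everyone else must do t sequential squarings.
p, q = 10007, 10009              # tiny primes for the demo (insecure)
n, phi = p * q, (p - 1) * (q - 1)
t = 10_000                       # number of sequential squarings required
a = 2
secret = 123456                  # the "locked" value (must be < n)

# Fast path (creator): reduce the exponent 2^t modulo phi(n).
e = pow(2, t, phi)
lock = (secret + pow(a, e, n)) % n

# Slow path (everyone else): t squarings, one after another, no shortcut.
x = a % n
for _ in range(t):
    x = (x * x) % n
recovered = (lock - x) % n
print(recovered == secret)  # True
```

The point of the construction is that the t squarings are inherently sequential, so the unlocking delay is roughly predictable no matter how much hardware the solver throws at it -- which is exactly why a host can't know in advance what such a file contains.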
Calls to violence, images of chopped off heads, etc.
If it’s literally just assholes saying “ISIS is great” then that shouldn’t be taken down. Just like if two ISIS-lovers are IMing each other messages about how much they love ISIS and nothing else illegal, it should be allowed. I think.
In the US, images of chopped off heads aren't illegal. Calls to violence are illegal, but only in very particular circumstances that it's unlikely ISIS propaganda videos would meet.
Exactly this. A lot of the conservative angst about having their speech moderated on private platforms would vanish if they realized the baggage that came along with what they were asking for. The first amendment is extremely permissive, only very narrow limits are allowed and 99% of pro-ISIS speech is perfectly legal.
Not at all. But I would like to see legislation (or some strong rule) that forces huge corporations or companies over a certain market share to host all legal content.
Similar to how television broadcasters have regulations that force them to provide equal time to all major candidates.
I think there’s some reasonable threshold that doesn’t require small providers to host everything.
Equal time for TV hasn’t been a thing for decades. And it was justified because TVs used the public airwaves. I don’t know how you could write a law that would pass constitutional muster. And it seems very unlikely that you could get a constitutional amendment through on this topic.
Yes. Any company with a user base or influence above a certain threshold should not get to make moderation decisions unilaterally, without the input of society at large. That input is called "the law".
The argument would be the magnitude of their impact on how a member of society can search for, view or transmit information is too large for Google to be deemed a private company.
If a private company impacts a nation-state's democracy to such an extent that it rivals it in power, it ought to be classified as something else.
So, in your opinion, what should a government force the company to do when they are classified as what you describe? Force them to host all speech? Does that include art? Disinformation? Propaganda? Porn? Spam? Terrorism?
I don't think you've thought out the consequences of what you're advocating for.
If you have thought it out, please explain exactly what speech they should be forced to host and what speech they shouldn't be.
Right. I personally never thought it was that complicated.
Once a company is classified as a public utility (I believe Google is) it should be forced to host all legal content. You tell me what's illegal and I can safely tell you it can't be hosted by Google.
By whose laws? Even within the US, there are plenty of different laws. Should it be an intersection of all laws, everywhere? Only content which is lawful around the world? Or regionally? Should people outside those regions be segmented off from content that isn't in their region?
Moreover, you're saying that spam should be forced to be hosted by these companies, just as our snail-mail system is required to carry it. Even if it takes up exabytes of storage.
Should people who have their content removed be able to sue these companies for removing it?
They are American companies, they therefore fall under American laws.
Just with Twitter, we know that content is regulated by region. Setting your location to Germany will prohibit seeing certain content from the U.S. Many more such regional cases.
The spam example would be a problem, but it's more of an annoyance to solve than a basic human rights case. You simply cannot have a democracy where segments of the population are barred from interacting with public officials online. Especially when public business (advertising, fundraising, making political arguments) is now a core of online communications.
If you're saying US companies (that classify as whatever you're defining them as) should be forced to carry all legal speech, no matter how terrible or cruel or provocative it is, I'd be okay with this, and that means literally all spam, and that admins would not be able to moderate any legal speech. If it's any less than this, I'm not okay with what you're advocating for.
And effectively this would turn these sites into platforms that are so filled with trash they will be unusable. And the chaotic part of me would love to see that happen. But it means basically the end of these companies to function.
Realistically, I think we should keep to the standard we've had in the past: we can't compel companies to host speech they disagree with, and we should take strong measures to limit their anti-competitive behavior and break them up into competing companies if necessary (like we did with telecom)
I don't want to keep arguing. Mostly informative exchange.
I would say though, Twitter's model from around 2012 was extremely open compared with today (remember the Arab Spring?), and in no way was it an unusable, trash/spam-laden platform.
Don't be disingenuous. The problem is viewpoint discrimination. Spam isn't a viewpoint. Porn isn't a viewpoint. Libel isn't a viewpoint. We can limit the ability of tech companies to arbitrarily censor points of view while still keeping the platform free of spam.
How? Create a cause of action whereby if a tech company removes someone's content, that person can go to court and ask that a judge determine whether that content removal is some kind of anti-spam operation or viewpoint censorship. You don't let the company have the final say.
The first amendment is going to be a problem. According to the Supreme Court, those companies have the same first amendment rights that you do. Compelled speech is frowned upon.
> According to the Supreme Court, those companies have the same first amendment rights that you do.
No, those companies don't have first amendment rights. Commercial speech has always been more limited than personal speech. If the law worked like you claim, common carrier laws for railroads would be unconstitutional because they'd violate railroad company freedom of association. These laws are, in fact, constitutional, and so will be the laws that stop big tech censorship.
Besides: corporations? Rights? Total bullshit. This country ought to be run for the benefit of its citizens, not abstract entities like big tech companies. A corporation is an artificial construct that can exist only because society --- made up of humans --- determines that the corporation is in society's best interest. When a corporation is no longer in the best interest of the humans that make up the world, screw the corporation.
Once again, a candidate not being able to post to Youtube is not a threat to democracy. Nothing is stopping this candidate from posting this on their campaign site, or an RNC affiliated site or even Facebook where most of their supporters are likely to be.
> Once again, a candidate not being able to post to Youtube is not a threat to democracy.
Yes it is. Youtube is a huge conduit for communication.
Imagine if ABC banned a candidate. The argument that there’s still CBS and NBC are available is not relevant as a major media outlet is favoring a candidate by blocking their opponents.
For small outlets it’s not an issue. But YouTube is the biggest video provider on the planet, not allowing a political candidate would be detrimental to democracy.
Even if that candidate said stupid stuff like “world is flat.” People have to make their decision and as long as we’re a democracy, that choice should be individual.
If you don't have a big enough chance at winning, you don't go to debates
Mind you, news networks recently had a very strong preference towards the incumbent president, aligning their news to match the president's talking points, and having nightly calls to align their messages
YouTube would be literally interfering with a democratic election. If threat is too strong a word, fine. But you can't deny that they are actively participating in public elections. We want that? We want to privatize democracy? I don't.
Yeah, google de-indexing all pages criticizing the democrats and the company is also not a threat to democracy. They must provide us with a curated set of sound information vetted by the politicians, FAANG and the state department.
Can we at least agree on some basic facts? Google does in fact curate search. Where a page is listed does not depend on how many times the link was clicked. Yes?
>If you want your right-wing search engine, then create it.
And your true intentions come out.
If Google delists AOC from all their properties in the next election and replaces the results with those from her opposition, I'm sure you'll be the first to say it's totally fine and she should just host her own search site if she doesn't like it.
> Please tell me why a private company should be forced to host content that they fundamentally disagree with. Why should Google be forced, legally, to carry Chinese state propaganda, for example? Why should Google be forced, legally, to carry ISIS propaganda?
They already do host them. Cloudflare, the biggest CDN, routinely censors sites despite its words and claims, while at the same time defending the hosting of terrorist sites as free speech.
> the company serves at least seven groups on the U.S. State Department’s list of foreign terrorist organizations, including al-Shabab, the Popular Front for the Liberation of Palestine (PFLP), al-Quds Brigades, the Kurdistan Workers’ Party, al-Aqsa Martyrs Brigade, and Hamas.
> CEP has sent letters to Cloudflare since February 13, 2017, warning about clients on the service, including Hamas, the Taliban, the PFLP, and the Nordic Resistance Movement. The latest letter, from February 15, 2019, warns of what CEP identified as three pro-ISIS propaganda websites.
CF claims terrorist organizations' websites are free speech:
As for the whole "private speech" argument, so, Rosa Parks should have just started her own bus company too? Discriminating against people based on race was legal after all. And if race is a "different" topic, then how about religion? Religion is based on ideas. These tech companies claim to censor religion based offensive content too. But almost all LGBT content is against all religion. Isn't that offensive too and should be censored too? And just like religion is based on ideas, political opinions are ideas too.
Railroads, telecom, electricity and water companies should be able to refuse service too?
Are you against the FDA, EPA, FCC, FEC, COPPA regulations, regulations of fire insurance rates etc?
How about Net Neutrality? Private businesses should be able to charge whatever they want and for whatever content they want right?
How about the government-forced lockdowns forcing private businesses to shut down and go bankrupt?
And how about the baker who refused to bake cake for the gay couple for religious reasons?
How about the current Administration banning menthol cigarettes, flavoured cigars?
How about government banning incandescent light bulbs?
How about Fauci's emails with Zuckerberg (some of which were redacted)? Fauci is part of the government, and he worked together with FB on building their "COVID dashboard", which censored many people, especially those talking about the lab leak theory as well as Ivermectin. Is that not government-enforced censorship?
There's a whole community on TikTok (you can find them here on Twitter, too) which scorns people as "fatphobic" for encouraging fitness and weight loss. They falsely say obesity isn't unhealthy. Should that be allowed considering that's also "misleading/misinformation"? Are social media companies that allow it "killing people" (Biden's words from today)? Are social media companies also guilty of "killing people" if they allow content encouraging people to be obese, to consume fatty junk food, content which glorifies cigarette smoking and large amounts of alcohol consumption and a sedentary lifestyle?
Seems like the "it's a private business" crowd is totally okay with government enforced regulations and lockdowns for their political benefit but when it comes to political speech of their opposition, they suddenly discover the "private" business.
SCOTUS Justice Clarence Thomas opined a couple of months ago, discussing big tech censorship quite extensively. The case was about whether President Trump was allowed to block people on Twitter and whether that was a 1st amendment violation. While the case was declared moot as President Trump left office, Justice Thomas took the opportunity to discuss censorship: how politicians like President Trump aren't allowed to block users on big tech, yet big tech is able to block and ban government employees, and how this creates a weird power dynamic. Here are a few excerpts:
> "But whatever may be said of other industries, there is clear historical precedent for regulating transportation and communications networks in a similar manner as traditional common carriers. Candeub 398–405. Telegraphs, for example, because they “resemble[d] railroad companies and other common carriers,” were “bound to serve all customers alike, without discrimination." ... "Internet platforms of course have their own First Amendment interests, but regulations that might affect speech are valid if they would have been permissible at the time of the founding. See United States v. Stevens, 559 U. S. 460, 468 (2010). The long history in this country and in England of restricting the exclusion right of common carriers and places of public accommodation may save similar regulations today from triggering heightened scrutiny—especially where a restriction would not prohibit the company from speaking or force the company to endorse the speech." ... "The similarities between some digital platforms and common carriers or places of public accommodation may give legislators strong arguments for similarly regulating digital platforms. [I]t stands to reason that if Congress may demand that telephone companies operate as common carriers, it can ask the same of “digital platforms.”" ... "For example, although a “private entity is not ordinarily constrained by the First Amendment,” Halleck, 587 U. S., at ___, ___ (slip op., at 6, 9), it is if the government coerces or induces it to take action the government itself would not be permitted to do, such as censor expression of a lawful viewpoint. Ibid. Consider government threats. “People do not lightly disregard public officers’ thinly veiled threats to institute criminal proceedings against them if they do not come around.” Bantam Books, Inc. v. Sullivan, 372 U. S. 58, 68 (1963). The government cannot accomplish through threats of adverse government action what the Constitution prohibits it from doing directly. See ibid.; Blum v. Yaretsky, 457 U. S. 991, 1004–1005 (1982). Under this doctrine, plaintiffs might have colorable claims against a digital platform if it took adverse action against them in response to government threats. The Second Circuit feared that then-President Trump cut off speech by using the features that Twitter made available to him. But if the aim is to ensure that speech is not smothered, then the more glaring concern must perforce be the dominant digital platforms themselves."
> "As Twitter made clear, the right to cut off speech lies most powerfully in the hands of private digital platforms. The extent to which that power matters for purposes of the First Amendment and the extent to which that power could lawfully be modified raise interesting and important questions. This petition, unfortunately, affords us no opportunity to confront them."
The last 2 points are important as Justice Thomas is basically saying "give us a case which brings up these two questions and then we will have a deep look."
Also, based on recent revelations (Press Secretary Psaki admitting that they flag content *FOR* Facebook, advocating that anyone censored on one social network be censored on all of them, and Fauci's emails showing him actively corresponding with Zuckerberg on COVID news, which led to the censoring of anyone who brought up the lab leak theory), these companies arguably become state actors and not private companies.
Do they routinely censor? As far as I can tell they do not 'routinely' do this. As a matter of course, they routinely do the opposite. There are only two examples I can think of where they have censored.
I know there are a ton of people who, like you, want alternatives. Unfortunately nobody is willing to pay for it.
Advertisers (and governments and criminals) pay handsomely for surveillance-driven "free" platforms, but who will pay for the development, maintenance, productization, polish, and support of open decentralized alternatives? Users have been conditioned to believe that software should be free, and the more ideological people in the FOSS movement will tell you it's "not open source" if you don't give it away with no strings attached. I know people who will actually uninstall things if they do not have an OSI-compliant license.
Look at how much work it takes to develop and maintain these centralized systems. Now consider that decentralized systems are more challenging to develop and scale because you have to deeply understand distributed systems instead of just hacking some code to run on one centrally managed 100% trusted platform.
Where is the army of independently wealthy highly skilled developers who are going to do all this unpaid?
I am not optimistic. Nobody pays for freedom, openness, or privacy. All the money and momentum is behind the current user-exploiting paradigm, and now we have a generation of programmers who are learning "cloud native" development and don't even know how to develop things that don't run this way.
Edit:
I once gave a presentation to a room of college kids and was discussing a peer to peer system. A student raised his hand and asked how two devices could communicate this way without "a cloud." He was not aware that it was possible for something to communicate directly with something else over a network without a server.
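For what it's worth, the "no cloud needed" point is trivial to demonstrate: two endpoints can exchange a message over plain TCP with nothing in between. A minimal sketch using only the standard library (both "peers" run in one process here purely so the demo is self-contained):

```python
import socket
import threading

# Minimal peer-to-peer demo: one endpoint listens, the other connects
# directly to it. No central server or "cloud" is involved; both sides
# just speak TCP to each other.
received = []

def peer_listen(state):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # OS picks a free port
    srv.listen(1)
    state["port"] = srv.getsockname()[1]
    state["event"].set()                # tell the other peer we're ready
    conn, _ = srv.accept()
    received.append(conn.recv(1024).decode())
    conn.close()
    srv.close()

state = {"event": threading.Event()}
t = threading.Thread(target=peer_listen, args=(state,))
t.start()
state["event"].wait()

cli = socket.socket()
cli.connect(("127.0.0.1", state["port"]))
cli.sendall(b"hello, peer")             # direct message, device to device
cli.close()
t.join()
print(received[0])  # hello, peer
```

On a real network the only extra wrinkle is addressing and NAT traversal, which is exactly the plumbing that "cloud" intermediaries abstract away -- but the communication itself never needed them.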
This problem is not nearly close to being solved, but if you look at the more general area around "crypto"(currency), lots of projects are exploring novel forms of funding development, maintenance and usage of decentralized networks, either for themselves or as generalized solutions[1].
(Obligatory disclaimer pre-acknowledging the drive-by comments when mentioning anything crypto on HN: yes there are lots of scams, yes lots of illegal activity uses these networks, yes blockchain is often used as a buzzword)
I was optimistic about cryptocurrency, but the problem I see is that the bad drives out the good. There are so many scams that the very idea is now linked to scams in the mind of a huge number of people, driving away a ton of people who might otherwise use it for things that are not scams.
It's the "bad neighborhood effect." A few people commit crimes, so a neighborhood becomes known as "bad." The people who don't commit crimes move out. Now most of the neighborhood's residents are criminals.
A lot of financial regulation is about maintaining the reputation of markets so that serious people will use them. If too many scams, bubbles, and other nonsense goes down, the market gets an overall bad reputation.
I think this effect certainly plays a big part in slowing down progress on fundamental research in an otherwise hyper-active field. But the good thing about open technologies is that they can act like neutral tools and not neighborhoods.
Decentralization allows anyone to start their own bubble with however much curation they want. If you want crypto without scams, it's very easy to achieve. Let's not forget how terrible a reputation the whole "world wide web" had just 2 decades ago, and yet here we are complaining how locked-down and ultimately "too safe" it has become.
The space will change based more on the fundamental aspects of the platforms' technology than on transient feelings about the current ecosystem.
> I know there are a ton of people who, like you, want alternatives. Unfortunately nobody is willing to pay for it.
I don't know, I pay for O365. I know a lot of people that do. And remember; Google Drive is not free either. To get more than a minimal amount of storage you have to pay. As a result, most serious users of these services are already paying for them.
However, we end up paying AND still have our data judged. It really should be E2E encrypted, and these conditions should only apply to files that are actually shared with external people.
However, I've heard many stories of people getting their accounts banned for having copyrighted content on their drive that was never shared at any point.
If it wasn't for the fact that I mainly have O365 for other stuff (email in particular), I would never pay for OneDrive under these conditions. Imagine your computer suddenly going like "oh hey this is a downloaded movie, you shouldn't have this!!" and deleting it from its harddrive. Or worse, even forbidding you to log in and access any of your data.
Ridiculous of course but this is the situation we now have with online storage. I back my OneDrive up every day for this reason.
You can encrypt stuff to Google Drive. You can sync a Truecrypt (http://www.truecrypt.org/) folder or use something like Syncdocs (https://syncdocs.com) that does the encryption/decryption automatically.
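The encrypt-before-upload idea those tools implement can be sketched in a few lines. This is a toy XOR keystream derived with SHA-256, purely to illustrate that the provider only ever sees ciphertext; for real data, use a vetted tool like the ones mentioned above:

```python
import hashlib
import os

# Toy client-side encryption sketch: derive a keystream from a secret key
# with SHA-256 in counter mode and XOR it against the data. NOT real
# crypto -- it only illustrates that the cloud provider stores ciphertext.
def keystream(key: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is symmetric: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = os.urandom(32)                     # stays on the client, never uploaded
plaintext = b"my private diary entry"
ciphertext = xor_crypt(key, plaintext)   # this is all the provider ever sees
print(xor_crypt(key, ciphertext) == plaintext)  # True: decrypts locally
```

Since the key never leaves the client, the hosting service cannot scan, judge, or flag the content -- it just sees opaque bytes, which is the whole point of the E2E argument above.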
> whose best interests are provably not aligned with that of the general population.
Hell always comes when people feel they know what's best for everyone else, and attempt to forcibly implement it.
I'm for people being free to choose what they believe their own individual self interest is. Even if it isn't what others imagine it to be. Even if those others are right.
> We should be able to implement services like these, that are free of ads, on globally distributed infrastructure, with no central authority, to have truly free-flowing information.
Who's going to pay for and operate it? And why won't they be subject to the same pressures that got us here?
Maybe it's time we make it easy for people to host their own clouds... Like Nextcloud. An all-in-one at-home server with Owncloud Plug-N-Play. While we are at it, have it ready to go for hosting email and federated social media.
To be honest, it would be a challenge, but totally doable.
This is techno-utopianism. The problem isn't technical. Your grandparents won't figure out how to host their own cloud or use Mastodon. Your friends who work at art galleries or in construction won't figure it out either. The reason centralized services proliferate is that they cater to the vast majority of people who don't care about this stuff.
"How do we limit the harm of misinformation while preserving our freedoms?" That's a political problem. We can't just code our way out.
This is what most people here don't seem to understand. If you are banned from Facebook/Youtube/Google/Twitter, then for the mainstream public (that is, 90% of people) you are effectively banished from the Internet. This is like a candidate not being covered in the NY Times, WP, or LA Times, but it's OK because the "Quarterly Express" in "Chinook county" published a 2-page interview.
ISPs have shown increasing comfort with delisting sites they deem bad. I'm a huge proponent of commoditizing data hosting, but the culture today leans heavily toward "filtering" information.
It's not really culture. It's a small minority of radical extremists who get their own way repeatedly by threatening meltdowns out of all proportion to the severity of the problem, and emotionally manipulating their own managers ("you're a bad person if you don't do this"). And they do it again and again, until the organization starts to collapse and becomes a mere tool for their political agendas.
The best way to push back on this is not some p2p techno fix. It's to systematically start firing anyone in an organization who demands the moral cleansing of customers or colleagues.
ISP censorship is next on their list… They are going to push normal discourse underground, and bad ideas will just fester instead of being naturally filtered out in the proving grounds of public discourse.
A "public square" online designation would effectively be the nationalization of whatever service gets that designation.
I, for one, don't think the US government should be in charge of Twitter.
Taking private companies and forcing them to say certain things is not how I'd like this country to evolve, and as long as the 1st Amendment exists, is not how it will evolve. "Public Square" designations for private companies cannot exist alongside the 1st Amendment, period.
I agree, but my solution was self-hosted at home, not on big tech's slippery back. But I guess the same could be said of your ISP.
Although with the increasing use of private gaming servers and streaming, ISPs are finally starting to accommodate the general population with decent upload bandwidth.
Who knows, it won't be long until ISPs start filtering content on a massive scale.
Hanlon's razor has really messed up people's discernment. There are plenty of amoral, if not malicious, decisions by these companies, and these behaviors are intentional. All you HN commenters think you see the consequences clearly, and the execs making these decisions are doe-eyed innocents led down the wrong path... Are you serious?
> We should be able to implement services like these, that are free of ads, on globally distributed infrastructure, with no central authority, to have truly free-flowing information.
Yes! And the way we do it is simple: amend the US Constitution to abolish copyright and patents and ensure people have the right to intellectual freedom.
Are we, as humans, entitled to a certain level of UX, or are we simply entitled to the ability to share information?
Because if it's the former, then yeah Google itself needs to be replicated and provided for all humanity. If it's the latter, well what does FTP not have?
I'm having a very hard time seeing how one specific implementation of any technology becomes so important that to live without it is to be deprived of a human right.
You need to remember that people actually wanted Google and tech companies to be arbiters of truth and moderate content. This is what people asked for, so the real issue probably has to do with why people are comfortable with giving this amount of power to companies like Google in the first place.
"We should be able to implement services like these, that are free of ads, on a globally distributed infrastructure, with no central authority, to have truly free-flowing information."
Google's job is to take away the motivation to do this work.
Alternatively, if it looks like someone is doing it, hire or acquire them.
You agree not to, and not to allow third parties or Your End Users, to use the Services:
to intentionally distribute hoaxes, or other items of a destructive or deceptive nature;
Your failure to comply with the AUP may result in suspension or termination, or both, of the Services pursuant to the Agreement.
The availability of paid email hosting and document creation, editing and storage is, IMO, an example of the point I was making.
If, e.g., businesses and other organisations were potentially upset about Gmail scanning (remember Google was sued for this and lost), then that dissatisfaction arguably provides motivation to avoid Gmail.
That motivation could potentially spread outside of businesses and other organisations. Google's job is to take away such motivation.
Arguably Google Workspaces is Google doing its job of quelling a potential uprising, so to speak. They are taking away potential motivation to innovate away from Google and online advertising. (It also highlights the inherent flaws in a "free" services model. For many businesses and organisations, this is not acceptable.)
Meanwhile, the company's core business remains unchanged: providing services for "free" while generating billions in profits through online advertising services.
I always wonder if it will be challenging to get people used to the idea of paying for these services from a non-Google provider. Although Google hoovers up cash hand over fist (hence the high salaries), having a well-polished set of apps that work as well for low cost is going to be a hard pill to swallow for so many folks that are just used to "free email" by now.
I host my own email and pay about a dollar a month and my non-tech family members look at me like I'm some fool for not using the free stuff. Conversations about the value of data just go woosh over their heads :|
The alternative to Google is self-hosted Open Source and Free Software and it exists today. Unfortunately for you it isn't easy to find work on self-hosted FLOSS projects and there isn't much demand for consultants helping deploy those projects.
We developed an alternative to google and the services that it provides. It's part of a growing open source ecosystem that includes NextCloud for file management.
If you want a totally free and open source alternative to Facebook and Google, that you can run on your own servers, you can find it here:
Perhaps. But isn't Google (and Facebook, Twitter, etc.) also in a bit of a no-win situation? If they implement these types of measures, they get accused of censorship and of being the "arbiter of right vs wrong". If they don't, they get accused of helping spread fake news and false information.
Putting aside intent, cost, etc.: what can or should these companies do that'll make everyone happy?
For posterity, readers should note this comment was the OP, initially. Not the support.google.com link. 980+ comments later, HN changed the thread and buried semitones' comment down here. semitones' point was a reasonable and important one: maybe people should think twice about hosting files and other data with a company like Google.
I don't think the answer is alternatives, as you would just be trading one master for another.
What we need is 1) for clients to retain control of their data and 2) standardized formats and interfaces for handling data. If moving off of Facebook was as simple as creating an account somewhere else, then things would be different. If Google doesn't own your data you can tell them how they are allowed to use it.
As for Google, I've given up on anything revolutionary from them. They have a lot of smart people with good ideas but they've grown so big that they can't "move fast and break things" without breaking whole sectors of the economy--and given the number of anti-trust suits Google is embroiled in, I think the governments of the world know it. That said, Google generally seems to at least try to do the right thing, even if no one there seems to know exactly what that is.
> We desperately need to find a globally adoptable alternative to google and the services that it provides
I'm pretty sure Facebook and Twitter aren't far behind in the pursuit of also actively banning content they deem misleading. I don't think it's a Google-specific problem.
> We should be able to implement services like these
Implementation would be easy if we knew how to solve payments for decentralized hosting, maybe with ads as a possible business model. And the payments brokers mustn't be the central points either.
As always, the question is who's going to do better? The Americans already gave it their best shot and this is where they ended up. Certainly not any of the members of the EU, it'd get hobbled by regulation as they've never cared about actual free speech of the kind the Americans value. It's sure not going to be coming out of anywhere in SEA or NAME with the way governments in those regions tend to operate. So who's left to do better at a scale that matters?
I guess you can be somewhat brave here knowing that you can get a new job in about 5 minutes. You might want to consider making a principled stand here, though.
Tools like Skiff[https://www.skiff.org/] are on the path of being that "alternative" which is, quoting from their website "..a privacy-first collaboration platform with expiring links, secure workspaces, and password protection."
Don't quit, organize with other google workers to fight against the policies you disagree with from within until they fire you, then take a fat unemployment check until you find your next gig - somewhere hopefully with fewer moral compromises.
They don't want BS that is harming everyone on their property.
I would ban the same on my property.
That said, there is no reason to ever use Google. Nothing they offer is so critical that you must use them.
I don't use Google for anything and never will because they are a spyware company.
You willingly work for a spyware company and are trying to claim personal morals? How does that work in your mind? Genuinely curious.
> We should be able to implement services like these, that are free of ads, on globally distributed infrastructure, with no central authority, to have truly free-flowing information.
How does that work?
Who pays for it?
Who manages it?
You realize that would create a de facto government if an actual government wasn't running it.
Look at any website that has truly "free-flowing information".
They all end up being a cesspool filled with racists, sovcits, terrorists, and pedophiles.
Even places like Kiwi Farms have rules, and delete posts and ban users that violate their few rules.
I think Urbit’s first principles approach to this is really interesting.
They’re the only ones I’ve seen that seem to have hit on a reasonable trade off to fix the common issues around federated services (simple administration, keeping versions in sync, first class network between users etc.)
The functional OS design is really cool too - I think it has potential to solve a lot of these issues.
I don’t really care that Google is removing dumb content from their services, but I agree that computing can be a lot more than it is. Users have lost ownership over their own compute. We have PCs, but everything is in the browser on someone else’s servers. Urbit is cool because it’s a way to get that ownership back.
We need, will have, and indeed already do have other alternatives. But as soon as one grows big enough to get public attention, censorship will come, because the majority wants that.
Almost no one in this industry is actually making the world better. Just take the money and use that to make the world better if you can. Especially since all things considered your role at Google probably doesn’t make things better or worse either way.
I think what we need is a users' bill of rights. There should be certain rules a company just can't decide to make, and there should be certain rights afforded to every user.
Without that we're just choosing between which monarchy to rule us.
These alternatives exist though, they are just not popular. We can’t very well force people to adopt these alternatives to make them “global”.
It’s also not clear to me what these alternatives should do differently. Should they just host anything as long as it’s not illegal? I just read an article today about how social media is killing people by spreading vaccine misinformation. So they’re going to get pressured anyways, just from a different source.
So happy we have companies like Google doing moral arbitration versus companies like Walmart/Newscorp doing this. Wait till round two of Idiocracy capitalism, we will miss these days.
The toxicity on HN is a strange combination of vitriol and superiority complex. I can't really think of many other forums that spew such hatred while morally grandstanding to such an extent.
> Google's increasing role as an arbiter of right vs wrong and a steward of information
That is exactly what's needed.
When there's no penalty for bullshit, lies tend to self-perpetuate, like viruses.
You may be fixated right now on some half-baked ideal of "free speech" and so on, but the fact is - a lot of people out there can't tell left from right. And in situations such as the current pandemic, when the consequences of spreading bullshit are thousands of deaths, your naive ideas about "freedom" need a reality check.
> truly free-flowing information
Which will then immediately get hijacked by bullshit-spewing AI bots and folks with agendas. See what happened recently with all the "freedom" emphasizing social networks.
What you're proposing is the online equivalent of a country with no military, no police, no laws and no judicial system. This may be fine in some Ayn Rand fanfic novel for young adults, but it's not a reality where anyone would want to live.
The “public square” is free for speaking because it belongs to everybody.
As long as speech happens in a private place (e.g. Google's servers, Facebook's…) it is subject to money. It will never be free because it is not "public".
Google, Facebook, Twitter have positioned themselves to be the de-facto public square by buying up their competition and making themselves effectively the only game in town for what they do.
All these services are easy to replace. What isn't yet available elsewhere is the content discovery possibilities provided by Youtube, especially what would be considered "pirated" content on other sites without deep ties to the government (e.g. full CDs, movies that get uploaded and stay on yt for a long time, to be downloaded by millions).
"All these services are easy to replace" - I would like to agree, but I think that statement trivializes the endeavor a bit too much.
For Docs, for example: sure, from a technical standpoint, creating a webapp that lets you modify a document, syncs over new changes on a regular basis, and lets multiple users collaborate on the same doc at the same time is certainly harder than making a hello-world calculator, but it's definitely not that hard. Most of us could figure out how to do it, especially since it's 2021 and our hardware is up to the task.
However, what is stupidly non-easy, is getting billions of people to use your solution, and for it to become the "standard" in society, for powerusers and non-powerusers alike, and for it not to have hiccups when a billion people use it. Oh, also, you need to somehow pay for the physical infrastructure that supports it, and the human beings that maintain it (hardware + software).
The situation would be even more complicated for YouTube because of the nature of the content, i.e. a single video on YouTube might be viewed by billions of people, while even the most popular docs are viewed by thousands at most, and even then making a copy of a text document to then distribute off-platform is pretty trivial.
And yes, we can spit out terms like blockchain w/ proof of stake, IPFS, etc. - but the tech isn't even the hard part, it's everything else that's hard (adoption, consensus, complexity for non-powerusers, funding, etc.)
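The "sync changes, multiple users" part really is the easy bit. Here's a minimal sketch, assuming a toy last-write-wins merge (all names here are hypothetical; real collaborative editors use far richer conflict resolution such as OT or CRDTs):

```python
import time


class Document:
    """Toy collaborative document: each field carries a timestamp,
    and merging two replicas keeps the most recent write (LWW)."""

    def __init__(self):
        self.fields = {}  # key -> (timestamp, value)

    def write(self, key, value, ts=None):
        self.fields[key] = (ts if ts is not None else time.time(), value)

    def read(self, key):
        return self.fields[key][1]

    def merge(self, other):
        """Pull in another replica's changes, preferring newer writes."""
        for key, (ts, value) in other.fields.items():
            if key not in self.fields or ts > self.fields[key][0]:
                self.fields[key] = (ts, value)


# Two users edit their own replicas independently...
alice, bob = Document(), Document()
alice.write("title", "Draft v1", ts=1)
bob.write("title", "Draft v2", ts=2)
bob.write("body", "hello", ts=2)

# ...then sync in both directions: the replicas converge.
alice.merge(bob)
bob.merge(alice)
assert alice.read("title") == bob.read("title") == "Draft v2"
assert alice.read("body") == "hello"
```

Which is exactly the point: a weekend's work gets you the mechanism, and none of the actually hard problems (adoption, scale, funding).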
What has to be "standard"? The application itself or its UX? I'm pretty sure it's the latter, so all it takes is a familiar UI and low-friction approach. You don't have to be a world famous brand for people to be able to use it.
Exactly. We only need a "standard" because everyone builds walled gardens and easy content distribution to multiple platforms is intentionally difficult. Right now you have to watch a "YouTube" video, instead of just a video on YouTube.
Standards != centralization. Standards apply more to protocols, and centralization applies more to ownership of data.
For instance, there is an enormous benefit to the entire world from using a standard IP/TCP stack. Yet, we don't seem to suffer from a centralized authority making controversial or conflict-of-interest decisions regarding that stack.
So what I was suggesting is that document-editing abide by some global protocol/standard, which would include storage, versioning, and permissions. As far as the choice of user-facing interfaces and implementations of logic around those standards, there need not be just one, or even a few. Everyone can bring something cool to the table and anyone can use whichever flavor they wish.
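To make that concrete, here's a minimal sketch of what a provider-neutral interface covering storage, versioning, and permissions could look like. Every name here is hypothetical, not any real protocol; the point is only that clients code against the interface, and any backend (self-hosted, commercial, peer-to-peer) can implement it:

```python
from abc import ABC, abstractmethod


class DocumentStore(ABC):
    """Hypothetical provider-neutral standard: storage, versioning,
    and permissions, independent of any particular vendor."""

    @abstractmethod
    def put(self, doc_id, content):
        """Store content, returning a new version number."""

    @abstractmethod
    def get(self, doc_id, version=None):
        """Fetch the latest (or a specific) version."""

    @abstractmethod
    def grant(self, doc_id, user, role):
        """Attach a permission ('viewer', 'editor') to a document."""


class InMemoryStore(DocumentStore):
    """One possible backend; a Drive- or Nextcloud-style service
    would be another, behind the same interface."""

    def __init__(self):
        self.versions = {}  # doc_id -> [content, content, ...]
        self.acl = {}       # doc_id -> {user: role}

    def put(self, doc_id, content):
        self.versions.setdefault(doc_id, []).append(content)
        return len(self.versions[doc_id])

    def get(self, doc_id, version=None):
        history = self.versions[doc_id]
        return history[-1] if version is None else history[version - 1]

    def grant(self, doc_id, user, role):
        self.acl.setdefault(doc_id, {})[user] = role


store = InMemoryStore()
v1 = store.put("spec", b"first draft")
v2 = store.put("spec", b"second draft")
store.grant("spec", "alice@example.com", "editor")
assert (v1, v2) == (1, 2)
assert store.get("spec") == b"second draft"
assert store.get("spec", version=1) == b"first draft"
```

Swapping backends then means changing one constructor call, not migrating your data out of a walled garden.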
To avoid Google Drive/Docs/etc you don't have to boil the ocean. Buy a Synology DiskStation and just use the included software
It's a solution that works for anyone with basic IT skills. Offer your extended family access, and use the provided software to control permissions etc
Also, non-powerusers will gravitate towards the simplest solution to their problem, and that will also tend to be the one that the highest number of people understand how to use.
Deep ties to the government alone won't do it. You'll need deep ties to these content industries, and that will cost you money. Google has it. Most startups probably won't.
Point is Google can offer services like Docs for free and still invest money in continually improving them because it sells advertisement-space on its free products.
Sure you could build your own. But how would you finance the maintenance of your software and its free-for-customers delivery platform? Answer: By either selling advertisement, or by charging people for using it.
The problem really is that by now Google has a monopoly. Therefore it is difficult or impossible for you to compete with them.
As a user, I don't need to compete with Google. I just need an alternative. If I needed collaborative document editing (as a replacement for Docs, a feature I don't use), I'd probably try something like https://www.samepage.io/ . My only use of Docs at the moment is for maintaining some stock price calculations and tracking with the =GOOGLEFINANCE(...) function. There's probably some personal finance apps out there that do the same.
Funny enough, I arrived at the opposite conclusion during my tenure as SWE... Google wasn't taking responsibility for the wide sweep of its arms as the gorilla in the room, and was trying to stay neutral ("We just build services; expecting us to be responsible for them is like expecting fire not to burn") in a situation where they fundamentally can't.
Google is already non-neutral... they ban all sorts of content on search and comply with law in multiple countries. It's never a question of "Should Google be neutral," but instead "How neutral."
In theory they can... in reality something bad will happen, someone will have written about it in a google doc, and the narrative will read "...they had access, yet evil/neglectful Google did nothing!".
No. That appeal to some notion of free speech is bunk.
They can “stay neutral” by limiting the ability to share widely. Free speech doesn’t mean that you are entitled to a billboard, radio station, or global content delivery network.
By aligning global distribution with easy content marketing, Google, Facebook, etc. created a monstrosity that encourages the worst content and systematically eviscerates high-quality content.
That fundamental lack of understanding by naive engineers drives a lot of the problems we have.
I'm afraid I don't follow. In what way does it differ?
> Are you saying all online content should be subject to censorship just because it can be accessed worldwide?
I'm saying no CDN is obligated to vend content in violation of its TOS. Content available at drive.google.com is quite different from content hosted at your-site.com: you control the content at the latter. It's your TOS to enforce there.
"A content delivery network, or content distribution network (CDN), is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance by distributing the service spatially relative to end users." That accurately describes Drive's infrastructure. It has additional bidirectional support, but seamlessly transitions to unidirectional distributed broadcast if many people connect to the same file.
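That quoted definition boils down to one routing decision: serve each user from the spatially closest node. A toy sketch, with made-up edge locations:

```python
import math

# Hypothetical edge locations: (name, latitude, longitude)
EDGES = [
    ("us-east", 39.0, -77.5),
    ("eu-west", 53.3, -6.3),
    ("ap-south", 1.35, 103.8),
]


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


def nearest_edge(user_lat, user_lon):
    """Route the user to the spatially closest edge server."""
    return min(EDGES,
               key=lambda e: haversine_km(user_lat, user_lon, e[1], e[2]))[0]


# A user in Paris lands on the European edge; a user in Tokyo on the Asian one.
assert nearest_edge(48.9, 2.4) == "eu-west"
assert nearest_edge(35.7, 139.7) == "ap-south"
```

Production CDNs do this with anycast and DNS rather than literal coordinates, but the goal, distributing the service spatially relative to end users, is the same, whatever the write path looks like.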
> We're talking about a file sharing service used for collaboration
That's one way to use it. Another way is to cheaply host content and then post the link to Facebook or Twitter. It seems Google is reserving the right to not support that use case.
> It would be like AT&T disconnecting calls
Private calls are a different model from a public & shared link. It's a bit more like a television station cutting a live broadcast if the person on screen starts swearing a blue streak. Though that analogy is also not perfect... Hosted publicly-accessible documents are kind of their own thing. It's not regulated as a common carrier and it's also not a newspaper.
> That accurately describes Drive's infrastructure.
It accurately describes a piece of infrastructure that many (if not most major) services including drive use.
It does not describe drive, but it’s a moot point since there is nothing relevant about calling drive a CDN.
> Private calls are a different model from a public & shared link.
Not at all. There is nothing stopping a phone call being made to a radio station, or placed on speaker to an audience of multiple people. This is analogous to the use case of posting a link to Twitter or a Facebook group.
> There is nothing stopping a phone call being made to a radio station, or placed on speaker to an audience of multiple people. This is analogous to the use case of posting a link to Twitter or a Facebook group.
This is actually an excellent demonstration of why phone calls are a poor analogy for publicized Drive links: in the scenario that you describe, if the radio station has not gotten the consent of the caller to broadcast the call, or the circumstances don't suggest the call will be broadcast (such as if someone calls the station's customer complaints line and that line is patched inappropriately into the broadcast circuits), in most states (and under FCC guidelines) the radio station could be held legally liable for violating the caller's privacy.
No such law exists for a Google drive document link.
Shared documents are their own thing and previous models don't really precisely fit them. There isn't really any precedent to say that Google's position here is wrong; arguments by analogy break down quickly. A Drive document can be used as a private scratch pad, a multi-user scratch space, or as a stored read-only piece of data intended for broad dissemination on a reliable content network; the technology doesn't distinguish the use cases, and Google applies more or less the same policies to all of them.
Can you provide a precedent for Google's position being wrong other than reasoning by analogy from other telecommunications infrastructure (which we already treat differently from Google Drive shared documents)?
[Edit: your comment changed after I posted mine]
Oh, if your primary concern is that Google reserves the right to get into private collaborations and modify access to those documents, that's something other than what I'm arguing. I'm personally ambivalent (their house and their rules... If you don't like the rules don't live in the house), but I can see why people would have more concern with that situation than the situation where they block wide-cast sharing via a view-only link.
> Can you provide a precedent for Google's position being wrong
You don’t need a legal precedent to determine what is wrong. You are making the logical fallacy of affirming the consequent here.
> Oh, if your primary concern is that Google reserves the right to get into private collaborations and modify access to those documents, that's something other than what I'm arguing.
They do, and ok.
> I'm personally ambivalent (their house and their rules... If you don't like the rules don't live in the house)
This is a sentiment that I generally agree with as life advice, but is a non-contribution to these kinds of discussion.
> but I can see why people would have more concern with that situation than the situation where they block wide-cast sharing via a view-only link.
There is no difference between a small scale collaboration and what you have chosen to label a ‘wide-cast’ link, other than how many people read the document.
That's not actually neutral; that's letting harmful content spread at the speed of Internet discourse. We already know, for example, that they aren't going to allow Google Docs to be used to wide-cast child pornography. They never have. It appears the only change is that they're adding new categories of misinformation to the "harmful content" list.
When you have a reach like Google, neutrality isn't an option. And their mission isn't neutrality anyway; it's "To organize the world's information and make it universally accessible and useful." Dangerous misinformation coming from a google dot com domain isn't useful.
My mind boggles that people have opinions like this. It is so anti-liberal. Every evil regime in history takes it upon itself to define and eliminate hateful content, using the contemporary unconscionable act to justify this evil.
To be a good liberal, Google has to decide if it's a publisher or platform. IF it's a publisher, then the hateful content is coming from them and they must take editorial control of google docs and whatnot. If they're a platform provider, then they get the same protections as telephone network operators and others from the actions of their users.
This current situation is the start of the road to tyranny.
>Every evil regime in history takes it upon itself to define and eliminate hateful content, using the contemporary unconscionable act to justify this evil.
>This current situation is the start of the road to tyranny.
I'm not a historian, but I can't think of many examples where an evil regime gained power by slowly "boiling the frog" and gradually eroding free speech rights.
Communist dictators typically seized power in a violent revolution spurred on by deliberate misinformation campaigns.
Many of the Middle-Eastern dictators seized power in a not-so-violent revolution (at least, not violent enough to cause a full-scale civil war) spurred on by deliberate misinformation campaigns.
The colonial powers maintained their grip on power by denying education to the natives, and when they left the dictators that took their place seized power by violence even in a Western legal framework.
Maybe the Nazis did this (I'm honestly not sure), and maybe Julius Caesar too? But this doesn't seem to be a set-in-stone, guaranteed way of seizing power.
CP is already illegal, and Google doesn't need to arbitrate whether it's illegal or not. Blocking CP is neutral and basic.
It’s when google decides to block things that aren’t illegal that they get into the weeds, and companies framing opinions they disagree with as “Dangerous misinformation” is itself dangerous misinformation, and a net negative on society.
Google should not be arbitrating truth. They are not qualified, not capable, and not honest enough to do it, and they will never be.
I don’t know a single person in real life I would trust to censor what I can and can’t see, and I trust google much less than those people I actually know.
It's funny to me that, in my mind, HN = SV and SV is hyper-liberal; and on NPR, which is also fairly liberal, all the shows I listen to are calling for exactly "Google and Facebook need to ban all speech we don't like".
This isn't Google's problem. It's a society-level problem. Google is just responding to the pressure.
Modern liberal is generally fairly pro-censorship; pro-authoritarian. The word's definition has just flipped in recent years, so it means different things to different people.
Google doesn’t want to eliminate its cash cow, and doesn’t want to be associated with crazy fringe people. It’s pretty simple really.
Running this stuff through a “liberal” or “conservative” lens isn’t productive. Big public companies care about making money and eliminating risks associated with doing so.
I think you underestimate the desire of people to conform, to push their political agenda through their work, etc.
Big companies aren't at risk from losing money for quashing unpopular speech - it's exactly why unpopular speech is the speech that generally needs protecting.
That's narrative, not reality. There's plenty of blowback, but that's why it's important to protect unpopular speech. It's why the ADL argued the Nazis should be able to march in Milwaukee. Because when you create a mechanism to censor and restrict, you have to know that it can be turned against any speech those in control find they don't like.
Are nazis marching near the homes of holocaust survivors disgusting? Sure. But there's a reason why it should be allowed if you and your children and their children are going to experience fundamental freedom instead of the arbitrary wishes of a government that may replace this one or the next one or the next one.
Sadly, today we live in a world where the town square is digital but the government has failed to declare it a common carrier. That is a failing of society and a reason just a few people can control the speech of billions.
You are technically correct based on current legal precedent but only because of the government's failure to regulate these companies not because it's inherently so.
If Milwaukee had been a company town they would still have been required to allow it, but because of weak politicians failing to ensure free speech is preserved we have arrived at a point where Mark Zuckerberg or Google's trust and safety team can arbitrarily ban speech for billions without any consequence or legal challenge.
Meanwhile, just the other day the White House spokesperson was asking why, if someone is banned on one platform, they aren't banned on all of them automatically. If you don't see this dystopian future just over the hill, you never will until it's too late.
To reframe, perhaps we live in a society where anyone who wants to can join the town square by putting up their own website, but almost everyone would much rather hang out in the hotel lobbies of Facebook, Google, and Twitter because the amenities are much nicer and those companies hand out free megaphones (in the form of interest-surfacing algorithms).
Is it "censorship" when they simply choose not to be the medium to communicate that data to you?
If so, that puts the entire search apparatus in the category of "censorship," since it makes opinionated decisions regarding what the answer to your query should be. Choosing to refrain from vending a Drive URL is basically the same thing.
Edit: I could see an argument that they're being a bad steward of other people's data if they choose not to honor share requests on content they host or choose to remove content they've previously hosted. In which case, I'm glad they're putting the fact they'll do that right in a public disclosure, and it is something people should consider when choosing Drive to host their content.
The search apparatus is exactly the target of those trying to implement censorship.
Six months ago saying covid-19 originated in a lab was verboten on many platforms - saying it on facebook or YouTube would get you called a purveyor of misinformation and the content deleted and and your account at risk.
Suddenly it turns out the people involved at the government level had funded exactly that work, and that they worked with these companies to define what was misinformation; and suddenly Jon Stewart is making jokes about it and these companies allow it to be talked about again.
If you don’t understand how dangerous the platforms that house our public speech banning some speech based on “misinformation” is you aren’t paying any attention. They have set up their systems to detect/downrank/remove arbitrary content and that will be used for political reasons - it already has been and quite recently.
We live in dangerous times and a lot of people are oblivious.
It appears the CDC had been inaccurately portraying the death rate(s) (I assume unintentionally). In particular, it appears there's a significant number of unexplained deaths. That could be deemed "misleading", because I regularly collaborate with others and we update the data.
"Misleading" does not simply mean "inaccurate". Often the context matters, and how is Google going to take this into account? We have multiple theories and discuss them, find more information and put it together.
> Guess it contradicts the official position of the current administration (not reality). So will my drive content be removed?
Below is the actual policy. What term do you think your blog post violates?
> Do not distribute content that deceives, misleads, or confuses users. This includes:
> Misleading content related to civic and democratic processes: Content that is demonstrably false and could significantly undermine participation or trust in civic or democratic processes. This includes information about public voting procedures, political candidate eligibility based on age / birthplace, election results, or census participation that contradicts official government records. It also includes incorrect claims that a political figure or government official has died, been involved in an accident, or is suffering from a sudden serious illness.
> Misleading content related to harmful health practices: Misleading health or medical content that promotes or encourages others to engage in practices that may lead to serious physical or emotional harm to individuals, or serious public health harm.
> Manipulated media: Media that has been technically manipulated or doctored in a way that misleads users and may pose a serious risk of egregious harm.
> Misleading content may be allowed in an educational, documentary, scientific, or artistic context, but please be mindful to provide enough information to help people understand this context. In some cases, no amount of context will allow this content to remain on our platforms.
I can see how information that contradicts the CDC falls under "Misleading content related to harmful health practices: Misleading health or medical content that promotes or encourages others to engage in practices that may lead to serious physical or emotional harm to individuals, or serious public health harm."
Of course, it wouldn't actually fall under anything since it's not misleading, but such things get interpreted by social media censorship boards as misleading.
Both interpretations seem like mighty big stretches to find this particular content noncompliant with that policy. IMHO, if the policy gets stretched like that, then the issue isn't with the policy itself.
I mean, you could make similar stretches to hypothetically ban discussion of tax increases, because of the serious emotional harm that would cause to wealthy people fearing the loss of their money.
Honestly, the only issue with the actual text that I see is the reference to "emotional harm," given how subjective that is and how certain ideological propositions can be medicalized via that route. The rest of it is very reasonable, especially the paragraphs about civic processes and manipulated media.
"mighty big stretches" are actually fairly common. Ever been in hostile contact with, say, law enforcement?
Marijuana in the US is federally banned because the Supreme Court ruled (in Gonzales vs. Raich [0]) that planting cannabis privately for your own use can have impact on interstate trade and thus is a matter for the Feds. If this isn't a mighty big stretch, then what?
And if SCOTUS can do that, Google can do that as well.
It's not a stretch because it's already being done. The reason that it seems absurd that Google would ban discussion of tax increases as a public health issue is that there aren't anti-tax groups who are already (as in right now independently of Google) calling tax increases a public health issue. Calling gun control a public health issue may be absurd, but it's an absurdity which is in active use in the real world.
If the Biden administration put out a press release stating that taxes were a public health issue, there might be more reason to worry that Google would use "public health" to censor "misinformation" about taxes.
I present arguments that could be considered "misleading" based on the administration's official position. Personally, I'd like to actually fix the issues; to do so, we need to discuss them. With that in mind, I wrote something we can use as a framework to discuss the issues.
----
I simply did a deep dive into the data and found interesting results that differ (this is just a random selection):
(2) The White, Hispanic, and Asian populations have some of the lowest firearm homicide rates in the world. In contrast, the black population has one of the highest firearm homicide rates, which pushes the U.S. average up. (This arguably could support the systemic-racism theory, but it is a fact.) https://austingwalters.com/firearms-by-the-numbers/#Comparin...
(3) The CDC & FBI crime statistics show that <0.5% of the population is murdered by firearms in a given year (~1-1.5% if you include suicides).
(4) Self-defense homicides are included in the data
Sorry, but your content is misleading. Take for example the following quote from Amnesty International, which you cite in your article:
> governments [with] poor regulation of the possession and use of guns lead to violence and that they must tackle this now through strict controls on guns and effective interventions in communities suffering high levels of gun violence.
From this you say the following:
> The key statement is:
Guns lead to violence
The statement above implies a couple of things:
1. Gun volume and violence are correlated
2. As the number of guns increase, violence increases
—————
This is a blatant distortion of what that quote from Amnesty is saying.
They are clearly saying that _poor regulation of the possession and use of guns_ leads to violence.
You are quite obviously engaging in bad faith arguments.
I think you’re splitting hairs, and this is my point: you pulled one quote out of a many-thousand-word document. Not only that, you suggest dismissing all the actual data based on this quote. This is an introductory quote that has no bearing on literally _anything_ presented. The framing of the discussion, perhaps, but not the data or conclusions.
In terms of why I thought this was interesting, the key statement is:
> poor regulation of the possession and use of guns
Meaning, the regulation of possession (i.e. reducing the number of guns in people's hands, particularly for certain classes of people) and the use of guns, i.e. when and where they can be used (open carry vs. concealed carry vs. home only, etc.).
There’s no bad faith, this is literally what they said. I then expand and look at factors that should have correlations to the above.
What I did is look at the data we had. The data actually indicates the places with the strictest gun laws in the US have the highest firearm related homicides. This implies regulations aren’t the issue (enforcement perhaps?). Regardless, the rest of the thousands of words explain and explore the topic.
> It also includes incorrect claims that a political figure or government official has died, been involved in an accident, or is suffering from a sudden serious illness.
In other words, it bans correct claims that Hillary Clinton had health problems, and that Joe Biden has dementia.
>> It also includes incorrect claims that a political figure or government official has died, been involved in an accident, or is suffering from a sudden serious illness.
> In other words, it bans correct claims...that Joe Biden has dementia.
Do you have a source for that that isn't blatant uninformed speculation, hopes and wishes, or a doctored video [1]? Preferably one that is well known and has enough credibility to not be banned from Wikipedia.
Also, which of these is dementia?
1. death
2. an accident
3. a sudden, serious illness
[1] IIRC, during the election there was one pushed by the Trump campaign that was misleadingly edited and slowed down to create the false impression that Biden had dementia.
I mean - Who gets to define 'misleading and/or confusing'? Google? A court case?
If they (Google) want to impose restrictions such as "Do not distribute content that deceives, misleads, or confuses users" - Might I suggest that they apply those very same standards to their own behavior.
*But who can watch the watchmen?*
I am grateful that I have a symmetric ftth connection. Information (correct or incorrect) needs to be free - as in Free to be expressed; Free to be ridiculed; Free to be disseminated; Free to be discussed; Free to be exposed to the light of day.
It is for the people themselves to decide what they do or do not believe.
Do I want Google to decide what is or is not misleading or confusing? Eh??? Say Whhaaatttt!!! Ever read a contract's terms and conditions? They can be confusing AF… so uhhmm Yep – Let’s Ban ‘em! Woot!
> Who gets to define 'misleading and/or confusing'? Google? A court case?
They claim that anything "misleading" "includes information [...] that contradicts official government records."
This is a wildly Orwellian way to dictate "truth": it's whatever the government body says is true. (1984 _literally_ has a "Ministry of Truth".)
Authorities can be wrong, and can _themselves_ be incentivized to mislead. Why place the center on them vs. individual responsibility?
Helping people understand, weigh, and index information and sources is an important problem - but solving it in this way is _absolutely not_ the right way to do it.
That's not correct, the text with more context is:
> This includes information about public voting procedures, political candidate eligibility based on age / birthplace, election results, or census participation that contradicts official government records.
There is no wider applicability of "government records" specified.
So, why is the government considered infallible precisely on those topics, if it is tacitly understood that it can actually make mistakes elsewhere?
Also, the formulation feels strangely US-centric, even though people use Google Drive all over the world, and it does not explicitly say which governments are considered infallible in their census records. With no qualification, "government" could just as well be the government of Iraq, Hong Kong, or Belarus.
> I am grateful that I have a symmetric ftth connection. Information (correct or incorrect) needs to be free - as in Free to be expressed; Free to be ridiculed; Free to be disseminated; Free to be discussed; Free to be exposed to the light of day.
You have a symmetric FTTH connection for now. Host something somebody considers too offensive and they'll get your connection pulled. We don't hear about this because essentially nobody hosts their own stuff at home, but all it would take is identifying the ASN of your IP address and making a sufficiently loud noise on Twitter.
You are absolutely right. If I ran un-encrypted file sharing or torrented something that matched a hash somewhere in some system, yes, absolutely my connection would be shut off. While I could debate the rights and wrongs of that, I take the coward's approach and just shuffle random bits of bytes backwards and forwards.
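For what it's worth, the "matched a hash somewhere in some system" mechanism the comment alludes to can be sketched in a few lines. This is a toy illustration, not how any provider actually implements it: the blocklist entry below is just the SHA-256 of the string "test", and real systems use proprietary lists and often perceptual or chunk-level hashes rather than whole-file digests.

```python
import hashlib

# Hypothetical blocklist of hex digests of known-bad files.
# This entry is just sha256(b"test"), purely for illustration.
BLOCKLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(data: bytes) -> bool:
    """A shared file is flagged iff its digest appears on the blocklist."""
    return sha256_of(data) in BLOCKLIST

print(is_flagged(b"test"))      # True: digest is on the blocklist
print(is_flagged(b"my diary"))  # False: anything else passes untouched
```

Note that an exact-hash scheme like this is trivially defeated by changing a single byte (or encrypting the file), which is why the "shuffle random bits" joke above works.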
I think Google would agree with you. If you're going to share information that (in Google's perception) is wildly misleading, they'd prefer you do it off a domain that doesn't have "Google" in its path. And all of us are free to do that.
I think Google (and the rest of FAANG) is wrestling with the uncomfortable possibility that they are pawns in several nation-states' disinfo campaigns, and they suspect their previous lack of intervention and professional having-no-opinion on questions of fact made everything worse.
It's possible that "Don't be evil" means "exercise control over the things you create." Frankenstein's monster wasn't created evil... It learned cruelty after its creator abdicated responsibility for it and it was exposed nakedly to a cruel world.
Google can do a much better job of making clear what is out of scope. When I first saw this headline I was also surprised and outraged.
Looking more closely, this is just about distribution, probably in terms of "content hosting". It doesn't target individuals or families storing whatever they want for themselves.
This made more sense in that context.
For example, if I create a fake video urging people to vote illegally, or at the wrong time, and I am sharing it through Google Drive with many, many people seeing it, Google wants a policy to prevent that sharing.
Otherwise either its hands are tied or it's just doing arbitrary things, which is far more authoritarian.
If a document is shared and accessed by thousands of people, it makes plausible sense that Google might not want to essentially be a hosting service if that content is leading to real-world harm.
...but this has not been made explicit enough for such a sensitive issue, with real speech and free expression concerns. (and there are real concerns as always about who decides what is misinformation)
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances
What has that got to do with the private company Google?
Simple solution: Google can delete whatever it wants for being "misleading content," but if they ever get it wrong, even in the slightest, 10% of Google's wealth is transferred to the aggrieved. If Google is so confident in its ability to be the sole arbiter of truth for humanity, they should have zero issue agreeing to this, because it will be zero risk.
Of course they will never agree to such a condition because they know they aren't capable of even figuring out how to make products that the market wants much of the time. The Google Graveyard is proof of this. They sure as hell aren't capable of knowing more about virology than the virologists of the world who still are trying to figure things out.
Tech companies wanting to be in control of human ability to communicate with each other is simply a power play. Combat it with risk and they'll have to back down or suffer the consequences when they make a mistake (which they will, often).
You'll find that not many people are in favor of Google being so dominant and thus able to limit the spread of information (or misinformation), but you'll find almost everyone is against actually taking some of Google's wealth for doing so, since they're still a private company and can do whatever they want with their services, even if that's deleting information or removing files with the letter 'x' in their file name.
I suppose limiting sharing might be justified. But limiting access and/or deleting own files is outrageous.
For that matter, collections of misleading content can have legitimate purposes. Such as research.
Now I'm seriously considering dropping my Drive subscription.
Update: I got a lot of upvotes, but tbh I may have misunderstood the new policy, which does seem to (maybe?) be limited to distribution of misleading content. I do think Google has a legitimate interest in regulating the use of its services as a means of distributing information. I'm not sure where the line is, though. For example, if I sent an email to a friend in which I said something that isn't true, I think I would rightly be upset if Google refused to deliver the email. OTOH, if I was sending this to large numbers of people regularly, as part of some kind of misinformation operation, then maybe blocking me would be legitimate. Complicated.
A recent high-profile example is the lab leak "conspiracy theory". Facebook blocked discussion of the possibility that Covid-19 started from a lab leak, but now scientists are seriously considering the issue.
Imagine if Google had this policy last year, and a scientist posted a Google Slides presentation about the merits of the lab theory (and imagine it was banned under the misleading "health" information umbrella) ....
Disclaimer: I use dropbox (mostly since they have supported linux since when I started using the service, and when I started using the service, the syncing was much better than google drive) But now I have another reason!
Something similar happening now is the Ivermectin discussion, which right now sounds as crazy to some as the lab leak did to some a few months ago. So we have learned nothing from the lab leak debacle.
Rarely does anyone have a problem presenting a majority position as the truth with no evidence.
We should be a little more forgiving of minority positions and a little less forgiving of majority positions, so that the truth can be determined more readily.
A reminder, at the start of the pandemic, the majority position was "Masks are bad". The "Masks are good" minority position had no evidence to back it up besides common sense, yet it is only through the efforts of those people fighting the conventional wisdom that the truth came out.
Google will gladly distribute public PDFs that say the elimination of Jews "must necessarily be a bloody process," presented as truth with no evidence.
Why do they get to be the arbiter of what information deserves presentation as truth with no evidence, when they allow Mein Kampf to be presented as such?
This is all assuming that setting a file to public in Google Drive is considered "presenting it as the truth with no evidence."
I agree with this to some extent, how direct should the harm be before you stop distributing it?
I don't think there is an easy, obvious line there—direct calls to harm (murder these people) shouldn't be accepted, and I believe even beyond that there should be limits, but find that line and enforcing that is very hard.
You are downvoted, but this is exactly what is happening in the Arabic sphere of discussion, and nobody really cares because Google, like most of the discussion here, is very America-centric. This is not even about truth but rather about calls for murder, which are against the law in many countries.
In some way though, I am happy that this "diversity of thought" exists because that's the only thing that can stop companies like Google from becoming the arbiter of truth. It is just sad that a lot of those countries are also ok with calls for murder and violence.
That is a rather curious argument. Google's management come from Silicon Valley and statistically will have a left bias. It is quite likely that a substantial number of them genuinely believe that, say, attending a right-wing rally is similar to presenting Mein Kampf as truth with no evidence. If they start censoring one it is pretty likely they will start censoring the other.
Also, if they use controversy as their metric instead, they're going to start squashing minority views. It is actually much harder than you might think to decide what is and isn't obviously true, particularly when there is a team of moderators involved (or team of people calibrating the robots which is maybe how Google does it).
Maybe it should be the scientists and journalists exploring that possibility, not random Facebook conspiracy nuts? Once a conspiracy is proven true, then FB can let it ride.
First of all it isn't "proven true". It's just not proven false. Which has been the case the entire time.
One of the reasons it was assumed false was that the scientific community bought into the original letter from Peter Daszak. It turns out Daszak has conflicts of interest regarding WIV and GoF research that, in theory, could have caused the pandemic.
It is incredibly naive to allow people to be the gatekeepers of information that is inconvenient for them.
The same scientists who lied about mask effectiveness at the start of the pandemic which may have contributed to thousands of additional cases and deaths?
The same journalists who said lab leak theory was "racist" (NYT's chief COVID editor made this claim less than a month ago)?
The same organizations who said COVID was not transmissible human-to-human in late January, following orders from the CCP (the WHO made this claim repeatedly, without independently verifying it)?
I remember the original theory going round that was getting removed being that it was a lab leak of a bioweapon not the current theory that it was more normal 'gain of function' research that got out. Those are two very different accusations.
If we are honest, however, these are theories, however "unfounded" they might be...
Science is a continual pursuit of what is accurate, and even when we have a scientific proof, it's often not permanent.
It was a theory that this came from a lab. It was perhaps another theory that the lab was working on this perhaps in part as a bioweapon. We can get into the politics of why China in general can't really be trusted and so who the heck knows... and that of course leads to skepticism and theories such as these.
These theories may or may not be true the same way "covid-19" came from a wet market may or may not be true. It's okay to say we don't know, here are some theories... That's science.
We should try to get to the bottom of it so that whatever happened is exposed and we can learn from this.
A third party getting in the way and deciding based on who knows what, what theories are okay to talk about and what theories are not okay to talk about is a horrible idea.
Perhaps Google is working on a multi-billion dollar deal with China to put Google in all of the phones made in China? You better believe that if China told Google, "Hey this theory is false (trust us), you better take down anything that talks about it"... Google would be very pressured to comply.
How could we tell the difference without knowing the motivations of the people responsible for it? Has their conduct been likely to inspire confidence that they were on the up-and-up? Gain-of-function research is used for offensive as well as defensive purposes; it is inherently dual-use, and it can be hard to draw a line between the two. We aren't in a position to be able to exclude the possibility that this research was being carried out with the intention of being transferred to pure weapons development once suitable candidate pathogens had been developed.
They are exactly the same accusation, with the only difference between them whether that researcher is your friend or your enemy. It is, much like security research, easy to predict but hard to be certain which is which until it's been deployed.
I'm not defending Google's policy here. But imagine if, for every lab leak theory that Google falsely blocked, it also blocked many dangerous conspiracy theories before they got out of hand. I think it boils down to numbers, and I think Google is probably in the best position to have the most accurate numbers; the question is whether they are also making a decision that is consistent with those numbers.
There's a big difference between earnest scientific inquisitiveness and, at a minimum, conspiracy nuts spreading racially biased clickbait, or at a maximum, coordinated political propaganda to influence an election.
though covid is interesting in other ways too. like vaccines.
I'm ok with allowing someone to think whatever crazy bs they want.
But I'm also fine with removing the 'megaphone' for words that are dangerous to our health. You're allowed to stand on a soap box in a public square, but you're not guaranteed to have your speech boosted on the giant billboard screens above.
The dilemmas we are facing here are inherent to how these platforms are setup and optimized.
The algo is designed to boost the craziness so it infects others; therefore stopping the spread of this danger tends to stop the individual speech, given how normal posts/content work.
Shadowbanning seems like the answer, and that seems to be what Google is doing here. Though not allowing for instance a public doc url isn't in the shadows.
The key start of the policy is "Do not distribute," which is why I don't understand why people are up in arms about it.
it's a totally different thing if Google goes beyond this and just deletes private content from some algo filter.
Lab leak theory has always been taken seriously but it was also largely hijacked by racists at the time who were much louder than the scientists. If memory serves.
Many of the components of the abuse page talk about "distribution", but the one about hate speech is simply "Do not engage in hate speech".
The way I read that, they could conceivably boot you just for having a private diary of racist rants. I don't know if they intend it that way; it's also possible that they'd interpret "speech" as meaning "speech where somebody else can hear you".
But if you were looking for more reason to drop your Drive subscription, that might be it.
The Terms of Service are written broadly and that's on purpose so that Google can take the liberty of interpreting things however they please. Ultimately, it comes down to the culture of the team that deals with content scanning and take-down requests. I'm willing to bet money that it is staffed by Progressive Bay-Area folks and the implications should be self evident.
Even more problematic is the difficulty current algorithms have in distinguishing "engaging" in a certain class of speech acts from studying said class. This requires a subtlety of parsing humans are (sometimes) capable of, but machines still struggle with.
The headline is misleading. It makes it sound like anything that's misleading is banned, which is false. They are banning very specific things that are verifiably false, like spreading fake election rules, spreading false election dates, etc.
These things have happened:
>The Trump campaign has sent Facebook advertisements to tens of thousands of voters in swing states, erroneously telling them it was election day after the social media group’s ad blockers failed to detect messages that violated its rules on misinformation
Is the election date a matter of opinion? I don't think that lying about what day is election day is protected speech; it's malicious and attacks the very foundation of democracy.
Chiming in on your update. It's my understanding as well that it's only related to distribution of content. There's nothing keeping you from storing misinformation on your personal Drive, but if you're linking to it from Facebook and using it to, say, sway a political election, cutting off others' access to it makes some sense.
I looked closely but I cannot find anything that says that paying customers are excluded. So I would assume it is included.
It makes sense, though, to enforce one single policy: otherwise, just becoming a paying customer would be a loophole to circumvent phishing etc policies.
I guess the crux here is when, exactly, they start scanning the content. Is that perhaps the moment the content is first shared?
Their policies for Workspace accounts have tight restrictions on what they are allowed to use your data for. So they shouldn't be "datamining absolutely anything that they can get their hands on".
There is a difference between "datamining" and scanning. They could very well be scanning files that were shared by Workspace accounts; they just wouldn't be allowed to use the verdict for anything other than flagging the content as a ToS violation.
Maybe they will compensate you with discounts on some of their upcoming offerings like Google Re-Education Camp or Google Kulak Liquidator. Kids, are your parents tainted with dangerous Bourgeois sentiments? Get a free $10 Amazon Gift Card for every tip you make on Google Informant.
I’m tempted to make a bunch of honeypot accounts just to see how Orwellian and/or gameable their system is.
What will trigger it? Fan pages to Trump/Qanon? Antivaxxer propaganda? The German federal police paper on how to clandestinely manufacture heroin? Or maybe just a 20mb text file with the “卐” character?!
If you do they will figure out your real identity and will terminate your real account without warning. I highly suggest you move everything out before that.
why consider it when you can do it? Behavior can only change when there's a massive drop in MAU and revenue, then the product team will consider backtracking
Because it would be a major inconvenience for me, that's why.
Update:
I'm getting a lot of criticism here.
If your aim is to try to get me to actually make the switch, then the approach you are taking is counter-productive. Instead, I guess you are doing what most people do when faced with something they disapprove of: they do the thing that is easiest and most satisfying, which is to criticize or punish the other person, and to pat themselves on the back for being better.
But criticizing people just isn't an effective way to change people's minds or behavior.
What would work in this case? Showing them a path forward.
In this case, I presented a real problem, that it would be a major inconvenience to switch. That is the truth. I don't live in some idealized fantasy world, where I can take actions for free.
Bang on. I think one of the best things we can do as engineers is help build alternatives to these authoritarian companies' tools, and strive to make the on-ramp as easy as possible. As it stands right now, it is a lot of effort to use Libre tools compared to the poison-pill corporate ones.
The second best thing we can do is support projects and be vigilant against the creep of corporate interests into FOSS.
I dropped Google Drive late last year and it wasn't too bad. Setup another service, made a directory junction from GoogleDrive folder to new folder, profit.
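The "directory junction" trick above is a Windows command (`mklink /J`), redirecting the old Google Drive folder to the new service's folder so existing workflows keep working. A rough sketch of the same idea, with hypothetical `/tmp` paths for illustration; on POSIX systems the analogue is a symlink:

```shell
# On Windows (the setup the comment describes), roughly:
#   mklink /J "%USERPROFILE%\Google Drive" "%USERPROFILE%\NewDrive"
# POSIX equivalent, using throwaway paths:
mkdir -p /tmp/newdrive
ln -sfn /tmp/newdrive /tmp/googledrive
# Anything written through the old path lands in the new service's folder:
echo "hello" > /tmp/googledrive/note.txt
cat /tmp/newdrive/note.txt
```

One caveat: a junction or symlink only redirects local paths; anything that resolved the real path (or the old sync client itself) needs to be pointed at the new folder explicitly.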
ah, so you're just talking the talk, without preparing to walk the walk... things don't improve because you just wrote a comment on HN, it has to materially impact the business for them to notice the disagreement with the policy.
Most people are like you though, too lazy to do anything and they'll subject themselves to whatever big G says is good for them.
Beware of extremist/polarizing arguments like this one, they're mostly manipulative.
You can voice your disagreement, and still use the platform as long as they don't cross the line. That doesn't mean you haven't prepared a plan for degoogling your life and accept the inconveniences.
> I suppose limiting sharing might be justified. But limiting access and/or deleting own files is outrageous.
It's definitely not just a hypothetical risk either. In the last month, Facebook blocked me from distributing my blog post on Django best practices because they claim it violates their community standards. It's literally a post about how to structure API code, but now they're claiming that it's advocating for genocide or something and there's apparently no one to appeal to or any way to reverse this.
Twice in the last week a comment of mine got blocked on Facebook for violating policies. In the first case, I posted a link to an EPA website and a screenshot of a graph from it, showing a reduction in smog over the last 10 years compared to 1987. In the second, I pasted a screenshot of something from the CDC's COVID tracking site and a link to a British research paper on the efficacy of the vaccines against the delta variant. In the second case, my link was somehow broken, which might have contributed to the problem?
Throwing in my comment like a penny down a wishing well:
Google recently turned off my gmail because my google drive quota was exceeded. They were "protecting" me from myself, by denying me the ability to buy more storage. They refuse to accept any payment method until I deliver a passport photo to them.
The problems here go a bit beyond them being the steward of information or arbiter of right vs wrong. They'll deny you service for any or no reason, and leave no recourse when their algorithm malfunctions.
That said, they can ban distribution of misleading content. It's their right, and no one can dispute that. I'm not sure what people hope to accomplish by expressing outrage over this. We wouldn't want providers not to be able to decide what content to distribute.
They refuse to accept any payment method until I deliver a passport photo to them.
That seems like something worth supporting with screenshots and posting as its own thread. I've never heard of anyone needing a passport photo for a credit card transaction, at least not in the US.
It's passport photo or government-issued ID (mine is expired), which rather narrows down the options. They also want a photo of some bill mailed to my home address with my name on it, and we've been in the process of moving for several months. I have a garbage utility bill coming to our new place, just for big G, so hopefully that'll satisfy them.
The most annoying part was that when they turned off my billing, I had been paying for 200GB of GDrive space. Boop, it went to 15GB. Turns out it's shared with gmail, which means I had now exceeded my gmail quota by around 1200%.
But my gmail account kept functioning. They periodically said "You know, you really should purchase more storage..." but the service kept working.
Till Friday. Boop, all my emails shut off. Noticed it on Monday. Had to purge 185GB down to <15GB on my gdrive, and managed to just barely do it.
I dunno. I don't really feel like causing problems. Yeah it sucks, but I also get it, and I'm more of a "go with the flow" kind of person, and "periodically whine about it in a comment." But I'll go submit the damn passport photo and get it over with once our new house is set up. It was more of a mild annoyance than a disaster.
>That said, they can ban distribution of misleading content. It's their right, and no one can dispute that. I'm not sure what people hope to accomplish by expressing outrage over this.
Since when is encouraging mega-corporations to act morally a bad thing? Or some sort of waste-of-breath, as you imply here.
The problem we actually need to solve is how to divorce the concept of free speech from private companies. For some reason, it's almost ubiquitously believed that the "Internet" is actually just Twitter, Google, Facebook.
How can we, the tech literate, start to explain to our friends/family and eventually the world, that the Internet is actually an interconnected network of computers that no one person or entity controls?
Honestly, I fear that even among us there are large groups of uninformed people who believe companies like Google and Facebook are more than just leaf nodes on the graph, and that worries me. If knowledgeable people can be this wrong, how can we expect the uninformed to get this right?
I worry about the future of tech, if the discourse on HN is any indication of how the greater tech community feels. The idea that Google is an integral part of using the Internet to express oneself is not only completely wrong, but actively harmful, and I have no clue what to do about it.
"The Net interprets censorship as damage and routes around it". John Gilmore's statement has seemingly gone out of vogue, but it remains true. Google cannot stop information (good and bad) from spreading, so why do people think they can, and how do we disabuse Americans (and the world) from the notion that Google has that power?
Even if you are tech literate, how do you easily find an alternative to Facebook and Twitter?
The entire benefit of these platforms derives from the fact that they've accumulated so many users. People use these platforms because everyone else they know is on it.
I personally haven't used either for several years but I've had to "sacrifice" my online social presence and miss out on updates from my circle as a result. I put "sacrifice" in quotes because I think the net result is positive and though I miss having some updates, I overall feel quality of life is better without a public online presence.
The problem isn't that one company has made its own decision on what to ban. The problem is that the White House is literally compiling a list and telling all the social media companies who to ban. This is censorship by proxy.
This is not just Google Drive, this is a policy across many of their services (Drive, Docs, Sheets, Slides, Forms, and new Sites). Holy smokes... I think it's time to finally dump Google services for me.
It's a slippery slope, but not for the reasons you might first think.
It's because this is turning a platform into a publisher. Soon more and more governments will be demanding that publishers enforce editorial standards. There seem to be no platforms these days that say they are just that. Even Cloudflare, a piece of background infrastructure, is happily editorialising certain things.
With this idea, try not to get sucked in to the other argument. The issue is not that the list of certain things is going to increase, the issue is that the impartial nature of the internet is disappearing.
There's a fair bit of evidence from some public hearings that the major players were given an ultimatum by both Republicans and Democrats: start being publishers, relinquish control to the state, or be split up under anti-monopoly and anti-competition laws.
>It's because this is turning a platform into a publisher.
The "platform vs. publisher" dichotomy that crops up in these conversations is propaganda. Exercising editorial control over content does not create a distinction between one or the other, or convert one into the other. The distinction doesn't exist as a matter of law, legal right or obligation.
Every web site and service, from tiny phpbb forums to FAANG silos, has always had the right to choose what does and does not appear on their site, and the discretion to "editorialize" as they see fit.
And the internet has never been impartial. Individual sites can be run impartially, but that's a choice made by the site owners - not an obligation or legal duty. Other site owners have every right to run their site under other terms. If you don't like the terms under which a service is offered, you can use another service.
"No provider or user of an interactive computer service shall be treated as the PUBLISHER or speaker of any information provided by another information content provider."
So yes. In the actual law, there is something called a publisher, and the law is distinguishing the website from being something as opposed to a publisher.
So yes, there absolutely is a "distinction" being made here, regarding something that the law itself calls a publisher.
My claim was not that publishers didn't exist, but that the commonly presented distinction between platform and publisher - that a "platform" cannot moderate content beyond strict legality, or else it must be considered a "publisher," lose Section 230 protection and take full legal responsibility for all content on its site - does not exist. Google's editorial policies "turning them from a platform into a publisher" is not a thing that actually happens.
> that a "platform" cannot moderate content beyond strict legality
Nobody in this thread said anything about this being currently illegal. Instead the claim was about a platform acting more like a publisher, in practice.
From a de facto perspective, the things that people often call platforms act very differently from publishers in practice.
The person you were responding to is saying that this change, from the previous status quo of platforms acting neutrally, is a problem, but didn't bring up anything to do with the law.
The fact that it is legal to become less neutral is in fact precisely the issue!
For example, the statement "the issue is that the impartial nature of the internet is disappearing" is a description of how these entities act in practice; it has nothing to do with the law.
> Google's editorial policies "turning them from a platform into a publisher" is not a thing that actually happens.
Yes it is a thing that is happening. It just has nothing to do with the stuff you brought up. In the past, Google acted differently. It has nothing to do with it being legal to act differently.
The actually more interesting thing, though, is how this will affect future laws.
In reality, colloquially-called platforms acting more like colloquially-called publishers could very much cause changes in future laws.
>Nobody in this thread said anything about this being currently illegal.
And neither did I. My comment obviously referenced legality in the context of what sort of content "publishers" versus "platforms" are considered able to moderate - strictly legal content versus legal content which offends some otherwise arbitrary guidelines.
This is the second time you've misconstrued my comment, so I'm going to find a better use of my time now. Good day.
You are misconstruing the other person's comment is the point. They were talking about how laws might change, and the consequences of how platforms acting more like what are commonly called publishers might be.
And platforms acting more like publishers, in the colloquial sense, absolutely is a thing, and it has nothing to do with the law.
You called it propaganda, when there is actually a point to be made here.
The old assumptions of the internet and how it works were wrong. The declaration of freedom was naive.
The simple model of how “free speech” leads to wiser outcomes turned out to be wrong.
What the heck - this combination may even be a Great Filter.
1) The rapid deployment of near-species scale information networks.
2) Information losing neutrality because any species (aside from maybe a hive mind) adds spin and polarity to data
3) Unprepared economic markets that end up rewarding polarizing (“engaging”) content, draining societal-level attention and mental resources in a way never experienced before.
I believe the recent British Govt paper argues that social media firms need to bear a duty of care to users. This is a better place to start.
Removing misinformation which has an obvious known fingerprint is a good start.
I read regular stories on HN about employee activism on certain types of popular issues around gender and race, involving petitions and publicity. Spotify, Apple, Google. Yet nothing about the march towards totalitarianism and information control - always with good intentions and to our benefit - promulgated with stories such as this.
Not a peep from our young Silicon Valley activists. Do we still teach history at school? I'm talking about history of many countries. No fear whatsoever of information control and government censorship. I'm baffled and saddened.
Because the goal isn't to deconstruct those systems of power (as it probably _should_), but rather to put someone else in the center.
I once heard a Silicon Valley VC say on a podcast: "If you hear someone utter the term 'equity,' then run for the hills. Because it's really a power grab." (Obviously not talking about stock compensation...)
Not all or even most. Tech platform censorship is just ranked lower on their list of issues/grievances. How do you think activists should be spending their time?
>Social and economic justice are paramount to achieving just outcomes. We will prioritize the needs of the worst off. Neutrality never helps the victim.
That doesn't scream "end platform censorship" to me.
The ideology of young activists basically boils down to power. You either have it, or you don't. Foucault in a nutshell. Abstract principles that should apply equally to everyone are just a colonialist legacy, or something.
>Do we still teach history at school? I'm talking about history of many countries. No fear whatsoever of information control and government censorship. I'm baffled and saddened.
I agree. We don't spend enough time teaching about the history of free speech in other countries like Germany. And no, I don't only mean in 1930s and 1940s Germany. I also mean the Germany of today. They, along with much of Europe, have laws outlawing some misinformation such as Holocaust denial and their societies haven't collapsed into totalitarianism.
Some restrictions on speech are truly dangerous. Some aren't. It is important to have the historical context to help know which one we are discussing.
It looks like there's a lot more nuance in there than criticizing Islam. Trying to convince people that all Muslims are pedophiles is a bit different from calling Mohammad a pedophile.
You are equating criticizing a religion with spreading hate speech. Islam or any religion is not "off the table in terms of discussion". Many European countries, including Germany, simply don't want people crossing the lines into speech that can incite people to harm others.
> simply don't want people crossing the lines into speech that can incite people to harm others.
I think that's very, very far from simple, since it necessarily requires the government to slowly align and converge with those people, so that they are never offended and never cause harm.
> since it necessarily requires the government slowly align, and converge, with those people, so they they are never offended and never cause harm.
"Those people" are not the ones who dictate whether something is hate speech. Whether someone is offended is not a factor in hate speech laws. When I said "incite people to harm others" I am talking about physical harm or violence. It is perfectly legal to offend people in Germany.
General offense is usually only prosecuted if the offended party wants it (though the prosecutor has discretion to proceed without). Harsher punishments apply if the offended party is a politician.
I don't know what your point is here. You are citing particular laws in which a party is likely to be offended like public sex, defamation, and desecration of a flag. But the crime isn't that offense was caused. That offense is the byproduct of the actual crime.
Plus many of those are illegal in other countries too. You can't have public sex or defame people in the US either, but no one would say it is illegal to offend people.
FYI Austria and Germany haven't been a unified country in three quarters of a century.
My original comment was about laws against misinformation in Germany. That doesn't mean I endorse or need to defend all free speech laws in all of Europe.
Who judges what's disinformation? In the case of holocaust denial, you have historians who can easily disprove the deniers. What the hell is Google's claim to credentials?
In terms of harm measurement, it's not as straightforward as you would like. Are we judging feelings or violence that results or...?
Next, end goal. You want to squash every individual who denies the Holocaust occurred or downplays it? Good luck, you'll have your hands full in countries like Pakistan and Saudi Arabia and Iran.
I don't want to be a jerk, but if you want to play devil's advocate, you can do the research yourself. I am comfortable believing the old "those who don't know history are doomed to repeat it" adage without having a peer reviewed study on it.
I don’t take it as being a jerk, I’m just trying to question my assumptions in being intellectually rigorous. I’m failing to see how denying an event leads to repeating it, or the inverse, how accepting a historical event shields you from repeating it.
Genocide existed before Hitler, and I’m assuming Hitler knew about some of it, so why didn’t it work there?
But the Turkish government and its supporters are allowed to deny and downplay the Armenian genocide to their heart's content. The European Court of Human Rights even decided (Perinçek v. Switzerland) that people have a free speech right to deny the Armenian genocide, yet it has also held that people don't have a free speech right to deny the Holocaust (Pastörs v. Germany).
Both genocides happened, both genocides were awful, but it seems like different rules apply to denying different genocides, and that those rules are based on political calculations rather than defensible principle.
Yes, the laws in Turkey, Switzerland, and Germany are not going to be identical. And as I said elsewhere in this thread:
"My original comment was about laws against misinformation in Germany. That doesn't mean I endorse or need to defend all free speech laws in all of Europe."
Germany passes laws to ban Holocaust denial. The European Court of Human Rights upholds them as compatible with human rights.
Switzerland passes laws to ban both Holocaust denial and Armenian Genocide denial. The European Court of Human Rights rules the second aspect of the law is a human rights violation.
My complaint isn't about either the laws of Germany or of Switzerland, it is about the rulings of the European Court of Human Rights, who have jurisdiction across almost all of Europe. The ECHR believes one has a free speech right to deny some genocides but not others.
The European Court of Human Rights doesn't rule on abstract ideas. It rules on particular cases. They didn't say denying the Armenian Genocide is okay while denying the Holocaust isn't. They ruled on two individual cases that seem to be contradictory when summarized in a single sentence. That doesn't mean they truly are contradictory once examined in depth.
You haven't explained how they aren't contradictory. You've just claimed that, if one examines them in depth, they will be found not to be contradictory. Have you examined them in depth yourself?
I'm not sure why the burden is on me, but here is the first Google result when plugging in both case names[1]:
>The Court began by noting the decisions in its previous cases concerning the denial of the Holocaust and other statements relating to Nazi crimes. Previous statements have been held to be inadmissible, either as being ill-founded under Article 10(2) when read with Article 17 (citing Williamson v. Germany ECtHR [2019] 64496/17), or otherwise as having incompatible subject matter jurisdiction under Article 17 (Perinçek v. Switzerland ECtHR [2015] 27510/08). The Court reiterated that Article 17 is reserved for extreme cases and on exceptional bases, and should only be used in Article 10 cases if it is “immediately clear” that the statements in issue seek to employ the right to freedom of expression “for ends clearly contrary to the values of the Convention” (para 37) – such as inciting hatred or violence or aiming to destroy the rights and freedoms listed in the Convention. Any case concerning Holocaust denial must be taken on a case-by-case basis and depends on all the circumstances of that particular case, when it comes to the Court’s decision to either apply Article 17 directly to the applicant’s Article 10 application and declare it incompatible with the subject matter jurisdiction of the Court, or instead to apply the general law principles under Article 10 and invoke Article 17 at a later stage as an interpretive aid when examining the necessity of any alleged interference by the domestic courts.
So, quoting a court's justification of its decision, proves the court made the right decision?
Of course the ECHR is going to claim "this case is unique and isn't inconsistent with our rulings in other cases". Just because they claim that doesn't make it true.
>Do you agree there is a reasonable possibility the claim is true?
Sure, it is reasonable that it is politically motivated. It is also reasonable that it isn't. Given the lack of a definitive answer, I chose to give the benefit of the doubt to the ECHR. Either way, I don't understand what point you are trying to make.
Well, my point is this – if you try to justify laws against genocide denial, it is easier to justify them if those laws are consistent, and based on principle rather than politics. If there is a real possibility that we have unprincipled inconsistencies in them, that is a problem for their justification.
Why is the justification impacted by the consistency of certain implementations? Speed limits aren't consistent. You can't go above 60 mph anywhere in Hawaii, but some places in Texas you can go 85 mph. That doesn't mean speed limits aren't justifiable.
Laws are always going to vary from jurisdiction to jurisdiction. Sometimes that difference is based on principle. Sometimes it is based on politics. But even if it is tweaked due to politics, it doesn't invalidate the principle behind the entire law.
When laws treat members of different racial/ethnic/national groups differently, that's a form of discrimination. If there was a genocide against group A, and another against group B, and we say that it is illegal to deny the genocide against group A, but legal to deny the genocide against group B, that is treating group A and B differently. It is going to upset many members of group B, and damage social cohesion. Especially so if the different legal treatment is based on politics rather than principle.
I don't think your case of speed limits is really comparable. Nobody's personal identity is based on a speed limit, those laws – however rational or irrational they may be – are not relevant to personal identity. But genocide denial laws are relevant to personal identity – to racial/ethnic/national/etc identity – since different genocides have different victim groups, and there are people alive today who belong to those victim groups and identify with them, and the victims of a genocide (and their descendants) have a special interest in whether it is legal to deny that genocide.
>If there was a genocide against group A, and another against group B, and we say that it is illegal to deny the genocide against group A, but legal to deny the genocide against group B, that is treating group A and B differently.
That was not the ruling of the court. You are projecting that as the result of the court's decisions, but the court expressly said this was not what their ruling meant. Then you are assuming a motive based off that projection.
You are effectively pointing to one court case in which a woman was found innocent of murder and one in which a man was found guilty of murder and concluding that the system is obviously gender biased against men. My argument is not even necessarily that you are wrong, my argument is that you can't draw this conclusion from these two cases.
Well, I am not the only person who draws the conclusion that the ECHR judgement sets up a morally dubious hierarchy of genocides, in which the memory of the Holocaust is viewed as more worthy of protection than the memory of the Armenian genocide, and in which the threat of antisemitism is taken more seriously than that of anti-Armenianism. Here is a law professor making the exact same criticisms (Professor Uladzislau Belavusau at the University of Amsterdam): https://www.echrblog.com/2015/11/guest-commentary-on-grand-c... and also https://verfassungsblog.de/armenian-genocide-v-holocaust-in-...
Similar criticisms have been voiced by Ariana Macaya (law professor at the University of Costa Rica): http://www.qil-qdi.org/focus-sur-perincek-c-suisse-la-questi... (she writes in French, but if you can't read French, Google Translate does a pretty good job of it)
It is odd to accuse me of "projecting" in my criticism of a court decision, when law professors express essentially the same criticism. Are they projecting too?
I can point to legal experts who have a different conclusion.[1] Your linked experts are not inherently more right than mine.
Would you acknowledge that there is something more incendiary about a member of the German Parliament denying the Holocaust in an official Parliamentary speech compared to a foreign national denying the Armenian genocide in Switzerland where the Armenian population is estimated to be 3k-6k? That is the primary conclusion of the ECHR. There is no justification for extrapolating that out to the idea that every instance of denying the Holocaust is worse than any instance of denying the Armenian Genocide which appears to be your conclusion.
> Laws are always going to vary from jurisdiction to jurisdiction.
GP was talking about rulings from the same jurisdiction.
The crazy amount of mental gymnastics you’re doing to justify denial of a genocide because it’s not your pet genocide is wild.
That isn't how the ECHR works. They do not dictate the laws for all of Europe. They evaluate whether the individual governments are violating someone's human rights. Those countries can obviously still have their own laws on free speech that will vary.
>The crazy amount of mental gymnastics you’re doing to justify denial of a genocide because it’s not your pet genocide wild.
Where did I justify the denial of genocide? This is a wild accusation, especially from someone who wanted evidence that denying genocide was bad.
What’s sad to me is that someone who really should be a hero to everyone in Silicon Valley, the journalist who decided to publish Edward Snowden’s stuff at insane risk to himself, has been warning about censorship forever. (Glenn Greenwald)
Yet now he’s apparently a right wing trump apologist to most people on the left.
That doesn’t change the truth of what a person who’s put a lot on the table is saying.
Censorship always becomes about power. Once you create the tools, the powerful will take them over. You may think that’s a good thing when your side is in power, but that will NEVER be forever.
Snowden (and countless others) are heroes. This new breed of authoritarian leftists does not represent Silicon Valley and tech at large, but rather silences those of us who do still believe in free speech.
I'm petrified about speaking out because I don't want to be labeled as a far-right trump apologist. No, believing gigantic megacorporations shouldn't censor information is not "right-wing".
Please, please do speak out! Otherwise, the only people speaking are the authoritarian leftists and their worst possible opponents, the Trump apologists. People speaking in favor of speech they do not agree with are the most compelling free-speech advocates of all.
Soooo… the WHO initially said the virus wasn’t airborne. They initially said masks don’t work.
What’s your plan of action there? Google censors what the status quo asks them to and then flips and censors the other side the second they change their mind? Does that sound like a good world to you?
How exactly does long-term discussion happen in that case? Since basically by about April 2020 you’ve banned all pro-mask and anti-mask discussion…
Well we could create a new government organization that determines what the current best known truth is ("Department of Truth" say) and Google censors just remove anything that goes against the Department of Truth
Does Google need a government agency to determine what emails are or aren't spam? Why would it need that in this case instead, and do you really think that would be an improvement? This is a bad strawman.
If they're bad at identifying misinformation, then I'll fault them for that. Has Google had a tendency to censor things on the basis of information that later turned out to be true? Your examples are the judgments of groups that aren't Google and judgments that I don't think the groups responsible for pushed for others to ever get censored for.
In the early days of the coronavirus, you couldn't use Google to find information at all because everyone other than the WHO was getting censored. I was using Bing for a while because their censorship was much slower.
But that is precisely the lesson history taught us. Misinformation, fundamentally, cannot be identified. It's easy to say "oh, everyone who believed that was an idiot" when talking about Galileo being thrown in prison for heliocentrism. If he was alive today, we'd be calling him a "far-right conspiracy theorist" or something equally as nasty.
I can't overstate how much I disagree with that being an unambiguous lesson of history. There have been plenty of cases where misinformation was identified and pushed out in favor of better information. But mainly, this is about an individual company deciding whether or not to participate in helping spread (mis)information, not about the government choosing to jail someone. It's terrible that Galileo was jailed and that should not have happened, but I don't think free speech should mean that any specific newspaper was legally obligated to run any articles he wrote.
We shouldn't be trusting (and giving power to) central governing bodies to dictate what counts as a "lie," and scrubbing away inconvenient information.
"I disapprove of what you say, but I will defend to the death your right to say it" - Voltaire
Once you put the structures in place you don’t get to choose what gets censored. You’re arming a terrible weapon that will already be used against you on the pretext that it will only be used for these two things - and it won’t be only used for those things.
If there's one thing the history of authoritarianism has taught us, it's that the invasion of privacy will not be contained. Invasion of Privacy breaks free, it expands to new territories, and crashes through barriers painfully, maybe even dangerously, but, uh, well, there it is. ...
Authoritarianism will find a way.
- Ian Malcolm, after they raided his house for publishing the memoirs of his stay at Jurassic Park to Google Docs
I highly recommend people look at Cryptomator[1]. It encrypts all your data on your Google Drive or Dropbox etc., but works normally on your local machine. This lets you use these services while blocking the provider from being able to view your unencrypted data. Also, it's free and open source!
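The core idea — encrypt on the client so the provider only ever stores ciphertext — can be sketched with a stdlib-only toy. To be clear, this hash-based XOR keystream is NOT real cryptography and is not Cryptomator's actual format (which uses vetted AES-based primitives); it only illustrates the data flow of encrypt-before-sync:

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy pseudorandom stream: hash key + nonce + a block counter.
    # Real tools use AES here; this is illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Fresh random nonce per file, prepended so decrypt can recover it.
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

# Only the output of encrypt() ever gets written into the synced folder;
# the key stays on your machine, so the cloud provider sees noise.
```

The point of the design is that scanning-at-share-time (or any server-side scanning) has nothing meaningful to scan: the provider stores and syncs opaque blobs, and decryption only ever happens locally.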
I keep hoping to see a fully end-to-end encrypted, peer-to-peer, uncensorable communications channel catch on... but if it hasn't caught on yet, I don't think it's ever going to. Too many people believe that "if you don't have anything to hide you don't have anything to fear" - so they figure if it looks like they're hiding something, they should have reason to fear.
For better or for worse: the cloud was always somebody else's computer.
Honestly I think it's pretty silly that we've all become so complacent with the cloud that we pretend that we're not guests, and that we own the place.
In 2003 the US Government and media collaborated to start the Iraq War based on false reporting. Tens of thousands of people died. Today, the notion that Saddam Hussein had no weapons of mass destruction would be suppressed as "misinformation" because it contradicts the official narrative.
Amazing they have the gall to think they can still be the arbiter of truth, considering that up until recently YouTube was banning the "conspiracy theory" that the recent pandemic may have originated from a Wuhan lab... Something that now all the experts are saying may actually be true.
You don't have to be an arbiter of all truth to flag some statements as dangerously untrue. Just because truth is sometimes hard to determine doesn't mean that we have to give up on there being an objective reality
"Dangerously untrue" is entirely subjective. NSA director testified that Snowden revelations were untrue. Snowden revelations cost US credibility, and probably has hampered our ability to "protect the globe". Does that make Snowden's revelations dangerously untrue?
Is there a specific legal test you could propose to distinguish actually 'dangerously untrue' information from info which just threatens existing power structures?
I am so tired of seeing this crap. Here is what happened:
Act I: there was no solid evidence as to where COVID originated. There was speculation of various kinds ranging from transmission from a bat cave via wet markets, to lab leak at a Wuhan lab, to it being a bio weapon. Nobody had solid evidence though people started digging.
Act II: a group of pundits and conservative political operatives who were already known to be liars and blowhards latched onto the lab leak theory. It just so happened that it was politically advantageous for the GOP to push anti-Chinese sentiment at the time and a lab leak theory would point the finger directly at the Chinese government. They presented no evidence and had no evidence. They started spreading this theory along with various online groups all known for spreading disinformation. The clear undertone of this message was a call for violence against Asian Americans.
Act III: in reaction to the calls for violence tech companies started curbing spread of this message. Remember some of the other things these same people were saying: that COVID was fake and invented by the US government to keep us indoors while they installed 5G towers everywhere to control our minds; that staying home and wearing masks was designed to weaken our immune systems to prepare for some kind of bio weapon attack; that Bill Gates was using vaccines to implant trackers in every arm of every individual around the world; that the mRNA vaccines are designed to let Pfizer and Moderna copyright or trademark your DNA such that they own you and you become their slave; that nobody actually was dying from COVID and this was a massive coverup designed to make Trump look bad.
Act IV: evidence had emerged that the lab leak theory might have credibility. The tech companies lifted the filtering efforts.
---
Note that this is in no way different than if I tell you that lizard people from Mars run the US government and you tell me to shut the fuck up. I might be right, and maybe evidence later shows up that lizard people from Mars do in fact run the US government, but since I have no evidence at the moment, this is just bullshit at best and a call for insurrection at worst. Even a broken clock is right twice a day, and when liars say something there is less than neutral reason to trust what they say. When mostly liars are frothing at the mouth spreading a theory, well, it sure walks, talks, and sounds like a conspiracy theory. Had the people spreading the lab leak theory initially taken the time to do proper research and present any shred of evidence, then maybe it would have been treated differently. But as it was, they had no credibility to begin with, so is it that surprising that what they had to say was treated as lies, especially given their clear conflict of interest?
And yes, I am aware of individual incidents of various investigators being hampered by their higher-ups from looking into the lab leak theory. I read those stories with nuance, and what I gleaned is that it was (a) partly incompetence and (b) partly a reaction to the bullshit that the talking heads on TV were spreading. Maybe what we should focus on is holding the talking heads on TV to some standard of reality so these things won't happen, instead of crying "censorship!" when someone calls for violence against an ethnic group based on an unsubstantiated (at least at the time, and still unproven) theory.
Act II is grossly misrepresented. It wasn't just the GOP and a lot of evidence was presented. In fact, there hasn't been a whole lot of new evidence presented since then.
Saying "all the experts" think it may actually be true is a big stretch. It's far more accurate to say that the experts haven't ruled it out. But they still think natural origin is far more likely[1].
He should not have used the word all, but this feels like a really pedantic point to me. The main thing is that there is no consensus that it didn't get created that way.
The post said "all the experts are saying may actually be true". The "may" (not "is"!) there is pretty key in making the meaning a lot closer to your (B) than anything like your (A)... Did that comment get edited after yours? Because it seems like you are arguing against a strawman, not what was actually said.
But that is not the difference here. The original post said "Something [meaning the lab leak theory] all the experts are saying may actually be true". This is somewhere in between your A and B and at least to me sounds closer to B than A, as "may actually be true" is more of an acknowledgment of possibility than an assertion of probability. It's definitely not a statement of certainty.
But the statement in question was not "all experts agree that covid comes from a wuhan lab", it is "all experts agree that covid could come from a wuhan lab". That's still unlikely to be 100% accurate, but your version A is a complete misrepresentation.
Still. There is a difference between saying something hasn't been ruled out and saying that it may be true. To me, "may be true" implies a decent (> 10% chance) that it is true. My impression is that most experts do not think it's true. They just don't have enough evidence to definitively rule it out.
We are talking about a one-off event that happened in a totalitarian country with a huge penchant for purging undesirable information.
Even experts are on shaky ground when almost all primary information sources are controlled by a non-cooperative party. The only thing concerning the origin of the epidemic that is beyond reasonable doubt is the genetic sequence of the virus. We do not even know for sure who the first COVID-19 patient really was, or when.
Of course it is hard to speak with confidence in such a situation.
> I spoke to Peter Daszak, president of the EcoHealth Alliance
This guy became a little controversial [1] after a FOIA request turned up emails in which he appears to be conspiring with colleagues to manipulate public perception:
> “you, me and him should not sign this statement, so it has some distance from us and therefore doesn’t work in a counterproductive way.” Daszak added, “We’ll then put it out in a way that doesn’t link it back to our collaboration so we maximize an independent voice.”
> Baric agreed, writing back, “Otherwise it looks self-serving and we lose impact.”
"Debunk" is becoming quite the loaded term. For one thing it's being used definitively despite the fact that there are still ongoing debates. The other thing is that it implies a sort of finality. New evidence or the results of an investigation can pop up anytime.
Seems like the inevitable result is decision fatigue, where people decide, in the face of 2 irreconcilably polarized viewpoints that it's not worth the emotional stress of all the information overload to try to understand which is correct and simply give up.
A few months ago, it was forbidden to think about. Now it’s considered a real possibility by the mainstream. Next year, it will probably be considered irrefutable.
Gain of function isn't a conspiracy. It's research methodology.
We still don't know for a fact what happened. Lab leak seems incredibly plausible, but we need more data to understand.
If lab leak is what happened, then this is likely a case of best intentions that went horribly awry. Your response is far too extreme and discounts the failure modes that may have been more likely.
Researchers in the US wanted to conduct gain of function research but couldn't due to the legislative environment.
China has a ton of novel coronaviruses in local wildlife reservoirs that do cause disease and can evolve to impact humans. These are worth studying.
An arrangement could have been made to study these viruses with research objectives laid out by Western scientists. That's not bioweapons research. That's basic science.
The Wuhan lab may not have been equipped with the same safety protocols, enabling the virus to escape. Here's something we'd still need to find out.
What we need to look at is the cause of failure and prevent it from happening again. If rules were broken, then a handful of individuals may be responsible for unleashing this. (Overzealous Western researchers and Chinese lab personnel.)
This is geopolitically complicated and all parties involved are trying to save face. China and the US included. Not to mention every government that was slow to act in stopping the spread.
This is maybe human error. Lots of human error. If this is the case, it's historically notable as probably one of the greatest mistakes in human history. It cost millions of lives.
Through the same lens, it's also interesting to see all of the positive changes. RNA vaccines, remote work, supply chain discoveries, etc.
It’s possible that there were great intentions all around and the COVID leak was a horrible accident.
It’s also possible that a PRC agency or individual decided to take advantage of the situation and leak it to try and stick it to the United States as a geopolitical move.
I hope this isn’t true, but until we have additional evidence it would be impossible to rule this out entirely. Also the PRC needs to show that evidence to the world ASAP.
For some reason the media wants to call everyone that looks at the situation xenophobic and racist, which just leads me to believe there is more to the story.
Why not study the human immune system and how to reinforce it via diet? Like how they studied bone loss decades ago and decided to add vitamin D to milk to increase calcium uptake. Seems to me there are millions of potential pathogens that could be problematic for humans; increasing defenses seems the rational approach, if one's intent is to actually preserve human life.
Why is conspiracy theory in quotes? It was a conspiracy theory, peddled by conspiracy theorists, and laundered into the public discourse by conservative media outlets. One of many dozen such claims about COVID-19. A broken clock is right twice a day (in this case the clock is probably still wrong).
Why pretend that there hasn't been loads of fake harmful covid misinformation over the last year? The conspiracy theorists have not been vindicated, or are you taking hydroxychloroquine?
Youtube is still populated by actual conspiracy Qnuts. Much of the YouTube active discussion is right leaning, look to the comment sections.
Google isn't trying to be the arbiter of truth: it's a combination of doing what China tells them to do and sidestepping the arbiter-of-truth role by removing almost anything related to covid. Simply discussing it can get you demonetized or your video removed.
> Something that now all the experts are saying may actually be true
Since this is blatantly false, I would ban this as misinformation if it happened in a place I moderated. I don't care if it's about China; I care about not enabling misinformation. If people want conspiracy theories, there are friendly forums they could visit, but they want them in serious forums too.
WHO Chief, yesterday, admits ruling out the lab leak theory was premature. It's not blatantly false unless the WHO Chief is propping up conspiracy theories now.
I agree it's among the possibilities, but one should be honest about their likelihood. Right now the consensus is that natural occurrence is the most likely explanation, but there is not solid evidence to discard a lab leak.
They don't have solid evidence to disprove a lab leak. Disproving is a lot harder than proving.
I must concede that your statement wasn't blatantly false, but it uses weasel language to appear more truthful than it is.
>...but one should be honest about their likelihood.
I agree, but I don't trust any central authority (especially the government!) to do that for me.
Educating people & empowering them to make this call is a much more ethical, and fruitful, endeavor.
To loosely paraphrase Mark Twain (I believe?): Just because a toddler can't use a sharp knife doesn't mean that I have to use a butter knife to cut my steak.
> To loosely paraphrase Mark Twain (I believe?): Just because a toddler can't use a sharp knife doesn't mean that I have to use a butter knife to cut my steak.
That's a great quote, only we're the toddler. For all its political failings, the WHO is brutally more knowledgeable about the subject than us, and less biased than our local governments. They're asking China to open up so they can discard/prove a lab leak. If China refuses, the lab leak possibility remains on the table no matter its likelihood.
Weaseling the phrasing, one can give the impression that the WHO turned around and now thinks the lab leak is the most likely source, while they still consider the natural origin most likely.
Again, it can be both, (and IMO chances are it's both). It could have been a sample from nature that leaked out of the lab.
There are no signs that it might be man made. There are no signs that it might have evolved in GoF research. But the WIV had a sample of a close ancestor of this virus; it's not crazy that someone caught the bug and carried it out in that very same city.
I haven't read from them, but as a general thing, GoF research is very directional, so in practice the experiments will pressure a virus to gain a function, but the rest of the functions are usually impacted to a large extent due to the lack of evolutionary pressure.
So you may do selection on virions that target better certain receptors in certain human cells to infect them, and that's useful to know of possible evolutions of a wild virus, but in parallel it might be losing environmental resistance (temperature range, UV light), or maybe damage the expression of some vital protein, or become too pathogenic and die along with the host.
By all means, one could try to perform GoF in live humans to ensure there's no LoF, but that enormously limits the speed of the research, plus it's usually forbidden to experiment on humans these days.
Alexander Lukashenko received 80% of votes and anybody who claims otherwise should have their Gmail account, YouTube videos, Play Store apps, etc. permanently and irreversibly revoked.
Didn't Google refuse to work with US gov, but had no problem helping Chinese colleges and institutions which are directly controlled by or tied heavily to the CCP?
Surprised nobody has mentioned ProtonDrive[0] as a near future candidate for eliminating these types of incidents. Since the data is encrypted, ProtonDrive can't make arbitrary value judgements on your data. For me personally I can't wait for it to arrive so I can wean myself off Google properly. It's the one final thing that I need to replace Google with: Proper Encrypted Cloud Storage.
I have been holding out for protondrive for a few years myself. It’s going to take them time to build out more apps for native integration. I wonder how the performance will be with the focus so heavily on keeping data in Switzerland.
In the meantime I tried out Tresorit which is end to end encrypted and which worked fine for several years but they were acquired by Swiss post whose intentions I don’t understand.
I’m now working on closing Tresorit and adopting boxcryptor. Price is really nice and it can end to end encrypt files in iCloud Drive which proves to be fast for me wherever I use it around the world.
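The reason an end-to-end encrypted host can't make value judgements on your files is that encryption happens on the client, so the server only ever stores opaque bytes. Here is a toy sketch of that idea in Python — deliberately not ProtonDrive's (or Tresorit's or Boxcryptor's) actual cryptography, which uses vetted primitives like AES, but just an illustration using a hash-based keystream:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key + nonce + counter blocks."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt on the client; the host only ever receives nonce + ciphertext."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Reverse the XOR using the same key; only the client holds the key."""
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)              # never leaves the client
blob = encrypt(key, b"my private notes")   # this is all the server sees
assert decrypt(key, blob) == b"my private notes"
```

Since the server holds only `blob` and never `key`, any content scanning on its side has nothing meaningful to scan.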
My guess is that a lot of this is driven by the climate in the USA, so with that in mind I have some examples that I am curious about, as an outside observer.
> It also includes incorrect claims that a political figure or government official has died, been involved in an accident, or is suffering from a sudden serious illness.
1) What is the ruling if I store and share a CNN video of a Don Lemon show in which he and a mental health professional discusses her diagnosis that a political figure, whom she has never met, is mentally ill.
2) Same thing but on Fox News and different political figure
3) Same thing but on Infowars but about Hillary Clinton
> Content that is demonstrably false and could significantly undermine participation or trust in civic or democratic processes.
and
> This includes information about public voting procedures, political candidate eligibility based on age / birthplace, election results,
1) Claims in the media and by political figures that the gubernatorial election in the state of Georgia was fraudulent
2) Claims that a Presidential election was "hacked" by a foreign state
3) Claims that a Presidential election was fraudulent
> Manipulated media: Media that has been technically manipulated or doctored in a way that misleads users and may pose a serious risk of egregious harm.
Again here, dealing in pure objective truth versus media spin and misleading framing:
1) Out-of-context edits suggesting that political figure called neo-nazis "very fine people"
2) A video, absent context, of a fictional reading of a phone call transcript, during an impeachment
3) A statement by a political figure that the COVID-19 virus may have leaked from a laboratory
I will be very surprised if the enforcement does not fall on partisan political lines.
No one is talking about this, but no matter what side you're on, it's terrifying that a group of individuals congregated in Menlo Park, CA (and to a lesser extent a few other offices around the world) has become the arbiter of the world's information. Even if that group is benevolent and has good intentions, it's wrong in principle. These groups of people face no dissent because it gets silenced internally at Google. It's like an unstoppable, unchecked steamroller of ideas that makes the world dance to their tune.
I also do not want FAANGs to spread their work culture, their shitty UI frameworks, their interview process, their politics, their methodology and engineering libraries, their philosophical stance, their morality and ultimately their influence over the rest of the world.
I love most of their products but I will not use them because of their overpowering influence. People in Australia have to deal with shitty SV culture because of these companies. It’s cancerous. It’s dangerous and it’s going to stifle freedom of information, ultimately freedom of political expression and opposition. They’re acting as government proxy but inability to vote them off. Makes me angry just thinking about it. Fuck this.
As a government censor it is very pleasing to see commercial companies go above and beyond like this. In the old days I would have to threaten legal action before they instituted preemptive policies. Now all I have to do is wait for an internal group to raise an inclusion/exclusion/psychic violence issue and they'll grant themselves all sorts of broad policing authority. Saves me a whole lot of trouble I can tell you that much!
Absolutely zero dissent internally at Google. It's a bunch of employees that have exposure to Menlo Park, CA and not much else. Never seen hardship and never learned history, but they get to dictate how the world consumes information.
This is like watching someone (tech companies) wrap a rope around his neck while standing on a ladder. HEY HOLD MY BEER! Its all fun and games when an administration "in charge" has your back.
It's going to be 100% different the day that changes. It's almost like people forget the pendulum of leadership in the US has a well known tendency to swing.
Don't get me wrong, in principle I can get behind the idea of this, but at the same time I don't think I'd feel comfortable with any actual implementation of the idea, let alone the people in charge of the implementation. In my own personal view, the harm done by people publishing stupid bullshit online is vastly outweighed by the harm of what amounts to government-backed methods to cull publishing stupid bullshit online.
This cure is worse than the disease in my opinion.
The things that you say and write now are very likely going to be put under a microscope in the future and be found wanting.
And you will have to choose between lying, keeping your mouth shut, and having your entire life (digital and personal) torn apart for not agreeing with whomever is deciding what's true that day.
If you want to have freedom of speech you have to have the freedom to say things that some people think are lies. And let other people do the same.
There are laws to deal with edge cases that are seriously damaging, like libel. Which, unsurprisingly, is quite hard to prosecute in the US due to first amendment protections.
Google banning illegal content is entirely reasonable. Kicking out users who are disruptive to the platform itself is also justifiable. This is not that.
Don't like Google Drive's policy? Use a different hosting service, or run your own. Seriously. It's a free country and a free market, and if they don't want to do business with you, they don't have to (and it's a two-way street). Deal with it.
Is Google honestly acting following economic interests? Is it to avoid liability from hosting damaging content? Is it acting to protect us?
Google, which isn't even the most valuable company in the world, is worth more than all but - give or take - the top 36 countries in the world.
A company of this size wields immense power while simultaneously remaining unhampered by all the dead weight governments have to haul around to get anything done.
For many people - when they already have a lot of money - it becomes boring. That thing billions spend their daily lives chasing after is boring for those who have so much of it they don't know what to do with it. For them, what gets them out of bed in the morning is the prospect of using power in fun or interesting ways. In this case, perhaps the power to change the world.
It's an interesting form of curation, as if something has been censored, now that's something I'm interested in knowing about. Dangerous means powerful, and powerful is valuable. Most of everything is crap, but what's not crap is stuff that genuinely puts dominant narratives at risk.
I can't remember the name of it, but there is a feature of change in systems that has to do with complexity, where the change happens rapidly as the result of explosive exponential growth, and not relatively linear increments (period doubling?). In that view, something being censored is a good leading indicator that it's a candidate for sudden growth.
Their definition of misleading says it "includes information [...] that contradicts official government records". Because no government record has ever been wrong before, right?
Early on Fauci was telling everyone how useless masks were [1].
I have a couple people in my social circle that were banned from fb for saying he was wrong early on about masks being ineffective. And several others were banned later for pointing out fauci's earlier stance and calling it all propaganda. You are not allowed to think for yourself. Pick up that can.
"We have recently been notified of a potential policy violation and, after a thorough review of the video materials uploaded, it has been determined that the content is misleading and contradicts official government records. As a result of this decision, your access to Youtube and other Google products, including Gmail, has been terminated. The decision is final. This message is auto-generated"
“The past was alterable. The past never had been altered. Oceania was at war with Eastasia. Oceania had always been at war with Eastasia.”
N95 masks were never useless and Fauci should be seen as a monster.
What I don't understand is why no nations were giving away n95 masks along with education on how to wear them. It seems like we hiveminded into unsustainable lockdowns.
I don't understand why most functional governments didn't eventually have a monthly care package of "here's your masks, hand sanitizer, the latest newsletter, and your stimulus check".
This April they're mostly useless in the US because you can just get vaccinated. If you're refusing to get vaccinated, you're probably refusing to wear a mask too.
There's a lot more to the world than the US, and that's really missing the point. "We're out of masks" is a perfectly good answer to "why aren't governments giving them out" in March 2020; it's not a good answer a few months later.
> I don't understand why most functional governments didn't eventually have a monthly care package of "here's your masks, hand sanitizer, the latest newsletter, and your stimulus check".
Partly because it isn’t the role of government to “give” you things you might want while they are paid for with foreign debt in box.
Last year in Germany you could get (IIRC) 7 N95/FFP2 masks for free at the local pharmacy. The pharmacists were supposed to show you how to wear them.
To be fair he never used the word "useless". That's the title submitted by the person who uploaded that. There could be a bit of confirmation bias here. It's as easy for someone with little trust in their government to point to this and say officials are flip-flopping on the information released, as it is for others to interpret it as Fauci saying "don't expect a mask to fully protect you".
"They are NOT effective in preventing general public from catching #Coronavirus, but if healthcare providers can’t get them to care for sick patients, it puts them and our communities at risk!"
A. Which government? All governments in general collectively? Or just the governments you want to believe? I'm sure they aren't going to listen to the COVID skeptic President of Brazil, right?
Even better, as a user, can I appeal to the authority of a different government than the one I live in? Let's say I appeal to the authority of Brazil as governing my content even though I live in the US. How does that work?
B. Because no government has ever lied on official records when there is a disaster. For sure. And no government is currently, right now, lying on their records to save face. For sure.
Well, we only have to go back a year, and you are saying the Trump admin is the only one speaking truth…
I find it insane that Silicon Valley companies have such short memories, and can’t even comprehend that 1 year ago the same policies would have resulted in banning anything Trump disagreed with.
> Misleading content related to civic and democratic processes: Content that is demonstrably false and could significantly undermine participation or trust in civic or democratic processes. This includes information about public voting procedures, political candidate eligibility based on age / birthplace, election results, or census participation that contradicts official government records. It also includes incorrect claims that a political figure or government official has died, been involved in an accident, or is suffering from a sudden serious illness.
I was as shocked as you until I read the actual text.
Banned misinformation "includes _census participation_ that contradicts official government records."
Census participation. For example, "You didn't register in the census last year? You're ineligible to vote." It's aimed at misinformation that discourages people from voting.
That seems much more reasonable, and far more precise, than information that contradicts any government record.
In my opinion you've removed very important context from this quote.
As mentioned elsewhere, the full quote is very different:
“information about public voting procedures, political candidate eligibility based on age / birthplace, election results, or census participation that contradicts official government records”
Where legally required, they have to enforce it for a government they disagree with if they want to continue to do business in that country. Otherwise their employees could face arrest (that's happening in India, for example).
"The point is obvious. There is more than one way to burn a book. And the world is full of people running about with lit matches. Every minority, be it Baptist/Unitarian, Irish/Italian/Octogenarian/Zen Buddhist, Zionist/Seventh-day Adventist, Women’s Lib/Republican, Mattachine/Four Square Gospel feels it has the will, the right, the duty to douse the kerosene, light the fuse. Every dimwit editor who sees himself as the source of all dreary blanc-mange plain-porridge unleavened literature licks his guillotine and eyes the neck of any author who dares to speak above a whisper or write above a nursery rhyme."
Edit: we've had to ask you this kind of thing many times before. If you keep it up we will have to ban you. Also, would you please stop posting unsubstantive comments generally? You've been doing it a lot, unfortunately, and it's not what this site is supposed to be for.
Are you writing in Chinese? No, you weren't. So the question is, what is the intent here? Would you have used the same word if, for example, the context was Brazil? or Russia?
I think a little more self-reflection, rather than defensiveness is in order. Especially because the comment was intended to be helpful, and not a criticism. The original comment was getting downvoted, and I suggested what the reason might be.
I would have. English is a Frankenstein language, adopting words from all sorts of different languages. Kowtow means what it means in English; it's a perfectly reasonable word to have used there.
The accepted answer reads, "In light of all this, the fuller answer is that because the term is recognisably ethnic, using kowtow instead of (say) grovel does carry some racist overtones, particularly when applied to someone interacting with a person of a different ethnicity."
I call the Russian special forces Spetznaz and the Brazilian yearly festival Car-ni-val. Or when I am happy when bad happens to other people, I have schadenfreude. Did you also know most of English is comprised of loan words from other languages too? Or are you offended too that words like etiquette are actually french?
So yes. I would use a word that the said culture would say themselves.
Yeah, well that's fine and all. But you're also not exactly talking about the ancient Chinese practice of submitting to the Emperor either. There are no Chinese emperors, and kowtow is not a contemporary custom. So, no, you aren't doing what you are saying you are doing. You are using the word to have the meaning it has acquired in Europe and America.
So why is it that when we talk about Western relationships with China the word kowtow comes up so often? What does it add? Why does it suggest itself? Maybe wrestle with that for a while.
Would you please not perpetuate flamewars on HN? I understand how the same word can land differently with different people. To the extent that you're making a factual argument, it's fine to make the point.
Unfortunately you also started a flamewar with loaded language and since then have crossed into personal attack and blaming others for what is clearly a co-creation. It's definitely not easy to avoid mixing these things but if we're to have thoughtful discussion, we all need to try.
That's a google search. I just filter out definitions, references, and the clothing brand. Now look at it. Sure there are examples of the word kowtow used for other things. But its mostly about China. Because in the context of China, the word just suggests itself. But why? What does it add? What does it subtract. No one here wants to wrestle with that. Instead they get defensive.
They will support this as long as the government in place is one that aligns with their own views. That's pretty obvious, right? Given the explicit mention of banning the discussion of voter fraud - something the Democrats have done before, but is now "illegal".
It took me over a year to extricate myself from Gmail, and I still keep the account open and forwarding because it hasn't been a year since the last time I found something I have to migrate. I'll be thrilled someday if I can drop it, but I imagine the vestigial leftover will just persist for all time. Gmail was a very convenient service, but ultimately I can't trust them to use reasonable human judgement in administrative decisions, and so I can't trust Google with the keys to my life.
So now, a minimum wage drone at Google, or even worse an AI, will be able to shut down my entire account because they perceived my document that I'm sharing as "misinformation"? That's pretty fucking scary.
Tech realizes that shift in power and self regulates in a way favourable to the government.
This reminds me of when Reddit tossed out Ellen Pao. Reddit demanded wholesale change, so what they got was /u/spez, who happily banned all manner of subreddits when the fight was originally about one.
Google is analyzing your files for content, and flagging if it doesn't align with narratives that a government body sets - and _removing all trace of that content._
If this doesn't conflict with the "hacker pathos," what the fuck is Hacker News then?
It's on the front page right now, but I'm not sure it deserves to be given the quality of HN's discussion so far. The top ten highest voted comments are essentially redundant copies of the same outrage, stated slightly differently. If everyone is just going to pile on and state how upset they are, instead of replying to each other's comments and keeping one thread for all the outrage, what's the point in bringing this to the front page at all? Where's the curious and interesting discussion? Outrage is pervasive on TV and the Internet and yet, as presented here so far, it's the most uninteresting response possible.
Why don't you be the change you want to see, and contribute constructively with your own opinion to facilitate curious and interesting discussion? Complaining about it is also not that interesting.
I am. Meta-commenting about HN is a constructive contribution to both the quality of site and, in some instances, leads to quality contributions within a specific discussion as well. The site admin 'dang' certainly does a lot more of it than I do, and regularly links to his own past meta-commentaries, so while I respect that they're uninteresting to you, I disagree.
I am, but anything I would have said was already said by one of the top ten redundant outrage comments, so I'm not wasting HN's time piling on in that regard. Upvotes are a stellar alternative to "me too" replies that contribute nothing but redundancy.
It's interesting that you interpret my reply as containing an opinion about the post topic, when it doesn't state any opinion about Google's actions at all. I don't consider it appropriate to infuse every statement I make with outrage, even if I'm outraged, because that saps the life from communities when it's a common practice. Perhaps you're (incorrectly) reading the lack of infused outrage as some sort of statement of my opinion?
Hacker News is a place for people to discuss interesting things, while being generally associated with Y Combinator, a startup incubator.
I've said this before, and dang has corrected me, but I believe this is a good thing for Y Combinator, because it attracts smart people to the YC brand, and gives YC companies a pool of talent they can pick from to build companies.
The "Hacker" part has always, to me, been about makers, not about ethics per se. I've always found the notion that there is a shared "hacker" set of mores to be kind of curious.
Ripping things apart to see how they work vs. kludging something together to solve a specific problem vs. adversarial digital trespassing for fun and profit -- none of that lends itself, to me, to some specific shared morality. Certainly a few personality traits will show up, but folks trying to build a set of acceptable behaviors out of that will probably not find a ton of consistency across the people who engage in the aforementioned activities.
From the link: "After we are notified of a potential policy violation, we may review the content and take action, including restricting access to the content, removing the content".
Would you expect any company not to do this?
Where does it say "Google is analyzing your files for content" in the way you claim? It's not "all content that doesn't align with narratives that a government sets", it's "information about public voting procedures, political candidate eligibility based on age / birthplace, election results, or census participation that contradicts official government records.". That's a lot more specific than you imply, and a lot more like "you can't spin up an overthrow-the-government movement using Google services" than "you can't disagree with the government", even if it's literally "you can't say Obama was born in Kenya" - there is apparently a human review involved.
Oh man this is extremely bad news. The highway should not get to decide which color cars are allowed through. Google et al, are only going to make the internet's underground railroad larger. Users aren't supposed to be slaves to the preferences of the companies that provide the products.
Putting the megacorps that have manipulating people on behalf of others as their primary business model in charge of determining truth. Cyberpunk dystopia, here we come (except for the cool gadgets).
Isn't the underlying main problem (censorship), as (nearly) always, growing up along with the size of the society, that is to say mistrust?
I have yet to encounter a small group of people, let's say fewer than ~15 (all adult and responsible; children and fools are another matter and must be protected by adults), where someone thinks that others shouldn't have access to some content (because they are irrevocably dumb or evil-minded), that he holds the Truth and has to forbid everything he dislikes.
In all huge groups (modern nations) it is the norm. As each one doesn't know most of the citizens, he/she is at best misled into thinking of them as a bunch of children, and at worst is afraid of them.
Even if many people cannot, for the time being, tackle something useful/necessary (for example to distinguish raw bullshit from consistent information) they may "train" by being exposed to it, thus building some experience (living and dialoging with others will enlighten them).
How may one accept to live among others while thinking that most of them are dumb and unable to learn, or evil-minded? How may one who doesn't think so want to bar them from any content?
> I have yet to encounter a small group of people, let's say fewer than ~15 (all adult and responsible; children and fools are another matter and must be protected by adults), where someone thinks that others shouldn't have access to some content (because they are irrevocably dumb or evil-minded), that he holds the Truth and has to forbid everything he dislikes.
It probably depends upon the definition of "small-sized company" :-)
In some companies a form of screening is in place, barring access to some online material. However, it is AFAIK done for economic reasons (avoiding company resources being used for non-work stuff, preventing any mistake such as a dubious download, contribution, or comment from being associated with an IP address rented by the company...), not in order to enforce censorship. Each employee is free to access whatever he wants... using his own time, computer, and network connection.
Censorship is just another way to wield power. It has nothing to do with what people should or shouldn't see; it's about controlling the population. There's a dopamine hit each time an authoritarian enacts another restriction, and they just want more and more as tolerance builds. It's a slippery slope that will end badly if we don't stop them early in their tracks.
Once the withdrawals get really bad, that's when the army goes out on the streets.
The "census participation that contradicts official government records" clause is especially wild, given the US Census Bureau's open policy of injecting small errors for privacy purposes. (https://www.census.gov/about/policies/privacy/statistical_sa...)
>could significantly undermine participation or trust in civic or democratic processes
Participation in our democratic and civic processes includes the freedom to speak about our government however we desire, even if it's misleading or contradicts our government. It's part of the reason America was founded in the first place. Google is being hypocritical here.
People must be able to exchange information in a secure and private way, free of government or corporate censorship. The fact that there is no common, simple, and ubiquitous way to exchange private messages blows my mind. I wish someone would do something about it.
Google Drive is not supposed to be a mass distribution platform, and it actually violates the TOS to treat it like one. You can argue all you like about privacy and free speech rights, but they do not apply to anyone trying to use Google Drive as some sort of free content distribution network. It wasn't designed to let people do that, and Google has every right to ban people for violating the TOS. The fact that they typically haven't reacted to this kind of use on a large scale before now should drive home just how much these idiots have begun to abuse the TOS, all for the purpose of distributing misleading content.
I would really love all sorts of services (e.g. email, documents, photo storage, etc.) prepackaged as open-source docker images that I can pop somewhere, where I just pay for hosting and a licence fee.
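To make the wish concrete, here is a hedged sketch of what such a prepackaged deployment might look like, using Nextcloud purely as an example (the image names are real Docker Hub images; the passwords, ports, and volume names are placeholders, not a production recipe):

```yaml
# Hypothetical docker-compose.yml for a self-hosted Drive alternative.
version: "3"
services:
  db:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder - use a real secret
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme        # placeholder
    volumes:
      - db-data:/var/lib/mysql
  app:
    image: nextcloud:latest
    ports:
      - "8080:80"                     # illustrative port mapping
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme        # placeholder
    volumes:
      - app-data:/var/www/html
volumes:
  db-data:
  app-data:
```

The missing piece, as the comment notes, is the "just pay a licence fee" part: the packaging exists, but the one-click, maintenance-free experience mostly doesn't.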
In order to pop anywhere these days you need to have an email address. How do you ensure that this email address won't be taken away from you? The email domain becomes part of your identity.
>Manipulated media: Media that has been technically manipulated or doctored in a way that misleads users and may pose a serious risk of egregious harm.
What about deepfakes and other machine-generated content?
"Misleading content" is already an issue for anyone who uses Google Merchant Center, and can cause your account to be suspended. Try getting a human being to give you a straight answer on what this means in specific cases - it's nearly impossible. Small accounts can't get through to anyone, and even if larger accounts or consultants can, the Googlers on the other end can't explain what triggered their "AI" to shut down the account.
Do not distribute content that deceives, misleads, or confuses users. This includes:
Misleading content related to civic and democratic processes: Content that is demonstrably false and could significantly undermine participation or trust in civic or democratic processes. This includes information about public voting procedures, political candidate eligibility based on age / birthplace, election results, or census participation that contradicts official government records. It also includes incorrect claims that a political figure or government official has died, been involved in an accident, or is suffering from a sudden serious illness.
Misleading content related to harmful health practices: Misleading health or medical content that promotes or encourages others to engage in practices that may lead to serious physical or emotional harm to individuals, or serious public health harm.
Manipulated media: Media that has been technically manipulated or doctored in a way that misleads users and may pose a serious risk of egregious harm.
Misleading content may be allowed in an educational, documentary, scientific, or artistic context, but please be mindful to provide enough information to help people understand this context. In some cases, no amount of context will allow this content to remain on our platforms.
Here's peak irony from Google's CEO just last week: "The free and open internet is under attack in countries around the world, Google boss Sundar Pichai has warned. He says many countries are restricting the flow of information, and the model is often taken for granted."
It seems to me that a lot of the people around me who are all up in arms about "Covid-19 misinformation" actively promoted this "documentary".
Last year, I tried to put up something that had a rational discussion of the problems inherent in constant testing of asymptomatic people[1]. It took three months to get all sorts of automatic bans removed. By that time, I had lost all motivation.
The fundamental problem is that it used to cost something to publish anything. Now any idiot can post things for free, and the viral aspect of these systems ensures it will explode. We humans are hardwired for new and outrageous gossip, and hence the ads industry feeds on this. Nothing can fix this as long as publishing is cost-free.
Human nature cannot be fixed. Get over it and design systems which mitigate the worst parts of it.
The problem with this is that people use Google Drive for a lot of purposes. Including private ones.
I really don't want Google removing my stuff because of some T&C crap when I'm not even sharing it with anyone. An external HDD may be more likely to break, but it doesn't try to impose any opinions on my data. Overall, the risk of data loss is much lower with redundant personal storage.
At least just disable the sharing feature but don't delete it or ban the account. It makes the service completely unreliable as a storage medium. Sharing is only a tiny part of the features of these services and the only part where such limitations should apply.
For example I don't have anything really bad but I do have copyrighted stuff on my O365 like ISO images to have handy when I need to reinstall something. I'm not sharing them but if they'd be deleted anyway I'd be really really pissed.
And really, Google/MS shouldn't even be able to see what we store on Drive unless we try to share it publicly. It really should be zero-knowledge encrypted. Luckily there are apps like Cryptomator.
Not sure if related, but for the past 2 months, on a daily basis, sometimes 2-3 times per day, I get a Drive notification about someone "resolving a comment" with me tagged, and the document is naked ladies and other pornographic ads.
Generally by the time I get to the document, it's gone, and otherwise I mark it as spam, but it hasn't put a dent in the daily notifications I receive...
As I've mentioned above, Skiff, still in invite-only beta, looks like a promising alternative [https://www.skiff.org/].
[disclaimer] I don't work for them. I'm waiting for beta early access though.
The power to arbitrarily take down content on their own servers? Like, I'm with you that it's an overreach, but this really isn't that much power in the grand scheme of things. Don't publish on Google Drive and they can't touch you. Very little pushes you to publish on Drive. Unlike social media, where the lock-in is the ability to find an audience, Drive is a glorified S3 bucket for all it matters.
One is that our assumptions and norms around free speech/press aren't from the digital communications world. They're from the TV & newspaper world, the letters & telephone world. In the last generation, lines have been blurred between mass media, conversational messaging, documents, information, and software. Apple's app store escalated the software part of it; AWS further confused it. What "speech" is, for the average person, is hard to draw lines around.
Douglas Adams had a good bit on this. Paraphrased: it's like explaining to rivers how "the coming of the ocean" will affect them. Currents and such will still exist in the ocean, but... Thus with online media. Do free speech ideals still matter? Yep. What do they mean? How do they work? What do they do? Those aren't, IMO, trivially answerable with analogies to the telephone, letter, and newspaper world.
Two is that whatever the hell freedoms like association, speech, assembly, and media mean today, they run through FB, Google, etc. In parts of the world where FB has the dominant free-internet deal, even more so. It's their structures and rules that matter. Say a politician from medium-sized country X is banned from FB or Twitter; this can seriously affect their chances. Same for any kind of activist group, political party, union, etc. FB policy people don't necessarily even know the language of the country. Where there are tensions between democratic rights and the new world, the tension is usually highest where a tech giant is.
Three is back to one. There is an interconnectedness here that's hard to stop. We have a digital democracy. It's young, but it's here. Whatever democracy is, happens digitally now.
I hope this isn't cynical, but I've kind of become accelerationist on this. Let's play this game. Force Google's policy courts to be public, maybe.
A Russian troll farm buys some election ads on facebook and it’s a national security crisis with years of investigations and hearings. Google decides to play god with the flow of information and I highly doubt they receive anything more than the slightest pushback from our political establishment. Our elected officials are asleep at the wheel.
FYI rclone is a great cli tool that can let you easily pull all content out of a Google drive and push it to several different easily configured backends, or store locally.
It can also be used to encrypt all content on your drive, presumably hiding content from big brother, but also making it useless for sharing.
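The encryption layer mentioned above is rclone's crypt backend, which wraps another remote. As a sketch, the layering looks roughly like this in rclone.conf (the remote names are illustrative; the token and obscured password are generated for you by `rclone config`, not written by hand):

```ini
; "gdrive" is the plain Google Drive remote
[gdrive]
type = drive
scope = drive
token = {"access_token":"..."}   ; filled in by `rclone config`

; "secret" encrypts everything it writes into gdrive:encrypted
[secret]
type = crypt
remote = gdrive:encrypted
filename_encryption = standard
password = ***obscured***        ; generated by `rclone config`
```

With that in place, something like `rclone copy secret: ~/backup` pulls down decrypted copies, while anything copied to `secret:` lands in `gdrive:encrypted` as ciphertext: useless for sharing, as noted, but opaque to content scanning.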
It's not entirely clear to me which of these policies apply to any and all content you upload without sharing externally.
Both the misleading content and sexually explicit material sections start with "Do not distribute content...". So does that mean that as long as I don't enable sharing, I'm free to upload such content? If I'm the only person who can access these files, then surely I'm not distributing them.
I also find it curious that the section on circumvention makes no mention of the use of encryption. I suppose banning it would imply that Google scans your content and uses such techniques to find files in violation of their policies. With that not explicitly banned, I'll continue to use Cryptomator to keep my private documents out of Google's hands.
I'll be cancelling and migrating away from Google as soon as I have a free weekend to identify alternatives and get it done.
Now to find a vanity email domain... My last name is taken, so brainstorming something that is professional and individualized could take longer than the migration...
Are they insane? We routinely share important documents in Google Drive. Boring, non-controversial stuff, but vital. We can't run the risk of a _good_ ML algorithm, the type that outputs a wrong decision 1 in 1000, deleting it. Can you imagine the nightmare?
I recently started working on a program that distributes my own relational data in my web browser (IndexedDB) between my devices using WebRTC so that I can build applications on top of it. It seems like it will be needed more and more over time.
Well isn't this just rich (in a poor way), considering Google is notorious over the years for making the display of their ad placements harder and harder to differentiate from actual search results, thus more and more misleading....
Google provides a convenient, though not instantaneous, way to download all your content to a local drive. You can then publish it to your own website, problem solved.
Google has the right to impose terms of service, particularly given that their service is free and they believe they have a reputation to protect and legal liabilities to defend against.
Misleading title. Google Drive bans *sharing* "Misleading Content," which means they can disable sharing if a doc containing blatant misinformation starts to be widely shared on other platforms. (Edit: This doesn't mean they'll go into your drive and remove doubleplusungood content)
Google Drive is now widely used as a social media platform -- I even get spam from google drive share notifications now! And so I'm sure they will stumble in all the ways that all the platforms do around content moderation.
"In some cases, no amount of context will allow this content to remain on our platforms..." sounds exactly like they will "go into your drive and remove doubleplusungood content."
“After we are notified of a potential policy violation, we may review the content and take action, including restricting access to the content, removing the content, and limiting or terminating a user’s access to Google products.”
Man, we are completely unprepared for the level of control it is now possible to have over speech. I guess I get where Google is coming from, maybe, but this is getting downright Orwellian.
Can all the people moralizing, grandstanding, and otherwise signaling their outrage stop using and recommending Google products? Yes, even if you feel like the alternatives are inferior.
The problem was essentially created by tech companies such as Google and Facebook. The fact is that everything you read online comes from both ends of the political spectrum; the majority in the middle do not speak out, the same as in real society. Algorithms use likes to amplify voices from both extremes. The people in the middle don't care to like anything, but they have to bear the results of recommendation algorithms and can't help being politicized.
It's very inconsistent. How can gender, as a purely ideological construct not grounded in science, be protected under "hate speech" while, at the same time, under misleading content it could easily be classified as "misleading content related to harmful health practices"?
In principle, it also boils down to who owns the content. If the host (Google) claims moderation rights, it indisputably becomes a publisher and should be treated as such.
I would argue that the very notion that we're somehow dealing with a nefarious, shadowy era of exceptionally pernicious misinformation is itself a form of misinformation. I strongly suspect that the current media and associated tech company obsession with misinformation is largely rooted in a certain very disagreeable politician winning the U.S presidency in 2016 despite all the best efforts of major media sources to convince everyone of what was supposedly best for them, and because these same media/tech interests share a political bent that's currently more dominant in the social sphere, suddenly, misinformation is a boogeyman as never seen before. Evidence to the contrary be damned.
Humans have been misinforming each other and creating echo chambers of rigid thought for as long as we've had culture. On this little has changed except for new additions to the mediums through which we can communicate ideas.
Censorship and restriction of opinions to those that are "verifiable and correct" is such an obvious hole for bad decisions and misuse that support for it should be laughable after decades of seeing just how badly governments and other major authorities can fuck up their notions about what a correct stance on anything really is, and this not to even mention the many, many, many historical examples of official narratives deliberately being manipulated for the sake of control or special interests... Those of you who think modern democratic institutions are somehow immune enough to such things that we should have public free expression monitored and restricted due to misinformation dangers are absurdly naive.
This is ridiculous; I agree with most of the top comments about the need for a new platform.
But if you have such “misleading” content as Big Brother sees it, shield it from them.
Encrypt the content, give out the key. Even if it's a simple key, who cares? I'd bet you could even share it as a doc to other people; their internal algorithms surely aren't sophisticated enough to figure it out.
>> "Misleading content related to harmful health practices: Misleading health or medical content that promotes or encourages others to engage in practices that may lead to serious physical or emotional harm to individuals, or serious public health harm."
One can hope that this will apply to the massive anti-vax campaigns on YouTube
We need distributed, uncensorable social media - full stop. And we need it now. We already have the White House admitting that they are telling Facebook which posts they should delete. We now have Google deciding what is and isn't misleading content.
This kind of behavior has historically had disastrous results. See World War II.
Google turns over user data on any Google user instantly to the US government without a search warrant, including that of US citizens. Anything in Google is free for the taking by the feds.
There are already sufficient reasons to stop trusting them with your data. This is not the first and it won't be the last.
"Do not distribute content that deceives, misleads, or confuses users."
Confuses?
That is really bad wording. Watch a pro-foam-insulation video, then an anti-foam-insulation video - which one is right? One of them is confusing the audience; both can't be right. Which is the one that can't be distributed?
As a common carrier they should not be allowed to discriminate, like railroads cannot discriminate against traffic they do not like, and restaurants and hotels cannot discriminate. As a society we decided long ago that if you hold yourself out to the public, you cannot discriminate.
This. Also, 'misleading' is a function of how content is used, not of the content itself. So content in one use case may be misleading and in another it is not. How content is used matters. How is Google to judge this?
I wish they would solve this malicious/spam sharing issue.
I have an abusive ex who regularly still sends me stuff by sharing it on Google Drive. Google will not block it as it doesn't meet their standard of harassment, and even if they did, it often comes from a new account.
Is "misleading content" going to be determined on a per-country basis? Will Google ban activists in Hong Kong from distributing "misleading content" about the PRC? How about people in Hungary sharing "misleading" resources for LGBT youth?
Call me old fashioned, but in my day spies, subversives, and criminals kept their communications secret instead of hosting them in plaintext on the servers of a megacorporation almost certainly hellbent against their cause and ready to turn them over to the feds.
I don't have much experience with Google's policies, but is this some "report only" type of thing?
That is, will it only be enforced if someone reports a file I created, or should I expect Google employees or an AI to scan my files for the mentioned content?
OK, fine. Suppose I share a document with all of Hacker News saying "Google will delete this document without my express written permission," and they apply their policy without exception:
Do they delete the document because it's misleading? If they do, it's not.
This seems like a move to lessen upcoming monopoly legislation by preemptively banning stuff that the government is about to deem "misleading" or "dangerous."
Agree totally. Both parties in the USA have held public hearings of the large tech companies. They were effectively given the ultimatum: censor or be broken up. There was a fair bit of overlap, but during the questions the Democrats were pro removal of fake news (this was during Trump's term) and the Republicans were anti non-competition, anti monopoly.
I doubt that moving from platform to publisher is a wise move, but it's probably the only one they could make.
As an aside, during the hearings it was Amazon that seemed less concerned...
Of course everything is in the interest of the public, so everything is going to be transparent, accountable, appealable, based on clear rules. As usual for Google.
People often forget that "slippery slope" is a fallacy. Political censorship is indeed of general concern; censoring anti-vax bullshit does not imply an inevitable path to political censorship.
The real solution, of course, is to out the hypocrisy and malfeasance of the spineless GOP/Fox leadership that panders to this ignorance, and gives it mainstream legitimacy. Without that legitimacy, these conspiracy theories will fade back into background noise where they belong.
> People often forget that "slippery slope" is a fallacy.
This is an extremely common misunderstanding. Slippery slopes aren't necessarily fallacious, the slippery slope fallacy isn't trying to say this. Many slippery slopes have occurred throughout history. The slippery slope fallacy pertains to a certain type of slippery slope argument.
If there's one thing that's become way more obvious to me over the past year and a half, it's just how susceptible a lot of people are to conspiracy theories and other similar content.
In an ideal world we would have different solutions, however we don't live in a perfect world so I welcome decisions like this. Yes it does open doors for more government control and censorship, but I'd take that if it means reducing conspiracy theories, racism, etc.
Life isn't a computer program that you can define with EXTREMELY specific if statements, like HN likes to believe.
So I'm happy with statements such as "content that is demonstrably false" from Google. And I can be in favour of supporting more regulation but also against regulation that's demonstrably harmful, even if I can't write you an extremely specific definition of "harmful" or "demonstrably false" that would cover every single imaginable edge case.
It'll be a scary day when Google develops an API for this and it follows you anywhere you go. You'll no longer be able to store documents that Google doesn't see fit; they'll simply get auto-censored. What if Apple developed a similar API and embedded it into macOS? Just like auto-complete that works in any input box, "auto-censor" would make sure you're not going out of line as you type sentences and paragraphs.
> "This includes information about public voting procedures, political candidate eligibility based on age / birthplace, election results, or census participation that contradicts official government records."
Google going full Russia/China now. Democracy and free speech certainly declined in value when you are not allowed to document and host alleged voter fraud anymore.
It sort of makes sense if it is a document that is public or widely shared (anyone with the link can access it), in which case it serves not only as a collaboration tool but also as a sort of "CDN". Though even then, limiting access seems appropriate; "removing the content, and limiting or terminating a user's access to Google products" is too much.
Another asshole move from Google. How, might you wonder, do you report abuse? Do they actually read anything sent to "abuse@google.com"? Hell no.
They expect you to visit phishing / spam links hosted by Google, then use a button on the page to report. Or they expect you to be logged in to Google.
They don't care a tiny bit about RFCs or about doing the right thing.
Nifty, censorship built right into office tools now too. It's so great my word processor and my spreadsheet app can police my thoughts now too.
Maybe they'll be able to use AI at some point and just detect when I'm writing the wrong combination of words together and preemptively block me from even writing things.
Companies should really decide and be either platforms (only remove directly illegal material, when reported, and carry no responsibility for other stuff) or publishers (cherry-pick what they want posted/hosted, and carry all the responsibility for the posted content).
The very idea of corporations deciding what correct thought is should shake people to the core regardless of ideology. Some might come up with hair-splitting justifications, but this will come back to bite all of us if we let it happen. I'd bet the bank on it.
The more any of these companies filter not-illegal content, the less justified it becomes to keep them exempt from liability for the things they _do_ allow. Because at some point we are reaching that 'have your cake and eat it' point.
A government is the administrator chosen by the people to take care of its collective concerns.
If the people identify negative issues that affect the demos - they may ask their government to interfere. In any democracy this power is limitless, with the minor caveat of supranational authorities like the UN.
In that way - any government can and should censor speech if its populace asks it to, given it’s the legislature who sets the law. (Prime example: German laws about Nazi symbols).
A company, as Zuckerberg requested, should not be in the position to make these decisions arbitrarily. It should suffice if it follows the law.
In America the legislative breakdown, in combination with the fanatical belief in free speech, seems to be standing in the way of this, bringing the entire western world to the brink of a precipice.
I'm sure they will immediately start their crackdown on nigh all religious scriptures hosted there, not to mention on many “scientific facts” that become public knowledge but were actually disproved a long time ago.
Idiots found the right place to publish the knowledge. Google Drive, yeah. If they were true scientists, that would be an excuse. Scientists are unaware of popular technology; they deal with pure science.
So I downloaded the PDF and took a look at it. It's 80 MB of screenshots and photos of newspaper reports of people who died because of the vaccine (or at least shortly after having gotten it).
I wouldn't consider it misinformation, but it is misleading.
There can be several causes of serious side effects, most of which I believe to be either contamination, a pre-existing condition which got its last kick from a component of the vaccine, or something like that.
In any case, the document notes that around 2,500 people have died, and according to what I checked in the CDC's database, around 4,500 have now died. This includes those who accidentally fell down the stairs after tripping over their cat but had gotten their vaccine a couple of hours before - but that doesn't matter here.
So of 34,000,000 Covid-19 cases in the US, 608,500 died = 1.79%.
Of 334,927,961 administered doses in the US, 4,434 died = 0.0013%.
If you are vaccinated you're protected by at least 40% (it used to be 90+) against getting infected, and if you do get infected, it will be more likely that you won't have to go to the hospital.
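The arithmetic behind that comparison can be checked directly (the figures are the ones quoted above, not current data):

```python
# Compare the case-fatality rate with the crude post-vaccination death rate,
# using the figures quoted in the comment above.
covid_cases = 34_000_000
covid_deaths = 608_500
doses_administered = 334_927_961
deaths_after_dose = 4_434  # any death reported after a dose, not a causal count

cfr = covid_deaths / covid_cases
post_dose_rate = deaths_after_dose / doses_administered

print(f"Case fatality rate: {cfr:.2%}")        # ~1.79%
print(f"Deaths per dose:    {post_dose_rate:.4%}")  # ~0.0013%
```

Note the denominators differ (cases vs. doses), so this is the same rough comparison the comment makes, not a rigorous risk analysis.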
There are a lot of people who tend to ignore this comparison, probably because it makes them feel more important to know about these deaths and we, the sheep, just get silently vaccinated, because they tell us to do so.
The point is, that it has benefits, and also some minor risks, to get vaccinated, yet the article attempts to make you believe the contrary, that the risks are too high. So yes, it is misleading, but in a somewhat dangerous way. There are people who are easily influenced by this kind of narrative.
Now, the question is still open for debate: should Google be allowed to decide whether content like this can be shared over their platform?
Is this part of the general push by the White House to censor such content across all communication channels? I know Biden was going to push telecoms to monitor and I think even censor text messages that spread anything about COVID that doesn't agree with the official narrative.
And who is to judge what information is “misleading”? Google staff?
I am glad I no longer use Google Drive for anything serious but I am terrified at the prospect of additional services going this way.
In the end we will have to migrate back to using local hardware that we can control ourselves and manage all backups and availability ourselves.
This policy follows the trend of recent years against liberty. Though that trend is long, it has accelerated terribly ever since Brexit, Trump, and similar events where people chose differently than their supposed leaders had intended.
People who thought this would stop at banning Alex Jones may want to reconsider. You are next.
Like many on HN, I rely on Google's services to a degree that I find worrying. The reason for my immobilism is the usual one: Google's services are extremely convenient, and switching to alternatives is extremely costly in time.
Like many of you, I make it a point to investigate alternatives to G-Suite, Android, etc. Sadly, the answer to "is there a practical alternative to Google" has so far been a resounding "no".
After thinking about this some more, I'm beginning to think the question is ill-posed. I'd like to ask a better question, and I'd really appreciate your input. But first, let me list a handful of observations about my behavior as a user and customer that I think are important.
== OBSERVATIONS ==
OBSERVATION 1: I don't mind paying for software, I just hate accounting.
I don't often buy software, especially subscription-based SaaS products. For the longest time, I thought my aversion had to do with frugality, but I now realize it has to do with peace of mind. I refuse to accept the burden of tracking my subscriptions, being responsible for cancelling them when no longer in use, chasing down the odd unexpected charge on my bank statement, or trying to remember how much a subscription costs and whether or not to cancel before its automatic renewal. Life is too short. The mere thought of such a snake pit provokes a response approximating anxious rage. My reaction is visceral; one cannot reason with me on this point.
Subscription-management services don't solve the problem. They might remove accidental complexities that emerge from managing subscriptions -- visualizing pricing side by side, alerting me when a subscription is about to expire, performing the (un)subscription automatically, etc. -- but it is impossible for them to solve the essential complexity of subscription management: thinking about subscriptions.
OBSERVATION 2: I am a software developer, but I am not an advanced user.
I am technically literate, but most of the work I do in G-Suite is basic. I draft documents, often collaboratively. I use track-changes and comments. I use Times New Roman in 12-point font. I sometimes make basic slide decks, using the default theme and paying no regard to aesthetics beyond ensuring proper alignment. I rarely use spreadsheets, but when I do, I use the basic functions, and rudimentary formatting.
Same goes for email. Helvetica, 11-pt. Attachments.
OBSERVATION 3: Cloud storage is actually pretty great, but I mistrust you deeply.
In a utopian world, Cloud storage and SaaS software would be perfect! I almost always have access to the internet when I need it, and during the rare times when I don't, it can wait an hour or two. In this utopia, I could take full advantage of Cloud/SaaS offerings, effectively freeing myself of the need to store, organize, back-up and garbage-collect local copies. It would be great!
Only, here's the issue: I can't do that. I can't because SaaS platforms can't be trusted: they may lack adequate disaster recovery, they spy on me, they routinely hold my livelihood hostage [0], and they might arbitrarily deplatform me.
As such, I find myself in the ironic situation of having to manually back up my Google documents by exporting them to RTF and storing them in a well-organized folder hierarchy for a rainy day. My life has actually gotten a bit worse, but not so bad that I'm willing to give up collaborative editing.
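For what it's worth, the "well-organized folder hierarchy" half of this chore is easy to script. A minimal sketch in Python -- the actual RTF export from the provider is deliberately not shown (for Drive it would be something like the v3 `files().export()` endpoint), and the date-bucketed layout here is just one illustrative convention, not a prescription:

```python
from datetime import date
from pathlib import Path

def backup_path(root: Path, title: str, when: date) -> Path:
    """Build a date-bucketed path (root/YYYY/MM/title.rtf) for an exported doc."""
    # Replace filesystem-hostile characters in the document title.
    safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in title).strip()
    return root / f"{when:%Y}" / f"{when:%m}" / f"{safe}.rtf"

def store_export(root: Path, title: str, rtf_bytes: bytes, when: date) -> Path:
    """Write exported RTF bytes into the hierarchy, creating folders as needed."""
    dest = backup_path(root, title, when)
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(rtf_bytes)
    return dest

# Usage: fetch the RTF bytes from your provider's export API (not shown),
# then file them away locally:
#   store_export(Path("backups"), "Meeting Notes", rtf_bytes, date.today())
```

Run it on a schedule (cron, Task Scheduler) and the rainy-day copy maintains itself.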
== QUESTION ==
Taken together, the above observations point to something approximating:
1. Self-hosted service. Or, at least, some kind of ability to continue working in the event that the Cloud-based mothership fails or banishes me. Such self-hosting should be prepackaged and easy to install/configure.
2. Non-subscription based monetization. More generally: non-recurring expenses.
3. Optional cloud-based storage. Perhaps just a dumb S3-style API into which data can be backed up?
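Point 3 really doesn't need much API surface. A sketch of the "dumb" blob-store contract -- in-memory here purely for illustration; a real backend would persist to disk or object storage, but the three-method interface is the point, not this implementation:

```python
from typing import Dict, List

class DumbBlobStore:
    """The minimal S3-style surface a backup target needs: put, get, list."""

    def __init__(self) -> None:
        self._blobs: Dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        # Opaque bytes in, keyed by a path-like string.
        # No interpretation, no scanning, no opinions about content.
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

    def list(self, prefix: str = "") -> List[str]:
        # Prefix listing is enough to restore a whole folder hierarchy.
        return sorted(k for k in self._blobs if k.startswith(prefix))
```

Because the store treats everything as opaque bytes, the client can encrypt before `put()`, and the host never sees plaintext -- which also makes content-based deplatforming moot.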
I'm not quite sure how Google's announcement that Drive is now subject to censorship comes as a shock to anyone. Google already heavily censors its search results, and I am concerned that no one is talking about that in relation to this. As bad as the violation of privacy and dictatorial intentions that now apply to Drive are, I believe that search-result augmentation is far worse. In absolute terms, now that "to Google" is synonymous with every possible form of "to search", "to know", or "to discover", the organization is actively crushing independent thinking. Throughout many conversations with younger folks and even hapless adults, I have discovered that the prevailing attitude toward the act of thinking has drifted away from the self. When something is to be discovered, it is to be searched for. There is no critical thinking involved in this process. Because of the immediacy of seemingly correct or cogent information, people have ceased relying on any metric for evaluating information other than a measure of consensus. The problem is that they directly associate this measure with the rank of a result in a search engine. The thinking goes that because a result is ranked toward the top, it must be more true.
But what happens when truthiness is measured only through consensus and commonness? The most dire and direct consequence is that convergence toward true diversity of thought is flattened. This is one of the main causes of the existential division in the U.S. The real crisis is that the ranking of the truthiness of an opinion on important social issues, as presented through Google's search results, is treated and evaluated like objective fact. When presented only with opinion and information about social issues that one strongly agrees with, the disciplined and liberal thinker might imagine themselves on the other side of the situation and consider what to do when faced only with information that he or she disagrees with. What then?
As an example, try searching Google for the phrase "Systemic racism is not real". The only results that appear are in opposition to the assertion. The searcher is then faced with a quandary about whether to question why all opposing points of view are absent. But as I've pointed out, the new default of laziness and consensus acceptance guarantees that the searcher's belief about this complex issue will default to what is offered. Searchers with a noble cause -- those who believe in an America where good and honest people are free to seek out others who wish to treat humans based on their character rather than a discriminant as pitifully shallow as melanin level, who want to learn about the noble spirit of their own country, and who hate being told that they must be complicit in some sort of hellish oppression of others -- are faced with nothing but a brutal emptiness.

No wonder everyone thinks America is damned and doomed! No wonder we all feel so alone. We sit in our apartments and stare at the consensus feed. We let it alter our emotions and perceptions without thinking for ourselves. We live in vast cities of millions of individuals who want nothing more than to be fed the consensus and to forget our neighbors. By censoring and skewing social issues like this, we are forced to abandon the liberal notions that are the foundation of Western thought. What free thinker is brave enough to fight the shallow consensus of millions of lazy thinkers who are being spoon-fed the perspectives of a select few far-left technocrats? This is the modern book burning. The arbiter of the zeitgeist sits behind an innocent text box, and it grows ever more powerful.
I challenge the reader to be truly disciplined in their analysis, set aside their political leanings, and objectively consider the perspective of a right-winger who just saw Twitter silence their leader and watched Amazon boot the only social network where they could speak their mind, Parler, out of existence. We discuss these companies as if they are just private organizations out to make money, but any rational person can look at these actions and recognize them as the work of heavily biased political forces asserting their control. We circle endlessly around discussions about whether they are publishers or platforms when it is obvious that they are both. Google has intentionally steered the very nature of the internet into its control. The only way to be known is to advertise on its platform, and you must pay to do that. They protect this modality at all costs and offer the free services we are discussing as an indirection away from their intent. Make no mistake: Google wants to become the internet. But the horrible reality is that it already has, and no one has noticed. Google and Amazon alone control an almost absolute majority of the servers that host the entire internet. They could turn it off if they so chose. Companies are willingly handing control of the critical parts of their business to these giants. If Amazon and Google go down, every business that hosts on their servers is doomed.
Microsoft is just as bad. They are actively trying to take away users' control of their own computers by hosting Windows in the cloud. What happens when they make it available only in the cloud, on their servers? Worse still, we developers are giving up all of our control to them by hosting everything on GitHub. What did they do when we made all of our code and expertise available to them for free? They trained an AI on it, with the intention of centralizing the very skill we gave them, and they are going to try to sell it back to us in the form of Copilot. What happens when your employer only trusts you to verify Copilot's output and not to write anything yourself? What happens to the next generation of programmers, who won't even know how to write code without its help? Microsoft's seemingly benevolent attitude toward developers is just another example of the damage centralization is causing.
This announcement is just another assertion of totalitarian control. The ruthless centralization these companies are seeking is an existential danger to everyone. The threat is critical! We are in mortal danger. We have given them complete control and are doing nothing about it.
The only option that can save us is to build an alternative and compete with them.
I've seen multiple comments attributing Google's illiberal behavior to Chinese influence or pressure.
This is patently false: Big Tech is beholden to Western dictates. There is currently a coordinated effort by non-partisan forces to push the United States into a war with China. Don't fall for it.
While this is worrisome, I'm surprised HN has essentially nothing negative to say about people spreading vaccine misinformation, peddling The Big Lie, or other conspiratorial zeitgeist stuff from the last five-ish years.
This virus is the prime example of why most people here are against things like this. Look how many times government officials have flipped on all of this virus stuff. One day it is completely debunked, discredited misinformation to suggest the virus came from a lab. The next day it is a plausible theory. We cannot trust banning misinformation, because we don't know what actually is misinformation. What is misinformation today is accurate information tomorrow.
Your perception of the lab leak "flip flop" is exaggerating what actually happened.
This is a really dangerous road to go down. Yesterday's misinformation is still largely if not entirely misinformation. The same will hold true into the future.
A lot of the world's woes can be traced back to people operating on bad information.
First, I didn't say flip flop. I said flipped. Once the authorities changed their mind on something they didn't go back to the previous opinion. If you are going to put quotes around text at least make it something I actually said.
Next, the lab leak is just one example. You can look at other things, like masks and whether or not the virus is airborne. I understand they didn't have all the available information, but they sure sounded 100% positive they knew what they were talking about.
Imagine if we could never change our societal views on anything. Once something is determined, anything else is considered misinformation. I am sure you hold some view that is not held by the majority. How would you like any facts you state about that thing to be banned and labeled misinformation? That is what will happen if this continues.
The only thing that's changed is that it's more widely acknowledged that a lab leak is possible, and just as likely as natural origin. Other viruses have followed a similar origin story.
People were literally banned for saying the lab leak was a possibility, over misinformation claims. If they, or the distribution of that content, were still banned, people might still discount the lab leak. That is the problem with banning content that may be misinformation. If we had 100% accuracy on what is misinformation, that would be one thing, but we don't.
Can you show an example of someone being banned somewhere for this? If anything I can see people being banned for pushing any number of conspiracy theories when what they're actually doing is pushing a racist agenda of some kind.
To end the dehumanizing scourge of Nazism, racism, bigoted thinking, imperialism, colonialism, capitalism, conservatism, republicanism, white supremacy, sexism, Islamophobia, Asian hate, transphobia, Satanic Panics, antisemitism, reactionary ruralism, and libertarian "freedoms", some sacrifices must be made.
Google Drive is on the right side of history with this move. I just wish people would accept that to stop the greatest evils in the known universe, it's okay to take firm and bold stances like this.
Almost everybody here cheered when Twitter and Facebook started censoring and 'fact-checking' Trump. The few voices warning that this would end badly were drowned out by the rejoicing crowd, which gave no thought to the consequences.
News such as this is a direct consequence of allowing big tech to become arbiters of 'truth'. And yes, it will be much worse than this.
The same White House that is pushing tech companies to do this also said today that if you’re banned from one social media platform (for posting verboten material), then you should immediately be banned from all platforms.
This is third world authoritarianism, except they outsource the dirty work to the tech companies.
I know everybody hates playing the “If Trump did...” game, but come on! You all know exactly what the response would be to something like this and it’s the correct response.
Imagine the same policy with Gmail. A captive audience, because changing your address after 10 years is complicated. And suddenly Google decides what ideas you are allowed to discuss over e-mail with other people. Because that is distribution, m'kay?
> Misleading content may be allowed in an educational, documentary, scientific, or artistic context, but please be mindful to provide enough information to help people understand this context. In some cases, no amount of context will allow this content to remain on our platforms.
They never said they would. You can store all the misleading content you like in your Google Drive. Using Google Drive as a distribution platform to share misleading content is what they are moderating.
Claiming otherwise is completely disingenuous, as it intentionally obscures core features of the service. It would be like saying "Twitter is an instant messaging service, I don't expect them to moderate any of my content" after a public tweet gets deleted.
If this doesn’t convince you that Google of today is nothing more than the unofficial thought-police agency for the political left, I guess nothing will.
So, any company that works on, for example, civic or democratic processes may lose all its documents on Google Drive. CONGRATULATIONS!!!
The next step would be your gmail box.
I see Google got the DNC's memo. The government is asking private corporations to censor your SMS messages because the government itself can't. They want to censor every form of digital communication, period. We either have free thought and expression of ideas, or we all live under someone else's ideals of what those are -- and most likely, if history has taught us anything, you won't like it, but it will be too late. Your voice will just be gone, unless it's full of praise for whoever your "dear leader" ends up being. This isn't done. More will come. We'll hem and haw when we hear about it, but keep on keeping on... right off the cliff.
Free speech is not easy, but it is an absolute necessity unless we want to revisit some of history's uglier chapters, and all I see is pedal to the metal.
> And this campaign is far from a single-pronged strategy. According to Politico, the Democratic National Committee (DNC) and “Biden-allied groups” – whatever that last phrase means – have plans to “engage fact-checkers more aggressively” and “work with SMS carriers to dispel misinformation about vaccines that is sent over social media and text messages.”
Nailed it. The government knows it would be difficult to implement vaccine passports (stateside, anyway), so it has publicly and privately asked private businesses to check vaccination status, bar entry, etc., on its behalf.
Same goes for the Democrat Party asking Facebook to flag opposition comments regarding Covid. Government is shackled by the 1st Amendment; corporations aren't.
Such rules are probably not for everyday enforcement. It is more like Russian law: have something to use when needed, and/or to publicly demonstrate that you care about the issue.
And to be completely consistent with their censorship and account terminations on YouTube, their app store, and ads, their justification for banning "misleading content" will be opaque and frustrating. And if they screw up, you'll get the old "inadvertent error" or "administrative error" catch-all.
The support and excuses for why this should be acceptable pretty much sum up as: as long as it's not my ox being gored, it's fine!
We would have been similarly protected from Galileo’s misinformation too, right? I guess I’d prefer we not censor “wrong” ideas. The bigger the government policy footprint, the more trolley problems are bungled. Humility is understanding that sometimes less is more. “Let it be.”
People aren't seeing the forest for the trees. The establishment had a near perfect lock on the narrative before the internet became popular. There was a very tight Overton window on TV, radio, and publications that didn't necessarily match what people were thinking about or wanted to hear about. This was intentional.
When the internet broke into the mainstream, it was a strong feature, not a bug, that content was hard to censor, and sites like Google got a lot of their earlier traction due to their results not being gamed or massaged for profit.
You could hear the establishment gears grinding though. They were gonna take back control of the narrative even if it took decades. Well it did, and they have, to a degree. It's only going to keep getting worse as long as they can continue to wrest further control.
Do you think that only leaders of other countries like China want to control what people see, hear, and think? Do you recognize the immense power that comes from narrative control, militarily and financially?
edit: for those downvoting, would you care to point out flaws in my statements?
Just to be clear, this is about the "Report Abuse" button. This page lists categories of things that, if someone clicks on "Report Abuse", Google may decide is indeed abuse, and take action. So this does not apply to private files, but to files for which someone with access to the file decided to complain. Note the page title, and that it says:
> After we are notified of a potential policy violation, we may review the content and take action…
There are a lot of comments here. Many may indeed hold the opinion that even when someone clicks on "Report Abuse" for a file being distributed via Google Drive on the grounds of "misleading content", then Google shouldn't take action like "restricting access to the content, removing the content, and limiting or terminating a user’s access to Google products". That's a valid position, and legitimately a criticism one can hold, and this discussion makes sense. But there are also quite a few comments imagining this to be about Google proactively scanning everyone's private files, so just making this clear. (The submitter probably understands the distinction as they put "distribution" in the title, but clearly, some comments do not.)
(Disclaimer: I work at Google but just posting as myself; have no special information here but this is just my obvious reading of the page.)
This move shouldn’t surprise anyone. This was always the next logical step for activist companies and institutions that have been openly practicing authoritarian censorship. Their previous moves faced no real consequence or meaningful pushback, and they face little competition because they are monopolistic, so why would they stop marching down this path?
What’s more surprising is that there are people here on HN who are excusing this by claiming that Google isn’t banning hosting the content but only distributing it. We need a new phrase for this kind of unhelpful trivialization and gaslighting - it isn’t just harmless trolling.
The biggest impact of this will be from how inconsistently this will be applied. Google will not ban activists from sharing “toolkits” used to organize riots or push left politics. They won’t stop false content like the core claims of the 1619 project from being shared via BLM materials for schools that are hosted on Google drive. Google will suppress one side and in effect amplify all others, propagandizing the world through their services.
It’s time we seek out alternatives, break up big tech companies, regulate them, and put an end to their abuse of power. For now here are some alternatives to Google: