I don't think he is saying that encrypted communication should be illegal. He's saying that spreading lies through encrypted channels should be stopped. It would not be hard for WhatsApp to prevent people from spreading bogus videos, because they have access to the unencrypted information before and after sending. So they could either prevent people from sharing misinformation or prevent people from reading it. That's separate from the argument about whether or not it's a good thing (I wouldn't see a problem with it personally).
You know, Snopes has said that running a fact-checking organisation is incredibly difficult. It's easy to see why; you have to:
1) decide what the definition of truth is
2) differentiate between subjectivity and objectivity
3) differentiate between misleading and outright incorrect
4) investigate every piece of media thoroughly
5) avoid bias
6) peer review
7) correct any mistakes
And this list is just off the top of my head.
So what Bill Gates is saying is that you have to do all this at a scale of a billion users, all controlled roughly by a few centralised organisations. I think Bill's words are still kind of stupid even with context. The way to fight all of these problems has always been education, and social welfare, and things the government should ACTUALLY be doing, not vetting encryption schemes.
No one is saying we need to replace Snopes or emulate what they are doing. But if a platform like WhatsApp, Facebook or Twitter fact-checked the thousand most-shared videos per day, this problem would evaporate. You could do that with a team of 5 people. But you would need the will to do it, which is the part I think is lacking.
You'd think given the amount of time he's spent working on Coronavirus that he might see the distinction between treating the symptoms and treating the cause, so to speak.
I've yet to see anyone provide any kind of evidence that suggests censorship changes people's minds about conspiracies - if anything it seems to be doing exactly the opposite.
He is thinking about the root cause, but it's at a higher level. We don't have to change people's minds about conspiracies if they are never exposed to those false conspiracies in the first place. This also applies to other corrupting influences like hate and bigotry.
Very few people are actively seeking to become radicalized into some fringe movement. It instead happens more passively or through active recruitment by another member. Negative content is normalized when it appears on a platform intertwined with mainstream content. People stop viewing it as fringe and it becomes easier for people to be passively radicalized or recruited. When that content is only available on sites dedicated to that content, people aren't going to stumble upon it without recognizing what it truly is. There are plenty of stories about people being accidentally radicalized by what they see on Facebook, YouTube, Reddit, etc. You aren't going to accidentally be radicalized by visiting Storm Front.
I'm not sure there is evidence for or against. The argument I would make is not about censorship but about preventing people from spreading lies. Even in the US, where people hold free speech as some amazingly virtuous thing, there are still curbs on speech, and what I'm saying is these limitations can be enforced in WhatsApp. Maybe you think that is censorship, maybe you don't.
Also on that point, the freedom of speech in the US is strongly correlated with conspiracy theories, if only because 1) you can freely spread malicious lies and 2) it can be in a lot of people's interests to do so. I think it's always been in people's interest to spread lies, but I think recently in the very partisan climate in the US the media (and I'm including social media there) have happened upon the discovery that spreading this misinformation is actually very very good for business. It generates outrage in the opposite camp and keeps one side of the divide entertained.
> The argument I would make is not about censorship but preventing people from spreading lies.
Limiting speech is by definition censorship. There's no value judgement required as to the quality or veracity of that speech to determine whether or not restricting it counts as censorship.
> the freedom of speech in the US is strongly correlated with conspiracy theories
...and water is highly correlated with drowning. We don't say 'water bad, less water'. Just because one thing is a prerequisite for something else doesn't mean it is responsible for that other thing. We have to look to, like you point out, the political climate, as well as the lack of trust in institutions, the education system, and many other factors. Taking a complex issue around how information is shared at an unprecedented speed and scale and saying "just ban it" is, I think, a shallow assessment.
But discouraging the spread of a message at each spreader, for example by letting the spreader know that the message has been tagged as misleading by some large number of people before they pass it on, is not censorship.
That may limit the rate, intensity, and real-world effect of some kinds of information, without fundamentally limiting the right to free speech.
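A minimal sketch of what that could look like at the point of forwarding, assuming a hypothetical crowd-tagging service; `get_tag_count`, `ask_user`, and the threshold are all invented for illustration, not any real platform API:

```python
# Hypothetical sketch only: warn a would-be forwarder when a message has
# been tagged as misleading by many other users. get_tag_count() and
# ask_user() stand in for whatever the client and its backend provide.

TAG_WARN_THRESHOLD = 100  # warn once this many users have tagged it

def confirm_forward(message_text, get_tag_count, ask_user):
    """Return True if the message should actually be forwarded."""
    tags = get_tag_count(message_text)
    if tags < TAG_WARN_THRESHOLD:
        return True  # nothing notable; forward silently
    # Friction, not a block: the message is still sent if the user insists,
    # so this limits spread velocity without limiting the right to speak.
    return ask_user(
        f"{tags} users have tagged this message as misleading. Forward anyway?"
    )
```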
>censorship but preventing people from spreading lies. Even in the US where people hold free speech as some amazingly virtuous thing there are still curbs on speech
>I'm saying is these limitations can be enforced in WhatsApp.
1) Are you speaking of constitutional curbs on speech (because there aren't many)? Or are you talking about curbing speech that is believed to yield negative outcomes?
2) How, precisely, could these be enforced? I ask because automated moderation at scale is impossible to do responsibly and consistently. This will not change anytime soon.
3) Gates seems to be demonizing encryption. Few options can logically follow that position. Either Gates advocates replacing beneficial (to everyone) encryption with vulnerable (to everyone) encryption or he advocates banning encryption outright. It's difficult to see how either of these outcomes would benefit anyone (other than authoritarian governments and similarly repressive interests).
Even if Gates got his way on #3 (make everyone more vulnerable), #2 remains technically impossible.
Cancel culture should probably be the answer to #2, though it's debatable whether it even scratches the surface of achieving it. It seems cancel culture is not interested in people being dishonest, or in spreading misinformation intentionally and maliciously; it's more concerned with enforcing new social mores against dredged-up old statements/actions. That could be a good thing, in and of itself, but it doesn't help solve the spread of misinformation and conspiracy theories.
Well, for starters (given two-way communication) you can ask them for the sources of their claims, or at least demand that they defend the logic behind it. Then you can take it from there, and unravel the fallacies for them. Or if you want to be more polite, then carefully point the fallacies out for them and ask what they think about it themselves, when looking more closely at it. This would be the pedagogic approach, for those of you who have the patience.
And if it all falls apart (like it sometimes does when people are faced with their own failure), then you can at least ask them to give arguments instead of ad homs. I always try to be polite the first time around, but if they double down, I let them have it. But really, if it ever falls that low, it usually means that you already won, and so you don't really need to bother any further with the discussion.
My thinking is that most people who read such discussions can think for themselves, and so giving good arguments, unraveling faulty logic, and showing the truth, will always let truth and logic prevail in the end.
You will usually never get a person to admit that he's wrong anyway, at least not to your face, so don't even worry about it! But people do change their minds about things. It usually happens in private, especially if they just facepalmed right into their own flawed logic. So if they do, never gloat; pretend like nothing happened, and instead commend them for telling the truth later on.
Perhaps I'm naïve, but I enjoy staying positive like that. :)
Mere faceless fake news and conspiracy theories, however, simply need highlighting and debunking. There are several sites that specialize in that already, with varying success. It's not perfect, but IMHO it's preferable to outright censorship. Because who can be the final arbiter of truth anyway...
> It would not be hard for WhatsApp to prevent people from spreading bogus videos
The problem is there’s no agreed-upon definition of what’s “bogus”. A while ago someone commented here with a list of statements ranging from obviously false to obviously true to show how hard the problem is; I wish I could find it.
Several commenters point out end-to-end encryption would prevent filtering or tagging messages.
But that's not true. The message analysis could be done at either endpoint without violating privacy.
A message could be tagged (or removed) before you send/forward it, or after you receive it, with a note like: "The central message of this comment has been tagged as 'probably a hoax' by hoaxtracker.com; check out this CDC notice <here> to learn more."
<here> does not need to be a URL which reveals much other than your general interest in the subject. But if that seems too revealing, it could already be available as part of the endpoint's filtering data and readable locally.
Lots of people forward (retweet) false or misleading information, or write a little something before resharing it, without realising what they're doing. I would not be surprised if seeing those tags, rarely enough that they stand out, before they send the message would cause some people to hesitate and check/think a bit more before sending. Maybe rephrase their attached comment into a question rather than confident outrage.
Technically this is not much different from privacy-preserving spam filtering.
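For the curious, a rough sketch of how a purely local check could work, assuming the client periodically syncs a set of hashes of known hoax messages. hoaxtracker.com is the invented example from above; the function names and data layout are mine, not any real service's:

```python
import hashlib

def normalize(text):
    """Cheap canonicalisation so trivial edits don't defeat the lookup."""
    return " ".join(text.lower().split())

def hoax_note(text, known_hoax_hashes):
    """Return an advisory note if the message matches a known hoax, else None.

    The hash set is synced in the background and queried locally, so the
    message itself never leaves the device -- the same model as
    privacy-preserving spam or safe-browsing filters.
    """
    digest = hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()
    if digest in known_hoax_hashes:
        return ("This message has been tagged 'probably a hoax' by "
                "hoaxtracker.com; see the locally stored notice for details.")
    return None

# On send: run hoax_note() on the plaintext, show any note to the sender,
# then encrypt and transmit as usual. On receive: decrypt first, then run
# the same check before displaying the message.
```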
Yes, they can analyze it in the app on user devices before it is encrypted and transmitted. The app on the user's device needs access to the clear text in order to encrypt the message. Same on the receiving end: the app on the user's device can analyze the message once it has been decrypted.
Wow, didn't I read something similar in the last couple of days about on-device machine learning being better than in-the-cloud machine learning, and how Apple got that right? That's where this may also be going, then, if we follow your train of thought.
On device is where you want it if it's going to analyse really private data, or something effectively your own (such as homomorphic encryption, or a link to your own computers elsewhere).
You're likely to feel so much happier, freer and easier sharing your most personal life datastream with an AI assistant, if you can be sure its most intimate analysis is just between the two of you.
Coincidence or not, the dystopian AIs are somewhere in the cloud and work for someone else, while the utopian AIs are intimately personal to each user and work just for the user.
Sure why not? As long as no data ever leaves the device unencrypted and the encrypted data can only be decrypted by the client at the other end. Of course you'd probably have to take the app's word for it that that's actually what it's doing if you don't have the source, but that's no different from current E2E encryption offerings from WhatsApp etc.
The part I'm not sure about is whether the on-device certification that the message is "clean" couldn't be (easily) spoofed. But it would probably help curb distribution of illegal material anyway.
No, obviously not. The mental gymnastics involved here are impressive: the point of E2E encryption is to stop the service provider seeing or tampering with your messages. If they do that anyway it doesn't really matter how it's implemented. They could also just use a broken random number generator, or many other ways to implement the policies whilst still having encryption code in the product. It's the end result that matters, not the precise means of implementing it.
Phew, agreed. I mean of course the company "can" read the message. If it does, I would love to see that shown by the app upfront, so I can avoid using it.
Analysis happens on either end, not the network or servers. Of course if both ends are "cracked" this doesn't work, but the goal is to stop mass spread of disinformation. Most people won't modify their client.
> But that's not true. The message analysis could be done at either endpoint without violating privacy.
This is stupid. Lots of naked baby photos get sent in my culture (an Eastern European country) in an entirely harmless way, i.e. from parents to the kids' grandparents or even to the parents' close friends (especially from the mother to her friends). Your supposed filter will most probably block that social sharing (because it will see photos of naked children => very, very bad), not realising the above-mentioned context.
Truth is truth. Saying something is possible isn't the same as advocating that it be done, and it's useful to point out something is possible when people at first seem to think it is not.
Also, I was talking about messages, not baby photos, and with regard to misleading or false information, hoaxes etc that cause people to behave more dangerously to others during a pandemic. Saving lives, that sort of thing.
If it's giving the user advice that others have judged what they are retweeting to be a hoax or bad medical advice, that's not blocking, it's providing context. If they don't like it, they should be able to dial it down.
With regard to baby photos, if a network starts blocking those due to poor filtering, I would hope people switch over to another network that lets them share the photos.
> Your supposed filter will most probably block
I wasn't talking about filtering particularly, the emphasis was on providing a note to the user. Much like when Twitter attached a note to Trump's tweets.
In any case, the analysis I had in mind is not "skin tone filters" and that sort of nonsense. It's not meant to be thought police working for someone else. It's meant to advise the users themselves to think again about some content. At least at the current level of sophistication, that would be "we recognise this particular message or photo".
There are better and worse ways to implement it of course.
Something that reveals no information to others is not a privacy violation by definition.
(Although, something that blocks communication (which I don't think I agree with anyway) based on local analysis is an autonomy violation. But not a privacy violation.)
Something that tells you when you've just received a well documented hoax is not malware, it's probably useful, and most people will probably keep it switched on if the quality is consistently good.
By your logic, spam filtering (outgoing and incoming) is also malware, and a privacy violation. (Even though it protects people against malware, and indirectly protects privacy.)
Do you believe spam filtering is bad? I doubt it.
Yes, people ask for certain kinds of analysis-based message blocking all the time. We begged and pleaded for better spam filtering 20 years ago because the vast majority of messages were pure spam and it had made email difficult to use. It's a major reason people switched to Gmail: other providers' spam filters weren't good enough.
I think the key feature most people would want in any kind of alerting, tagging or filtering is that it does what they want, rather than what the enemy wants, as it were. As people's preferences differ, that can only happen if it's configurable by them rather than blanket imposed. Things like ad blockers work this way - you can change the defaults if you want - and people seem to like those.
The main purpose of E2EE communication between willing participants is that the content of the communication is not checked, inspected, scanned, questioned, sampled, matched, filtered, modified, blocked, altered, or otherwise interfered with in any way whatsoever except that which is explicitly configured and consented to by a party to the conversation. (eg. anti-virus, anti-spam, group membership, etc)
That means no control of communication by the endpoint software, either directly or indirectly, between willing senders and recipients regardless of justification, and especially on the basis of whether or not data sent from a willing sender to a willing recipient represents the "truth". The endpoint software vendor has no standing to judge that unless it's an opt-in anti-spam type feature.
It would indeed be very hard, if WhatsApp's own claim that messages are encrypted end-to-end is true. Either way (and I disagree that this should be a separate discussion) all methods of global speech control are eventually used for evil.
So first off, I think "WhatsApp" is being used in two ways here: 1) WhatsApp the company (or its central servers) and 2) the app on someone's phone. When people say end-to-end they mean from the WhatsApp app on one person's phone to the WhatsApp app on another person's phone, with no way for WhatsApp the company to read the message.
So what I'm saying is it's easy for the app on people's phones to filter or block the sharing of misinformation, because the app can see the data just as your eyes, using the app, can see the data. I'd also be fairly surprised if WhatsApp the app re-uploaded every single shared video; I'd say it's more likely they share a link or hash and encrypt that in the message (but that is just a guess).
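To make that guess concrete (and it is only a guess), forwarding could send an encrypted reference to the video rather than the video itself. Every name here is invented for illustration; nothing reflects WhatsApp's actual protocol:

```python
import hashlib
import json

def make_forward_payload(video_bytes, storage_url):
    """Speculative sketch: forward a video as an encrypted reference.

    The reference (content hash plus storage pointer) is what gets
    end-to-end encrypted, so the server still can't read it. But the
    client handled the plaintext video, which is why on-device checks
    remain possible despite E2E encryption.
    """
    reference = {
        "type": "video_ref",
        "sha256": hashlib.sha256(video_bytes).hexdigest(),
        "url": storage_url,
    }
    return json.dumps(reference).encode("utf-8")  # plaintext to be encrypted
```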
To your second point.
All methods of global speech control are eventually used for evil; all methods of communication are eventually used for evil; all methods of food preparation are eventually used for evil. There are some flaws in that argument.
"It would be hard for WhatsApp to prevent people from spreading bogus videos, because they have access to the unencrypted information before and after sending."
Facebook/WhatsApp claims WhatsApp messages are "end-to-end" encrypted. For example, here
They could put a tiny truth ministry module in the app and remain e2e:
"I'll be on my way home in five minutes!" [send]
"Sorry, our algorithms have detected that you were about to spread misinformation. Please contact support to reactivate your account. Premium support is available to subscribers of our membership programme"
I'm actually only half joking: while some local library code surely cannot tell truth from lies, that problem remains just as unsolved for arbitrary amounts of central effort. There's a reason truth ministries are a Bad Idea.
> And when you have [posts] encrypted, there is no way to know what it is. I personally believe government should not allow those types of lies or fraud or child pornography [to be hidden with encryption like WhatsApp or Facebook Messenger].
Note the []? Was it added in by Medium? By Gates, as an edit after a read-and-OK-to-release? Did Gates have editorial input?
Regardless, I believe [] means 'edited afterwards for clarity'. But by whom?
Yet in this case, there was no hatred of encryption, just picked quotes in which he suggested that in one case, if you have the means, the government should be aided, e.g. in catching a murderer.
There was another article on Hacker News where many lamented how much the media just spins, takes quotes out of context, and basically does whatever it wants. I wonder how much of that we are seeing here?
(Note, Gates could very well be for back-doored encryption, but my point is, I don't think his position is clear from this Medium article, where that stance was in [] and added by someone afterwards...)
The POTUS said CNN is fake news, hundreds of times. So, as you say, people should be prevented from sharing and reading it?! There's no problem with that?!
The very first item in the US Bill of Rights isn't important?