I don’t know mate, even if there is only the potential for a dangerous crime, we need to keep these criminals off our communication infrastructure until we know for certain. Perhaps a better idea is a public/private partnership between all the big search engines, social media companies and police organizations who want to join and can pay dues.
Information sharing between members of the federation would reduce crime and the spread of violent content. For example, when Google links a visitor to a harmful website, it could ping the local police with basic metadata about that user (they could reuse the code from the GDPR data export). Google could lend its AI to categorize the alerts so local police know to be on the lookout for, e.g., support for “The Big Lie” or local militia groups.
This way, the officers can stay aware of any potential threats to the safety of the children in their town, while keeping the ability to act proportionally and in context. Continuing with the example of a violent Google search: the alert might be a “P0”, but the police live in the same community as the suspect. If they’re not familiar with the data subject yet, they can use the alert to get a warrant for more information from Google. Or maybe they know the alert is a false alarm because the suspect is actually a government worker researching misinformation networks, so they instruct the AI to suppress similar alerts in the future.
We need to be on alert. With the proliferation of encryption, protecting citizens from harmful and increasingly dangerous information has never been more critical.
Would you still hold this opinion if all your personal computing devices were held for multiple weeks based on an arbitrary allegation, and the record of your false arrest were available to all future employers?