I'm very much against government (and corporate) spying, but as a mental exercise, why was this program a flop?
Assuming it didn't stop one act of terror (as this article assumed/the judge ruled), why?
We know that in the hands of corporations and political machines all of this data can be used with pinpoint accuracy. Is sussing out "who's likely going to commit an act of terror" categorically different than "who's likely to buy a massage from a local spa" or "who's likely to vote for party X"?
In short: too much data, not enough analysts to wade through it. The amount of data ingested from the various fire-hose feeds was (is?) staggering.
The signal-to-noise ratio makes it practically impossible for any nation state to properly go through each and every 'flagged for human analysis' entry. Metadata analysis (who was in contact with whom, and when) is more than sufficient for prosecution and for hoovering up the loose ends (after the fact). Parallel construction neatly masks some of the more interesting stuff.
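To make the signal-to-noise point concrete, here's a back-of-the-envelope sketch. All the figures are invented for illustration (population size, false-positive rate, number of real plotters); the shape of the arithmetic is the point, not the specific numbers:

```python
# Toy base-rate arithmetic -- every figure below is assumed, not real.
population = 300_000_000        # records/people being screened (assumed)
real_plotters = 100             # actual plotters hidden in that population (assumed)
false_positive_rate = 0.001     # 0.1% of innocent records get flagged (assumed)
true_positive_rate = 0.99       # screen catches 99% of real plotters (generous assumption)

flagged_innocent = (population - real_plotters) * false_positive_rate
flagged_guilty = real_plotters * true_positive_rate

total_flagged = flagged_innocent + flagged_guilty
precision = flagged_guilty / total_flagged

print(f"Entries flagged for human analysis: {total_flagged:,.0f}")
print(f"Of which are actual hits:           {flagged_guilty:,.0f}")
print(f"Precision of the flag:              {precision:.4%}")
# ~300,000 flags for ~99 real hits: roughly 1 in 3,000 flagged entries is
# worth an analyst's time, and that's with an implausibly accurate screen.
```

Even with a screen far better than anything realistic, the flag queue is dominated by innocents, which is exactly why the human-analysis bottleneck never clears.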
I am pro-privacy but still keep in touch with people who deal with this sort of stuff, and the latest fad is "ML/AI will solve it for us". Good luck with that, because what do you use as a training data set?
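And on the training-set question, a quick sketch of why the class imbalance is so brutal (again, the numbers are made up for illustration): a model that simply never flags anyone is "more accurate" than almost anything you could train on a handful of confirmed positives.

```python
# Toy illustration of the training-set problem -- figures are invented.
labeled_records = 10_000_000    # records you could plausibly label (assumed)
confirmed_positives = 50        # confirmed bad actors among them (assumed)

# A "model" that predicts 'not a threat' for every single record:
correct = labeled_records - confirmed_positives
accuracy = correct / labeled_records

print(f"Accuracy of always predicting 'harmless': {accuracy:.4%}")  # ~99.9995%
# With 50 positives against 10 million negatives, accuracy is meaningless,
# and there are far too few positive examples to learn any stable pattern --
# let alone one that generalizes to the next plot, which rarely looks like
# the last one.
```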