How likely is the NSA PRISM program to catch a terrorist? (bayesianbiologist.com)
181 points by johndcook on June 7, 2013 | 61 comments



>How likely is the NSA PRISM program to catch a terrorist?

A better question, which involves historical understanding of the issue rather than going straight to the maths and taking the framing for granted:

How likely is it that the NSA PRISM program even cares about catching terrorists?

Sure, they wouldn't say no to catching one. It would even help justify the program in the eyes of the public.

But, thing is, similar programs have been going on for centuries in all governments. They care only about two things: spying on third countries (enemies, competitors, etc; things that affect commerce and the military) and spying on their own citizens (population control, etc). Those programs have roots aeons before 9/11, and aeons before "terrorism" was any real concern of the government.


The NSA isn't stupid. They know that screening for absurdly low-incidence events is an exercise in futility. I draw the same conclusion as you because of this: they don't care.

...well, not about terrorists anyway. If you are interested in flagging people that make up a much larger portion of the population then such systems are very practical. A P(flag|pot head)=.99 and P(flag|sober)=.01 would make for a distressingly effective system for example.
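To see why the base rate dominates here, a minimal Bayes' rule sketch (the 0.99/0.01 rates are the hypotheticals above; the ~10% base rate for the "pot head" case is my own illustrative assumption):

```python
def posterior(prior, p_flag_given_target=0.99, p_flag_given_other=0.01):
    """P(target | flagged), via Bayes' rule."""
    hit = p_flag_given_target * prior
    return hit / (hit + p_flag_given_other * (1 - prior))

# Common behavior (assumed ~10% base rate): a flag is strong evidence.
print(posterior(0.10))   # ~0.917

# Terrorism (~1-in-a-million base rate): a flag is nearly meaningless.
print(posterior(1e-6))   # ~0.0001, i.e. about 1 in 10,100
```

Same test quality, a ~92% hit rate in one case and ~0.01% in the other; nothing changed but the prior.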


I don't think it's even really about flagging anyone. The objective is more likely to be good old-fashioned careerism than anything else. These guys are in the business of "screening for terrorists" (apparently), so they are unlikely to want to see fewer terrorists in the world. That would put them out of a job. No, much better to expand the definition of terrorist, and expand surveillance so they have plenty of work to keep busy with.


But at least they can say they are doing something. What they care about is saying that they care.

It's like the satirical 2008 SwiftKids for Truth ad allegedly in support of Giuliani's presidential campaign:

"It's time someone stood up and said 9/11... a lot"


This shouldn't even be a question. Are we going to ask how likely it is that torture will work, too?

We shouldn't allow broad surveillance of everything you ever do or say online, and we shouldn't accept torture as a means of interrogation either. This is simply deciding about the kind of society we want to be.

Terrorists can be dealt with in other ways, too, and one of them would be trying to avoid creating blowback and radicalizing future generations of terrorists because of current actions.


Maybe this makes me a bad person, but I'd ask how effective torture would be. If torturing, say, five people a year for one day each prevented all deaths from war and terrorism for the whole year? I'd make that deal any day.

To me the problem with torture isn't that it's "bad". It's that it doesn't serve a useful purpose and serves many counter-purposes. If it wasn't so clearly useless, I'd have to think a lot more about whether I'd support it.


> If torturing, say, five people a year for one day each, prevented all deaths from war and terrorism for the whole year? Make that deal any day.

Ok, we'll start with you.

This had better work!


I'm not really sure what you're getting at. I acknowledged efficacy in my comment. Individual reluctance to be harmed has almost nothing to do with the calculation of whether it's overall a better world with the policy or without.


It's only a better world from the perspective of those that are not being tortured.

The fact that by massive voting imbalance those few being tortured would have 0 chance of stopping it does not begin to describe the injustices inherent in your proposed scheme.

Ethics and principles are hard to reconcile with statistics and calculations; by suggesting that there is a greater good for which you will discard your principles in a heartbeat, you are devaluing everything.

The question then becomes where do you draw the line?

A few individuals? A few tens of individuals? A few hundred? A few thousand? As long as it is one person less than would be killed otherwise?

Best not to make that first step and stick to 'torture is bad, no matter what the upside'.


I don't really believe in ethics or principles. Those are just words people use to dress up their preferences about how the world should work.

"Where do you draw the line? Therefore, zero" is a standard approach from the nays in almost any policy debate, but it is easily defeated. I draw the line somewhere. Not sure where. I'd have to figure it out if I ever heard evidence that convinced me torture is useful.


If torture were useful and ethics don't exist, then why would you draw the line anywhere other than "one less victim"?


Because of a thing called the discount rate. For future or uncertain outcomes, one rationally discounts the amount they are willing to pay right now, with 100% certainty, against the future and/or uncertain outcome.

There's a possibility (IMO, a certainty) that torture isn't 100% effective, and there's some number greater than 100 of deaths that I'd need to avoid a year from now to torture 100 people today. (some of those 100 might die anyway before the terror event; we might thwart the terror event via some other means; the terror threat may itself disappear [death of those terrorists or other means])

It's just another case of "pay me $1MM today and I promise to pay you back exactly $1MM in 20 years"...
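That trade can be made concrete with a standard present-value calculation (the 5%/year discount rate here is purely an illustrative assumption):

```python
def present_value(future_value, rate, years):
    """Discount a future payoff back to today's terms."""
    return future_value / (1 + rate) ** years

# $1MM promised in 20 years, at an assumed 5%/year discount rate,
# is worth only ~$377K today:
print(round(present_value(1_000_000, 0.05, 20)))
```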


One less victim of what? Torture, or the thing torture is preventing? Anyway, the answer is trivial: you'd decide the same way you decide anything else with a marginal cost and a marginal benefit. If the country has to torture one person for a day to save a thousand people fifty quality-adjusted life-years each, it seems like an obvious win. OTOH, if it's five million people for one QALY, we've wasted far more years in torture than we've saved, and we'd have drawn the line somewhere lower.


So you're saying you have no character. Thanks for admitting this on a job site...


Oh, believe me, moral expressivists are very much employable as long as their preferences align with society's. This tends to be the case because of shared neurophysiology and socialization.


How is your argument different from "some innocent people might get lifetime jail sentences, so we must give out no long sentences"?


Well, personally I consider jail sentences a form of torture and would like to see the prison system abolished. So no inconsistency for me. :)


> Maybe this makes me a bad person, but I'd ask how effective torture would be.

Effective at getting information? Probably. As torture is primarily used for interrogation, torturers would stop torturing if it didn't work.

But I guess it's a bit like bombing a village where you saw Talibans two hours ago. Sometimes you will kill the Talibans. Sometimes you will kill Talibans and civilians. Many times, you'll just kill civilians. For instance, during the Algerian War of Independence, French intelligence fed the FLN disinformation, making them believe they had been infiltrated (operation KJ-27, January 1958). The FLN commander in the sector tortured the suspects, who promptly gave the names of people they knew, who in turn were tortured, and so on. This resulted in the elimination of (at least) hundreds of combatants, most of whom were actually loyal to the FLN.


"torturers would stop torturing if it didn't work"

I don't know if that follows

that argument can be used to defend the logic of any illogical act

specifically with torture, there are cultural pressures to "do everything possible" that override considerations of whether it actually works


> that argument can be used to defend the logic of any illogical act

You can find historical examples of torture working for the goal of acquiring information. For instance, the Battle of Algiers is well documented. Any asymmetrical conflict has involved torture.

Obviously, any perceived benefit on the ground is counterbalanced by the loss of moral high ground, worsened perception by the local population and occasionally, international condemnation.


The original purpose of torture was to get people to admit to things the government needed them to admit to. For example, the Templars were taken down by the king of France by torturing them until they "admitted" committing heresy. That allowed him to execute them, and avoid paying the huge sum of money they were owed.

Getting information, I'd say not very effective. The issue is, what if the person really doesn't know? How do you tell the difference? You listed some examples, but how did you confirm that this information was true and not just bullshit these people were saying to make the torture stop? In the end, weren't the "loyalists" just killed anyway? Maybe they weren't actually loyal so they were simply killed and logged as loyalists. Since the main "evidence" is testimony under extreme duress there's no way to know for sure.


That's the point. Torture can cause collateral damage. These people were not guilty of what they were accused of. But at the same time, you can find out a number of examples of torture giving results.


But again, for all the examples you can name of "giving results" how many can you prove gave accurate results? And if you can prove the torture produced accurate results that means you have a reliable outside verification mechanism, at which point the question becomes: why didn't you just use that mechanism instead of torture?


Following the arrest, torture (often followed by extra-judicial executions) of numerous FLN members in Algiers and civilian collaborators, the FLN organization in the area was destroyed, and its bombing campaign ended.


1. You are making a wrong assumption. Torture won't prevent terrorism. Instead, it'll just make the prisoner's friends more aggressive and eager for revenge.

2. > To me the problem with torture isn't that it's "bad"

Hmmm, have you tried losing an eye? That should be quite bad.


Ugh. What does your comment add to the conversation? Half of it is a reiteration of my comment re efficacy. The other half is a willful misreading of what I meant when I said "bad". It's like you are so eager to get in there and disagree with me that you neither read my comment to the end nor stopped to think even for a second about whether your response was rational.


I agree with your main point in this rebuttal, but I think you should have worded it in a way that would educate him as to why his interpretation was egregious without coming out so bitterly. It would raise the odds of your message actually being heeded. I understand that when people respond ignorantly, it's incredibly frustrating at times. However, they're more likely to change if you educate patiently instead of reprimanding.


But what is the point of your thought experiment? What I infer is that you don't support torture because it is not effective. You agree that it is bad, but that is not relevant to whether it should be used. What I infer is that you are an advocate of a crude form of utilitarianism. Some other morally bad activity that is actually effective in saving more lives than it destroys is something you can get behind.

This type of reasoning is used in all sorts of military actions, like drone strikes.

Among the many problems with it: in practice, it never works out that even a simple calculation of whichever action saves net lives is what gets pursued. Instead, policy makers operate with wildly different valuations of individual lives. Roughly, the ratio of the value of one American life to one foreign national of a Muslim persuasion seems to be about 100:1. This leads to very steep slippery slopes and resulting atrocities.

That's just one problem with your views, which are thoughtless and crude.


The point of my thought experiment is that the OP was saying there are some things we don't even have to think about, like torture and the surveillance we are discussing. For the surveillance, it seems much more reasonable to believe it might actually help.


I disagree. Using evil tactics is a "code smell" if you will. Even thinking about using such tactics for one second is a red flag that something has gone wrong. Why would you need to spy on your own citizens? Because they're upset. Would it really be better to oppress them than figure out why they're upset and address it?


The problem with this type of thinking is that it's focused on the small picture.

If you want to "solve" terrorism, first of all you stop all the bullying, controlling, exploitative and, to summarize, trouble-making foreign policies.

In spite of what the USA agenda and propaganda dictates, terrorists attack for a logical reason (of course, this doesn't mean a reasonable one), not because they wake up one morning with the intention of bombing the population of a given country.

To answer your question more directly though: torture and legally grey practices are demonstrably ineffective, because they are already in use. There are plenty of documentaries about this, and by the way, a link about the Guantanamo prison practices was posted here on HN a short time ago.


How far will you go with this? What if we discovered that raping the parent/child of the suspect in front of them could save even more lives?

I mean, you make a good point that torture doesn't work (well, it actually does work, but only for its original purpose: to get people to claim they did something that we need them to claim they did, even if they didn't), but personally I don't want to know whether it works or not, because it's not acceptable no matter how effective it is.


Though I understand your point, I don't share it. There are things that are plain bad. For example, applying the death penalty to every criminal would be 'good' for society: fewer criminals, and you'd waste less money on jails. But we all know that killing people like that is not morally good.


Before we begin this scheme, you should have yourself and/or your family waterboarded for a few hours and see if your opinion changes.

Edit: I mean, even if it were useful, is it acceptable? Probably not.


I'm not sure what my emotional reluctance to be harmed has to do with whether this (in the hypothetical world I've constructed) is a net boon for society.


> Edit: I mean, even if it were useful, is it acceptable? Probably not.

Which would be a quite interesting level of emotional hypocrisy, given that we tolerate far greater damage to a far larger number of individuals on a regular basis in the name of security.

Consider the number of civilian dead from the Iraq and Afghanistan wars, for example.


Torture results in more terrorism. You would not be stopping it using these methods. Instead, you would be engaged in it.


But consider that "it's dystopian and it doesn't even work" makes a strictly stronger argument. Pointing out that it won't work has better results than "those who would trade freedom for security..." on stodgy old republicans like my father (anecdata sample size 1).


An interesting application of Bayesian reasoning to the problem of screening for low-base-rate phenomena. (And terrorist criminal activity is a lower-base-rate phenomenon in the United States, so far, than prostate cancer or other dangers that are screened for.) The mathematics, of course, is exquisitely sensitive to exactly how sensitive and specific a screening program is. (By the way, so far news reports are saying that the NSA program that is all over the front page of Hacker News, deservedly so in my opinion, is not a screening program but a data collection program, with analysis of the collected data triggered only by other kinds of law enforcement evidence-gathering. We'll see what further reporting says about that issue.)


There is only one reason for collecting this data: to analyze it. And the biggest obstacle is the gathering of that data.


Well, I guess technically there is this pesky thing getting in the way called the Constitution. But they re-defined some terms, moved some things around, and decided that a 'search' probably occurs only when a human looks at the data (LOL, the founding fathers didn't know about databases ;-). So it is not just a technical challenge; given their budgets, they can solve the technical side, it seems.

Imagine also (and I've mentioned this before) that there is no statute of limitations on historical data in case of a warrant/NSL, so if a warrant is ever issued (and we know they are never denied) they can get _all_ your data from years and years back. Emails you sent 10 years ago to secret lovers, jokes about punching the president in the face when you were 13 years old, stuff like that. Make no mistake: they will find stuff to blackmail, jail, or scare anyone into anything if they dig hard and long enough into everyone's past.

I suspect more of the people involved in this will decide to do what Manning did. They see the waste, the lies, the wars on terror, the wars on drugs. The NSA and other XYZ agencies love to hire patriotic people, and a fraction of a percent of them maybe still are, and will say "fuck it, everyone needs to know about this".


Many of us with some (Bayesian) statistics/game-theory background are familiar with these kinds of puzzles. There are many (many) such anecdotes littered through math textbooks (HIV +/- testing comes to mind) as well as some real-life scenarios (the 1970s case of UC Berkeley graduate admissions and women comes to mind).

HOWEVER, let me just say that two metrics in this little puzzle are simply wrong (or at the very least unfair):

P(+ | bad guy) is simply a LOT larger than 0.99 if you want to play the game correctly. The + comes from a POSITIVE outcome -- that is, a terrorist attack is thwarted and someone is thrown in jail. Some noise in the system (i.e. pizza orders) does NOT skew the positives down. After all, discerning between pizza orders and terrorist activity is part of the algorithmic process -- the automated system may flag both cases, but the buck doesn't stop there (or anywhere close).

P(+ | good guy), again, suffers from the same problems as P(+ | bad guy). Unless there is evidence of 1% of the people being monitored being thrown in jail for terrorism charges, that number is a lot (lot) less than 1%.

There have been plenty of bogus terrorism charges (see Guantanamo) and I think there may be something here. But if we want to play this game correctly, we need to be careful. To do this simulation, we need numbers that we will simply never have access to: e.g. how many terrorist threats did PRISM avert?


You are conflating the chance of being singled out by PRISM with the final outcome of the process involving its use, and the benefits vs. damage done.

He's not measuring, or even saying anything about, the latter.

The point he is making is that to get any benefits out of PRISM, no matter how you value those benefits, you'll need to do a massive amount of further investigation to be able to identify an actual terrorist (unless you were to take drastic action, e.g. imprisoning everyone that gets "reported" - but the high ratio of reports to actual bad guys also means that becomes "impossible").


This is an incorrect question. Anti-terrorism is not the only thing the NSA is interested in.

Better question: How likely is the NSA PRISM program to provide intelligence useful to the NSA's mission?


If you're going to pose that question, you better pose the other question: is NSA's mission legal/moral?


I wouldn't conflate legal with moral. Those are two separate questions.


I was trying to save space. That was actually two separate questions.


This is a very well known problem, but one that no politician on this planet seems to understand.

Of course, as others have pointed out, the problem is not the likelihood of getting a bad guy. The problem is that the means are unacceptable, no matter the goal.


Is that what it's for? I thought it'd be a big retroactive database; identify someone we want to know about now, and then look back through the last year's records to see what she's been doing.


I wonder if the intense scrutiny these programs are coming under increases the likelihood of catching the next attempted terrorist attack by using them. (if you catch my cynical drift)


So? 1/10,100 isn't bad at all. Suppose you have many independent tests, each creating watchlists. Suddenly you have the intersection of individuals hitting each of these, and the number of "interesting" individuals narrows. This is not a univariate problem; they're likely correlating everything to everything, and have adaptive snooping methods based on how high you are ranked a risk. Once you're high enough, field offices know about you and will keep a closer eye. It's really not a needle in a haystack (and as a privacy nerd, it pains me to say this, but I bet it is working).
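The intersection idea can be sketched with Bayes: for independent tests, the likelihood ratios multiply. Using the thread's assumed rates (0.99 true-positive, 0.01 false-positive) and a one-in-a-million prior:

```python
def posterior_after_k_flags(prior, k, tpr=0.99, fpr=0.01):
    """Posterior after being flagged by k independent tests:
    prior odds times (tpr/fpr)^k, converted back to a probability."""
    odds = (prior / (1 - prior)) * (tpr / fpr) ** k
    return odds / (1 + odds)

for k in (1, 2, 3):
    print(k, posterior_after_k_flags(1e-6, k))
# 1 flag  -> ~0.0001 (the 1/10,100 figure)
# 2 flags -> ~0.01
# 3 flags -> ~0.49: three independent hits make a coin-flip suspect
```

The catch, of course, is the independence assumption: real surveillance signals about the same person are heavily correlated.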


I think what his data shows is that out of 10,000 terrorists, it might help catch one.

Remember they've been doing this for more than a decade, and it still didn't help them with the Boston bombers. They even had the guy in custody before. Still didn't help. The abuse of surveillance power is not effective, even if it was legal/constitutional.


No, his calculation shows P(baddy|flag)=1/10,000. In other words, given that the system flags somebody, there is only a 1/10,000 chance that this person was correctly flagged.

This is given the absurdly high P(flag|baddy) of .99 (a terrorist raises a flag 99% of the time... good luck with that) and a P(flag|innocent person) of .01 (meaning that in the US an absolutely staggering 3.19 million people would be wrongfully flagged by this system). Then there is P(baddy)=1/1,000,000, which implies that we believe there are 300+ terrorists currently in the US that desperately need to be caught. What the hell have those 300+ terrorists been doing in the past few years? Wishing they could only find guns in the great US of A? 1/10,000,000 is a far more reasonable number, though even 1/100,000,000 is probably being generous.

1/10,000 is absolutely abysmal. When you start punching in more realistic numbers though the result becomes much worse and you rapidly approach a situation where you would have more luck flagging random US citizens as uncaught criminals than you will with using this system to identify terrorists. If you permit those sort of absurd odds, then even hiring "forensic psychics" to investigate murders in small towns starts to make sense...

This sort of screening flat out does not work for something as rare as terrorists. It only works when you start bumping your P(baddy) way up (dropping P(flag|innocent person) is not practical, and raising P(flag|baddy) to even more absurdly high levels only gets you marginal improvements). Replace 'terrorist' with 'person of interest to the DEA' or 'tax cheat' and suddenly this sort of screening starts to look a hell of a lot more practical.... hmm....
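A quick sweep over the priors proposed above shows how fast the posterior collapses (same hypothetical 0.99/0.01 rates as the article):

```python
def p_baddy_given_flag(p_baddy, p_flag_baddy=0.99, p_flag_innocent=0.01):
    """P(baddy | flag) from the article's three inputs."""
    hit = p_flag_baddy * p_baddy
    return hit / (hit + p_flag_innocent * (1 - p_baddy))

for p_baddy in (1e-6, 1e-7, 1e-8):
    flags_per_hit = 1 / p_baddy_given_flag(p_baddy)
    print(f"P(baddy)={p_baddy:g}: one real hit per ~{flags_per_hit:,.0f} flags")
# Each tenfold drop in the prior means roughly ten times as many
# flagged innocents to investigate per actual terrorist.
```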


Wait, was the media talking about PRISM before? We have no idea what effect it had on terrorism. We can't tell with the information we currently have. We don't even know which information is true. Maybe they prevented tons of terrorist attacks, maybe they caught multiple criminals or killers with it, but we won't know, because they never said it was with the help of PRISM.

All we know is that this project cost $20 million, and that they had access to information from a dozen companies, which is easily hundreds of terabytes, probably even petabytes. I doubt $20 million is enough to do anything useful with that information. But then, can we even trust these slides?...


Why on earth would you assume it's working? This is immoral and wrong, it's up to the government to show that it's buying us something (but I'd be against it even then). If they don't show that we should assume it hasn't stopped even one crime.


It's clearly not working. Nothing anyone is doing is working. Look at the cases where the US government actually "catches" a terrorist; they're all clear-cut entrapment cases. If these clowns are so effective, why would they hide the real successes and show us the clearly illegal ones?


It's very simple: there would be mass panic if they showed how successful they've been. If they take down a terrorist plot every 8 months, this means that there'd be a continual cycle of racial profiling and panic if made public. Look how crazy and destructive the internet got after the Boston bombing. If the US government publicized successful terrorist takedowns, do you honestly think it would be any different?

One can't observe (as a US citizen) a lack of successes and estimate whether the program is working or not, because that information would be intentionally hidden. It's an unknown as to whether (1) there are more plots than we know of and (2) the program is working in capturing and preventing these plots, and they remain hidden to us after.


>It's very simple: there would be mass panic if they showed how successful they've been.

We don't know that. But I suspect that if they really were stopping lots of plots, people might question why there are people so desperate to kill us. Nonsense like "they hate us for our freedom" isn't going to hold up long if we are constantly getting attacked.

Having said that, I don't believe this is the case. Based on what the government has shown (entrapment cases) I'm sure they'd blast it out on every station if they managed to stop a real terrorist plot.

>One can't observe (as a US citizen) a lack of successes and estimate whether the program is working or not, because that information would be intentionally hidden.

And one can't observe if my anti-tiger rock is actually protecting me from being killed by tigers. But I'd still call you a moron if you paid billions for my rock and gave up a ton of your freedoms to have it.


Well, it's been running for a while, how many has it caught? Oh, zero? Yea, that's what I expected.


> How likely is the NSA PRISM program to catch a terrorist?

Chechen Pressure Cooker Bombers: 1 NSA: 0


"So you're saying there's a chance..."

Haven't read it, but couldn't resist as it's an obvious conclusion.



