Hacker News

The important aspect of the NSA's interpretation is that algorithms can look at and process the data, creating metadata or synopsis information from it.

Having an intelligence system ingest this metadata and synopsis is not considered "collection".

Essentially, if it can be automated, it isn't collection. Only when a human gets pulled into the loop to look at data is it "collected". However, a human could be shown a synopsis of, or an inference about, an American target and this could still not count as collection, since the summary being viewed isn't considered the person's private records.
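To make the distinction concrete, here is a purely illustrative sketch: an automated pipeline reduces a raw record to metadata and derived facts, and only that synopsis is ever shown to a human. All field names and the keyword check are hypothetical, not any agency's actual system.

```python
# Hypothetical sketch: automated synopsis generation. The raw body is
# processed by the algorithm but never included in what a human sees.

def synopsize(message):
    """Reduce a raw message record to a metadata/synopsis dict.

    The message body is read to derive facts (length, keyword hits)
    but is deliberately not returned.
    """
    return {
        "sender": message["sender"],
        "recipient": message["recipient"],
        "timestamp": message["timestamp"],
        "length": len(message["body"]),  # derived fact about the content
        "mentions_keyword": "meeting" in message["body"].lower(),
    }
```

Under the interpretation described above, an analyst viewing only the returned dict has never "collected" the underlying record, even though software read it end to end.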

Basically a loophole in a loophole. I'll be happy to keep databases of, and run software over, our national security records. I won't collect any of it, though. I won't even look at it. I'll just get summaries of the information contained in it from my algorithms - and if I want to look at a specific document I'll punch a rubber stamp on it first.




And what will you need to show to obtain said rubber stamp? This is not secret, either:

Search for "how FISA works" here: http://www.belfercenter.org/sites/default/files/legacy/files...


Curiously, a fair amount of genetic research is done this way: the genetic info is PHI (protected health information), but the covered entity holds the data and the compute capacity. The researcher just pushes an algorithm to the cluster and gets aggregate results back.


That's the idea, but in practice GA4GH is still working on the APIs and protocols to make this work in an automated and containerised fashion for modern genetic data. We do often send the algorithm to the data, but mostly by way of granting collaborators an account and having them SSH into a remote cluster, because copying 120-terabyte datasets is no fun.
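The "send the algorithm to the data" pattern described above can be sketched in a few lines. This is an assumption-laden toy, not a GA4GH API: the function name, record fields, and suppression threshold are all illustrative. The data holder runs the aggregation on its own cluster and returns only group-level statistics, suppressing small groups to reduce re-identification risk.

```python
# Hypothetical sketch: the researcher's aggregation runs where the data
# lives; only summary statistics leave the data holder's cluster.

from statistics import mean

K_ANON_THRESHOLD = 5  # illustrative: suppress groups smaller than 5

def run_remote_aggregate(records, group_key, value_key):
    """Group records by group_key; return per-group count and mean of
    value_key. Groups below the threshold are withheld entirely."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec[value_key])
    results = {}
    for key, values in groups.items():
        if len(values) < K_ANON_THRESHOLD:
            continue  # too few individuals: withhold the aggregate
        results[key] = {"n": len(values), "mean": mean(values)}
    return results  # aggregates only; no row-level records returned
```

In practice the hard parts are exactly what GA4GH is standardising: authenticating the pushed algorithm, containerising it, and auditing what leaves the cluster.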




