Hi everyone. I work on the security team at Facebook. While investigating the claims of this post, we've confirmed that Facebook doesn't use Recorded Future -- an open source aggregator of public data -- to scan any private content. That means we haven't partnered with or directed Recorded Future to scan anyone's message links.
It's hard to tell precisely what's going on based on the amount of information in the post. It's possible that another interaction, including one that could be occurring on the client machine, is consuming the URL and generating this behavior.
We'll update if any new information is discovered.
I have had a lot of difficulty in the past discussing security and privacy concerns with your team - in particular a complaint about the censorship of my linking to leaked Snowden and WikiLeaks documents and PDF scans of Islamic extremist magazines (non-violent/gross content).
Do you have a transparency document anywhere that you can link, so that we can understand what you as a representative can and will contribute to the conversation? Could you also provide some proof that you are a representative of Facebook security?
Hi everyone - the team here at Recorded Future looked into this and dug into our logs to confirm what happened.
Our systems followed this URL after it was posted on a public site. Our system constantly explores links published on the web. We've checked our logs and confirmed that this is what happened in this specific case. It's not related to any Facebook chat messages containing this link. Our system doesn't access that information.
> Hi Andreas - I'm not saying it was posted publicly by you or someone in the Bosnadev team. Please contact me directly (matt at recordedfuture dot com) and I will share more details with you.
You left this comment on Bosnadev's blog. Mind sharing with the rest of us?
Maybe they posted the URL privately on some other platform (e.g. Slack, some instant messenger, WhatsApp, who knows)? And Recorded Future got it from there?
That's basically right. Our system observed the URL elsewhere on the web - not on a private messaging service. We've offered to share those details with Bosnadev.
RE: whether these comments really are from the RF team, we're about to post the same info on our own blog.
Search Google for the "secret" URL. Make sure you click the option to show the "omitted" search results. In the results, look for the ones dated before the article was posted. You will find a URL on pastebin.com. Look at the timestamp. That page contains a partial HTTP log, containing not only the relative URL but also, in the referrers, the complete "secret" URL.
So, my conclusion is: in the group conducting the "secret" chat, somebody posted the HTTP log to Pastebin.com, and then, and only then, was the "secret" URL picked up by Recorded Future.
(continued from previous comment) The "secret" string to google is "/_temp/cork.png". People should be aware that once they post an HTTP log to pastebin, their "secret" URLs are not secret anymore.
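To make the failure mode concrete, here's a minimal sketch of what the above describes - not the original poster's tooling, and the combined log format and file name are assumptions - showing how a full "secret" URL can be recovered from an HTTP access log that someone pastes publicly: the Referer field carries it verbatim.

```python
# Sketch: pull full URLs out of the Referer field of a combined-format access log.
import re

# Typical Apache/nginx "combined" log line: the quoted field after the status
# code and byte count is the Referer sent by the client.
LOG_LINE = re.compile(
    r'"(?:GET|POST|HEAD) (?P<path>\S+) HTTP/[\d.]+" \d{3} \S+ "(?P<referer>[^"]*)"'
)

def referers_from_log(path):
    """Yield every non-empty Referer URL found in an access log file."""
    with open(path) as fh:
        for line in fh:
            match = LOG_LINE.search(line)
            if match and match.group("referer") not in ("", "-"):
                yield match.group("referer")

if __name__ == "__main__":
    # e.g. a log snippet copied from a public paste; anyone who finds the paste
    # can recover the "secret" URLs the requests came from.
    for url in set(referers_from_log("pasted_access_log.txt")):
        print(url)
```

In other words, once a log like that hits Pastebin, the "secret" is one regex away from anyone (or any crawler) that finds it.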
Since the OP published the article implying such a grand circumstance, and commented "I assure you it was not posted publicly by any of us. Newly created URL and link c/p to fb chat.", I believe a few basic questions are still in order, in any case:
1. What "link generation" program was the OP using? Is it possible this program streamed its output through something like Pastebin - without the explicit knowledge of the OP - and if so, can we verify this is the case by following up with an example?
2. Can the representative from Recorded Future comment on whether or not this site, Pastebin, is being monitored?
Thanks all, for what we'd surely hope to be a trivial, if unusual, case of software being stealthy...
- Do you have any evidence that what you say happened indeed happened? Specifically, the log information you report having investigated. Publishing this would allow the original poster and independent analysts to corroborate your story.
If this is completely benign, the more information you can publish to quell any doubts, the better for your company/Facebook/the CIA.
(This also seems like a fun time to note that Witopia, a VPN provider, lists its office address in a building occupied by multiple CIA subdivisions.)
An apparent theme of In-Q-Tel investments is targeting security-related markets. Most consumers and small businesses cannot implement their own security, so they need to leverage the services of a security company, like one of In-Q-Tel's many investments. In-Q-Tel understands this need, and so do other investors. Security is hot right now.
When consumers evaluate security companies, trust is becoming an increasingly important factor. After all, why purchase security services from someone you cannot trust?
In-Q-Tel is a CIA fund. The "post-Snowden era" has damaged Americans' trust in government intelligence agencies like the CIA. With more Snowden/Greenwald leaks pending, trust will likely continue to erode.
If Americans do not trust US intelligence agencies, why would they choose a security company funded by a US intelligence agency? One does not have to be a conspiracy theorist to agree with the logic for purchasing a security product not funded by the CIA.
It also seems natural to assume security companies will no longer seek investment from In-Q-Tel, given the presumably negative customer reception. For security companies, In-Q-Tel investment could become a negative signal to consumers, and therefore also to other investors.
Will intelligence agencies have trouble investing in winning security companies? What will be the repercussions for CIA investment strategy?
I'm not sure it's always wise to trust or refuse to trust software based on its funding sources. Something not funded by In-Q-Tel can be bad, and another thing produced with In-Q-Tel investment might serve your needs well.
We need more independent verification of actual software behavior, and better ways to manage its behavior.
Yes, agreed. However, it seems more likely that a company funded by In-Q-Tel would cave to government pressure sooner than one that isn't. If a government entity directly funds you, then you do what it says or you may lose the funding. If I'm a security-conscious consumer, I would opt to skip the unnecessary risk of signing up with a security company on the government payroll.
> (This also seems like a fun time to note that Witopia, a VPN provider, lists its office address in a building occupied by multiple CIA subdivisions.)
This seems the appropriate time to interject that this may not be particularly meaningful. I once worked in a building that contained offices of the IRS CI and the Secret Service. The IRS agents were actually on my floor - seeing an armed accountant in the bathroom is a bit of a surprise.
It doesn't always mean a lot. In my case, it meant we were in a building conveniently close to highways and the local airport.
I agree, which is why I worded my comment so indirectly. Caveat lector, as they say. ;)
Office space could be cheap, or in this case datacenter space as well.
I am not accusing Witopia of being a CIA-funded VPN provider, but it seems reasonable to argue that sharing an address with the CIA is not a rational business decision for a VPN company.
I remember someone from In-Q-Tel giving a presentation to one of my college classes. He explained that the company's name came from the desire to bring industry innovation to the US intelligence agencies; they are literally putting "Q" (the innovator/gadget guy in James Bond [1]) in "Intel".
> We identify, adapt, and deliver innovative technology solutions to support the missions of the Central Intelligence Agency and broader U.S. Intelligence Community.
The question isn't whether In-Q-Tel supports the CIA. It's whether Facebook was funded by In-Q-Tel, and even the article that Wikipedia uses as a citation doesn't seem too sure how strong of a connection there is:
> The second round of funding into Facebook ($US12.7 million) came from venture capital firm Accel Partners. Its manager James Breyer was formerly chairman of the National Venture Capital Association, and served on the board with Gilman Louie, CEO of In-Q-Tel, a venture capital firm established by the Central Intelligence Agency in 1999. One of the company's key areas of expertise are in "data mining technologies".
Article starts with "I despise Facebook." Aside from that possible bias, the best it asserts is that a partner at one of Facebook's VC investors is on the board of In-Q-Tel.
> Howard Cox is an Advisory Partner of Greylock, a national venture capital firm. Greylock with committed capital of over $2 billion under management has been an active investor in enterprise software, consumer internet and healthcare.
This is the closest connection I know of between In-Q-Tel and Facebook; that's why I posted that link (note that the OP said "indirectly").
The indirectly comment is referring to the CIA itself. Funding by IQT would thus be indirect versus funding by the CIA. The problem is there's no credible evidence of either, just fairly weak circumstantial stuff at best.
Common sense would indicate that it would be highly improbable for the military-industrial complex of the USA to hand over something like the internet to the plebes without first thinking it through very carefully.
I would not be surprised one bit if SRI had gamed this panopticon to the n-th degree in the 70s, before Al Gore "invented" it for us.
[P.S. I cannot reply to the follow-up, so for the record: ARPANET was not an 'academic' exercise.]
Question: was the URL sent through Facebook chat an HTTPS URL? If it was, that would imply Facebook's involvement. But if it wasn't, it's possible that Facebook's scan was sniffed in transit and the URL was then sent over to the third party for scanning.
I also assume from the timezone in your log that your server is outside the US. It would be interesting to test this with a link to a server located inside the US, to try to determine where the interception is taking place.
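Here's a rough sketch of the kind of test proposed above, under assumptions of my own (Python's standard http.server, port 8080, a placeholder host name): mint an unguessable URL, paste it into exactly one chat, and log every request that reaches it. Running the same script on servers inside and outside the US would show where, and from which IPs, the follow-up requests originate.

```python
# Sketch of a canary-URL experiment: serve a freshly minted, unguessable URL
# over plain HTTP and record everyone who requests it.
import secrets
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

TOKEN = secrets.token_urlsafe(16)          # unguessable path component
CANARY_PATH = f"/canary/{TOKEN}/cork.png"

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record everything we know about the requester.
        print(datetime.now(timezone.utc).isoformat(),
              self.client_address[0],
              self.path,
              self.headers.get("User-Agent", "-"))
        if self.path == CANARY_PATH:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"canary")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # <your-host> is a placeholder; point it at wherever this script is running.
    print("Paste this into exactly one chat: http://<your-host>:8080" + CANARY_PATH)
    HTTPServer(("", 8080), CanaryHandler).serve_forever()
```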
You're talking about the Facebook website, which is irrelevant to this discussion. If someone sends an insecure HTTP URL through Facebook chat, then Facebook has to make an insecure HTTP connection to that URL in order to scan it, which leaks the URL, even if the chat session was using HTTPS.
Agreed. Maybe the courts haven't defined 'private' yet? I'm not a user, but is the word 'private' pretty much removed from everything? Wouldn't labeling something 'Private Chat' indicate that there are no unknown parties involved?
I'm a fan of sites like https://twofactorauth.org/ (credit to davis) that call out this kind of stuff and list competitors next to each other.
Does anyone know of one similar for calling out privacy BS?
I think the big differences here are that it's a third party scanning the IP address (though possibly one Facebook hired for security scanning, as someone else below guessed), but also that Skype chat was still being vaguely claimed as "secure" at the time, I believe. Facebook is up front about scanning chat messages for spam and they do the page screenshot thing.
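For context on what such a "scan" typically looks like: below is a hedged sketch of a generic link-preview fetcher - not Facebook's actual implementation; the User-Agent string and example URL are placeholders - showing the kind of server-side request that appears in a target site's access log the moment a URL is pasted into a chat.

```python
# Sketch of a generic link-preview fetcher: fetch the page and pull Open Graph tags.
import requests
from bs4 import BeautifulSoup

def preview(url):
    resp = requests.get(url, timeout=10,
                        headers={"User-Agent": "example-preview-bot/0.1"})
    soup = BeautifulSoup(resp.text, "html.parser")

    def og(prop):
        # Open Graph metadata lives in <meta property="og:..."> tags.
        tag = soup.find("meta", property=f"og:{prop}")
        return tag["content"] if tag and tag.has_attr("content") else None

    return {"title": og("title") or (soup.title.string if soup.title else None),
            "image": og("image"),
            "description": og("description")}

if __name__ == "__main__":
    print(preview("https://example.com/"))
```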
Tangibly bad things would have to happen to a non-negligible number of users. And, to be honest, that's the threshold for me to care enough to do anything substantial.
A person's chats being read by some spooky unknown company or government agency has little-to-no tangible effect on that person's life. Until it does, people can spend their effort on problems that do have tangible effects on their lives. And I think that's reasonable. Others might say we shouldn't wait until it's too late... but there are many other problems where it's already "too late" that deserve attention.
Anyway, I think it's cheap to imply that people are just lazy (as some do) for not caring about this stuff. It's just that they have other problems that deserve their attention.
It probably doesn't have a tangible effect on your life. But it would and does have an effect on: celebrities, political figures, people of non-Christian faiths, people who pissed off someone in the wrong department.
The less security that is provided to everyone, the easier it is to access content of the truly vulnerable people.
A lot of people find certain features of Facebook really useful (e.g. organizing events), and there are still groups of people almost exclusively using Facebook chat. Just because people are still using Facebook doesn't mean they trust it with private data. Even my mother uses Facebook, but she absolutely refuses to put anything even remotely private on it because she assumes Facebook will snoop on it.
So even if it turns out that Facebook is selling off all of its data to the highest bidder, people would probably still use it, unless there is an alternative. For chat we're seeing more and more protocols and apps springing up, both proprietary and open source - e.g. XMPP or the much younger/newer http://matrix.org (which I work on).
For event organization? The key thing that makes Facebook useful for this is that pretty much everyone has an account. You could probably build something on top of a distributed/federated chat/data replication system (like Matrix or possibly XMPP, etc.), but I don't see any new closed app getting enough traction to overthrow Facebook there.
Well, I am not sure about Facebook, but a relevant comparison might be the British newspaper the News of the World [1]. It held a seemingly unassailable position as one of the oldest and most widely read English-language newspapers. It was shut down by the revelations that it (allegedly, and plausibly) hacked the voicemail of a murdered teenager. A privacy issue.
In practice though, this involved a large international media empire shutting down a tarnished brand, then promptly relaunching an equivalent service using one of their stronger brands.
This would be broadly equivalent to Facebook killing the Whatsapp brand and letting the team go, and relaunching an enhanced FB Messenger a couple of months later...
The (then) CEO of Google Inc. loudly declared the (constitutional) legalese that the users of his company's product have "no reasonable expectation of privacy".
The Supreme Court is perfectly clear as to what rights we have when we don't have "reasonable expectation of privacy".
Finally, we are effectively coerced [in]to using these products -- unless you want to head to the woods and live the back to nature life.
You asked why the downvotes. Not sure about others, but I downvoted you because you're asking people to open their eyes to the truth while repeating things that are easily verifiable as not true.
This:
> The (then) CEO of Google Inc. loudly declared the (constitutional) legalese that the users of his company's product have "no reasonable expectation of privacy".
never happened. You're conflating different events.
One is presumably Eric Schmidt warning users that Google is subject to the Patriot Act.
The other was a defense (I assume from one of their lawyers and well after Eric Schmidt stepped down from being CEO) that when people email a gmail account, they don't have reason to believe that gmail won't process their email.
As for your conclusions: first, the lawsuit was in the context of non-Gmail users, not the Gmail account holders themselves, so it doesn't apply to "users of his company's product". Second, the judge rejected that line of argument anyway. Finally, you're taking a limited defense (they were arguing within the confines of the Wiretap Act) and somehow spinning it off into a general loss of constitutional rights that doesn't follow.
> One is presumably Eric Schmidt warning users that Google is subject to the Patriot Act.
His intent is opaque and irrelevant.
What is relevant is that it is, and has been for a few years at this point, a matter of public record and declaration that you do not have "a reasonable expectation of privacy" when using certain internet services.
That's fine; I was just responding to your assertion that it was the "(then) CEO of Google Inc" saying things.
> What is relevant is that it is, and has been for a few years at this point, a matter of public record and declaration that you do not have "a reasonable expectation of privacy" when using certain internet services.
You just skipped every single one of my points. You're taking a limited claim and turning it into histrionics. Your expectations of privacy do not work that way. Please re-read my comment above.
>Finally, we are effectively coerced to using these products
Google may still be the best search engine, but I use Facebook solely for the chat functionality. If some competitor comes along that offers the same features with the same quality but better privacy I would have no problem with switching.
(In case somebody smells a startup idea or already know a solution, I am looking for a service with good, easy to use Android, Apple and Windows Phone Apps, website for desktop use (desktop client for Windows and Linux is fine too), decent direct-message as well as group chats with consistent message ordering. I should be able to view all my messages, including history, in Apps on all my phones as well as on my desktop.)
Would you find it strenuous to tell your friends about such a thing if it existed?
I've always had a thesis that I'm not sure is true (perhaps I'm not social enough): although I have my Facebook friends and many contacts in my phone, I really only talk to 3-8 people. If such a thing existed, I don't know that it would even be too much work to switch, since I would only need to tell that small number of people.
But perhaps I'm too antisocial; I wonder what other people think about that.
Our whole friend group once switched to TextSecure when we were all fed up with Facebook. Sadly, Facebook is a lot better than TextSecure, and we switched back again.
I think that if a competitor showed up that is seriously better than Facebook, every person switching could have a huge ripple effect. People already use a multitude of apps/services for communication, and using one more isn't a big enough cost to invoke the network effect, in my opinion.
There are a ton of chat clients. The issue is all these closed chat networks mean that even the best client can't talk to your facebook contacts, gmail chat contacts, etc (or you sort of can today in a nerfed way, but XMPP is going away soon for both).
You might be factual, but it's a conversation killer. There's nothing I can do with this post other than nitpick the details - you haven't even cited sources. Hacker News is intended to promote conversation; hence the downvotes.
I don't know how you conduct conversations in real life, but if someone says something to me and I need more details, I ask. I don't tell them to shut up. :)
This happened to me before with someone I worked with. We saw some weird hits from California, and the next day tons of stuff from Russia, China, and Eastern European countries I had never heard of before...
Now I just don't trust giving private info over such chats.
Yeah, but this is nothing new. I run several sites whose (non-public) links are regularly scanned by Facebook after someone types them in a chat. Presumably they're doing this to scan and flag malicious websites/pages.
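For anyone who wants to check this on their own sites: Facebook's link fetcher announces itself with a "facebookexternalhit" User-Agent, so a quick pass over the access log is enough. The sketch below assumes a combined-format log and a default path of access.log; both are illustrative.

```python
# Sketch: list the paths that Facebook's link fetcher has requested,
# based on its "facebookexternalhit" User-Agent string.
import re
import sys

UA_PATTERN = re.compile(r"facebookexternalhit", re.IGNORECASE)
REQUEST = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[\d.]+"')

def facebook_hits(log_path):
    """Yield every request path whose log line carries Facebook's fetcher UA."""
    with open(log_path) as fh:
        for line in fh:
            if UA_PATTERN.search(line):
                match = REQUEST.search(line)
                if match:
                    yield match.group(1)

if __name__ == "__main__":
    for path in facebook_hits(sys.argv[1] if len(sys.argv) > 1 else "access.log"):
        print(path)
```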
That seems like quite a generous interpretation. Yes, that is an ostensible reason for the scanning [1], but to imagine that that is all they are doing with the information gleaned from the scans seems a bit naive. I would posit that sender, recipient, link, time, the page's content, and the conversation's context are forever stored in a database somewhere, waiting to be fed into the data analysis/advertising program du jour at some point in the future. Who owns that database or where the information ends up is probably a trip down the rabbit hole, and the posted article is a first step into it.
The third party does threat intelligence... I don't see how it's unreasonable to think that Facebook has hired them to perform malware/scam detection on links sent through chat. Given Facebook's policy, someone has to do it.
They could be intercepting the traffic anywhere between the author's network and Facebook, though, not necessarily with any complicity on the part of Facebook.
Are you really claiming that an amoral company whose sole responsibility is to produce income made a deal to produce income at the expense of user privacy? How could anyone have seen this coming?
If you trust your data to a company it will be sold. That goes for Facebook, Google, and any other major company.
Have you seen anyone go to jail for siding with the surveillance state? Laws only apply if they are enforced.
As for the ToS, I'm not sure you can say someone is bound by a set of rules they can change at any time without notice. They may have already changed the rules and simply not made the changes public.
I think the article points out the opposite: that you are being spied upon by Facebook itself and undisclosed third parties, and told as much in Facebook's ToS.
I think users simply don't know enough to understand why their privacy is important. As such, my guess is that privacy concerns have already hurt Facebook as much as they ever will. This isn't the first or the largest instance of Facebook violating users' privacy.
What's more insidious at this point is that Google is only marginally better, but a lot of people have much higher opinions of Google.
When it comes down to it, trusting a company just isn't a good idea. Companies inherently aren't trustworthy. Even if the current leadership is trustworthy, companies can change hands and the data is still there. Companies get hacked, try to do what's right but compromise to stay in business, etc. The only way you can trust that a chat service isn't sharing your data is if it is mathematically prevented from doing so through cryptography.
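Here's a minimal sketch of that "mathematically prevented" idea, assuming the PyNaCl bindings to libsodium (the keys and message below are throwaway examples, not any real product's protocol): if the message is encrypted on the sender's device to the recipient's public key, the relaying service only ever handles ciphertext.

```python
# Sketch of device-to-device public-key encryption with PyNaCl (libsodium).
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device; only public keys are
# ever shared with the server.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at 8, usual place")

# The chat service only ever relays `ciphertext`; Bob decrypts locally.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at 8, usual place'
```

Metadata (who talked to whom, and when) still leaks to the relay, of course, but the content itself is out of the service's reach by construction.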
I wonder if the same happens with WhatsApp (for which I'm still waiting on a public statement and a privacy policy change covering the adoption of end-to-end encryption).
Could they call it end-to-end if the ends are the devices and the server (never cleartext, but they have the key)? Or, in combination with what is available through FB, is the metadata telling enough?
Nothing is unlikely when they haven't even admitted in public that they have adopted it. At the moment, end-to-end encryption on WhatsApp is nothing but vaporware.
A serious question: Can someone explain to me why this is an issue?
Facebook chats are stored on the servers of a company that I have no control over, that is not funded by me, and that has repeatedly demonstrated a commitment to getting people to reveal things about themselves.
And meanwhile the U.S. government is listening to our phone calls.
Are Facebook chats read by the U.S. government? Of course they are -- even if the claims in this article are completely false.
The reason here would be the implication that the CIA may be involved. The CIA's charter is supposed to specifically prohibit domestic activity.
Facebook is in PRISM and a myriad of other government programs - everything that is posted, uploaded or chatted there is tagged, filtered and alerted for activity that the US government may deem dangerous or may want to pursue for other reasons (DEA, aggregate understanding and persuasion/'engagement').
Agreed broadly that this wouldn't be completely new news (were it to be confirmed). Technical details and confirmation do help citizens to understand that surveillance/'bulk collection' is alive and well.
True. Good point. Though I think the implication here would be that the company in question is both domestic and not associated with said overseas groups. This would mean that, without a warrant, the executive branch of the US government is performing searches of domestic data.
Again, that isn't anything new - although it's still profound.
Bitmessage doesn't scale well - almost by design. I'm sure it works fine for the occasional non-realtime message, like a voice-mail box.
But if you are ok with alpha-quality software and want encrypted chat similar to IM/Facebook chat you should stick to something like Pond https://pond.imperialviolet.org/
I find it interesting that people have a problem with law enforcement organizations spying on them, but if it's megacorporations whose sole purpose is to make money off their user base, that's OK.
I find it interesting that some people have a problem seeing the difference between willingly and knowingly giving data to a corporation in return for a service, and a law enforcement organization using that data without permission.
Willingly and knowingly? How many people do you think know everything that Facebook, Google, Microsoft and others log and process about them? As far as I know, the majority of people believe that it's the omnipresent ads that make the service "free".
That's true, most people don't. Does it mean those who do know about what is happening with their information/trust should keep quiet?
Maybe (just as an example) you don't have deep knowledge of cryptography; does that mean those who have the knowledge should not call out a problem when they find one?
No, it doesn't mean anything like that. My point is that people are resentful about the government (an entity that is supposed to work for them) gathering their data, while at the same time not minding private businesses mining probably even more data - and providing that data to gods know how many third parties. It's the disproportion in how much people care that surprises me.
That's quite a generalization. I think there's been plenty of social anxiety generated by the knowledge that Google, Amazon, etc are mining our data. A lot of people have written at length about this subject.
"Like this page on Facebook". I seriously wonder about websites that target a select audience and duly notify the front companies of the intelligence community.
Come on, this shit storm is way out of proportion. If it hasn't occurred to you that using Facebook Chat is essentially like writing postcards, you're probably new to the internet game. There's a million secure alternatives out there, choose one. (Threema, Telegram, and Wickr are among the best.)
Most people - anyone clueless about computers and only vaguely informed on the state of our intelligence community - do not "know" what you imply. Because we do not "know", and therefore we cannot prove it to them.
We are given bits, here and there. Who knows what bits are suppressed by which interpretations of this Patriot Act or that?
Such a laissez-faire attitude about such potential for warrantless, unbounded, infinitely intelligent surveillance (and the control it implies) is inconsiderate of most people, to say the least. To say the most, it could be fatally reckless for too many of us.
"Come on": if a random link was crawled by a "threat intelligence agency" in an otherwise innocuous chat? That is outrageous, if true.
To be clear, I recognize "hints" that have been posted (as opposed to proof) suggesting that, in fact, something like Pastebin.com was crawled. And, for some reason, the link that was allegedly private allegedly ended up on Pastebin.
In any case, the authors of this thread were clearly not aware of such "sniffing" on their computers... to say the least.
Maybe completely innocuous, and maybe not.
But since we're on this Snowden/Citizenfour topic, let me at least be clear: I refuse to let the walls keep crumbling, if that's what we're really talking about. You can bet your ass that Benjamin Franklin was right: “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.”
I refuse to consciously and explicitly grant free rein to "Big Brother" - and reduce this to a "catch me if you can" survival of the "most evasive" - just because, you know, "national security" knows what's best for us.
Whose security, exactly? Who helped lobby to put whom in what directorship of what agency that deems it fit to keep this (slightly sensitive topic to Corporation X, Y, Z) confidential?
The more you let the surveillance state grow - whatever respectable, admirable, necessary ideals it may have - the fewer and weaker your "million secure alternatives" will become.
We must not reduce control of the future of our country to a cat and mouse game. We must be tigers ready to fight for what's right for everyone. And yes, everyone should decide this together - not just a few "intelligence analysts" (or worse, their bots) in a pretty little lab somewhere...
Thanks for your reply! I agree with most of the points you raise, but why do you think secure messaging services will become weaker and fewer as the surveillance state grows? Isn't the exact opposite to be expected? I think the postcard analogy holds. In a few years' time, encrypted messaging might be standard and everything else will be considered inadequate due to lack of security.
I wouldn't be so quick to assume foul play. Could it be some piece of software installed on the recipient's browser/machine that is sending the links off somewhere to be scanned?
At least Facebook (unlike WeChat or WhatsApp) supports XMPP, so we can independently implement an encryption layer. The only problem, as usual, is getting everyone to agree on a standard.
On April 30, 2014, we announced the deprecation of the XMPP Chat API as part of the release of Platform API v2.0. The service and API this document covers will no longer be available after April 30, 2015.
Once version 1.0 is deprecated on April 30, 2015, chat.facebook.com and the xmpp_login permission will no longer be available.
We recommend developers who have integrated with the XMPP Chat API deprecate this functionality from their apps before April 30, 2015 to avoid broken experiences.
No, the problem is that all the major players are moving towards proprietary messaging protocols.
The problem that needs to be solved is one of having a fully encrypted, distributed messaging protocol (i.e. no central server) with apps easy enough that grandma can use them.
It's a shame that XMPP never really took off; even more so as a lot of people seem to be taking it as proof that no distributed chat system will ever take off. I think there are quite a lot of reasons end users would really like distributed chat, one of which is very much security. We need to learn why XMPP didn't take off and either a) fix it, or b) learn and move on.
(Which is why I'm part of a team building http://matrix.org/, check it out!)
However, I don't necessarily think we're ever going to get to a world where everything is end-to-end encrypted. There will always be trade-offs involved when using encryption, and sometimes the trade-offs aren't worth it (a stupid example being public group chats).
It would be nice, however, if chat apps and protocols in general made it clearer what level of security is involved. Just like the average user is slowly getting to grips with HTTP vs HTTPS (thanks in part to the nice and simple padlock icon on most browsers), there's no reason that users can't be made aware of the security of a given conversation and medium.
Facebook, for all its woes, is still easy/useful when trying to organize events, and I really don't care that much if they are spying on that. On the other hand I don't really use facebook messenger since I do care about those conversations.
We need to empower average users to be able to make those same informed choices on who they trust with what.
I'm wondering if this trend of moving to proprietary protocols is because of business or because of technology. And it's not just messaging either.
File storage is moving from standard calls like fopen() to the Dropbox API or the Google Drive API.
Phone calls are moving from device-agnostic copper wire to Skype or Google Hangouts.
Taxis are moving from waving your hand to the proprietary Uber API or the proprietary Lyft API.
But then again, back in the day, competing products in a single market (e.g. Zip Drives vs. LS-120) were still sufficiently similar that it was possible to use a single API to access both (e.g. fopen()). Nowadays, competing products in the same market (e.g. Dropbox vs. Google Drive) are so different in implementation and feature set that it's inherently difficult to come up with a single extensible standard that covers both.
So are we now in an age where everything will remain proprietary for a few decades due to technological limitations?
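To make the fopen() point concrete, here's a small illustrative contrast - the cloud endpoint, header names, and token below are placeholders I invented, not any real vendor's API: the local call is a decades-old standard, while the hosted version is whatever shape one provider decides on.

```python
# Illustrative contrast: standard local file I/O vs. a vendor-specific HTTP API.
import json
import requests

DATA = b"quarterly report"

# Standard, vendor-neutral: works the same in any language, on any filesystem.
with open("report.txt", "wb") as fh:
    fh.write(DATA)

# Proprietary: the request shape, auth scheme, and semantics are defined by one
# provider and can change (or be deprecated) at their discretion.
requests.post(
    "https://api.example-storage.com/v1/files/upload",   # placeholder endpoint
    headers={
        "Authorization": "Bearer YOUR_ACCESS_TOKEN",      # placeholder token
        "X-File-Path": json.dumps({"path": "/report.txt"}),
    },
    data=DATA,
    timeout=10,
)
```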
It's so hard not to give a skeptic's answer to this. When one hears FB et al. talking about how many users they have, one can't help but think this is all business.
On April 30, 2014, we announced the deprecation of the XMPP Chat API as part of the release of Platform API v2.0. The service and API this document covers will no longer be available after April 30, 2015.
This particular article is talking about Facebook Messenger chats being scanned. The link you provided is talking about public Facebook comments that anyone can see, and in fact the kid was reported by a third party, not some shady surveillance company.
This article is terrible, but the original article has been pulled off the local news site. It wasn't on someone's Wall; it was in a message to his friend.
Your article doesn't say anything about chats being scanned.
> Jack Carter said a woman from Canada saw the posting, did a Google search for his addresses and when she noticed he lived near a school, called the police.
I don't think that was a chat being scanned. The article even states that a woman in Canada saw the post and called the police, it was probably just on his wall.
Someone on the talk page brought up the same point: "The only discernible connection is that both had In-Q-Tel as an early investor, as did numerous other companies (none of which are listed as related, nor should they be)."
It was pointed out above, but the In-Q-Tel involvement in Facebook is seriously tenuous:
> The second round of funding into Facebook ($US12.7 million) came from venture capital firm Accel Partners. Its manager James Breyer was formerly chairman of the National Venture Capital Association, and served on the board with Gilman Louie, CEO of In-Q-Tel, a venture capital firm established by the Central Intelligence Agency in 1999. One of the company's key areas of expertise are in "data mining technologies".
So the evidence for an investment is that Facebook's lead investor for their Series A was the chairman of the NVCA, of which the CEO of In-Q-Tel was also a board member (along with 7 other prominent VCs). That's it: an investor was on a board with the CEO of In-Q-Tel.
None of Facebook's SEC filings ever showed a single dollar from In-Q-Tel, and they're not shy about publicizing what they've invested in.
Big data analytics is their bread and butter – if they had a way to analyze Facebook messages for patterns and predictively useful metrics, I'm sure they would.
They are mostly about exploring the data, yes, but they also do some data mining on the content in order to make connections. They attempt to pull out names, link them together, and things along those lines.
I think they would be far more interested in mining all of the friendships to do the network analysis than the content of the chats.
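As a toy illustration of what that kind of "network analysis" can yield without reading any message content - the names, edges, and the use of networkx here are all made up for the example - centrality scores alone reveal who bridges otherwise separate groups.

```python
# Toy social-graph analysis: no message content, just who is connected to whom.
import networkx as nx

friendships = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("dave", "erin"), ("erin", "frank"),
    ("dave", "frank"),
]

graph = nx.Graph(friendships)

# Betweenness centrality: high scores mark people who bridge otherwise
# separate clusters -- exactly the kind of signal an analyst would want.
for person, score in sorted(nx.betweenness_centrality(graph).items(),
                            key=lambda item: -item[1]):
    print(f"{person:>6}: {score:.2f}")
```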