
> person-based, which tracks and creates a list of “high-risk” individuals by combining a person's criminal history with an analysis of their social network.

There is no way the software actually achieves this. If it did we would have already heard about a well-off target identified by the software, a target with the means to mount a loud defense against such tracking.

And it cannot be that the software accurately returns such results but LE simply ignores them if the target seems too powerful. In that case we would have already heard from a leaker about a murder committed by a perpetrator whom such a system correctly identified but whom LE neglected to follow up on. (Think of the value to the company of such evidence!)

No, I'm guessing this system gives LE cover for the same flawed systems they've historically used to target and harass people who don't have the means to loudly mount a defense. Just take the same process that a court has ruled against as unconstitutional, use its data as input to your ML algo, and voila! You've probably got at least one more decade before society figures out you're just appending the words "through a sophisticated AI system" to your outlawed policing.



Why would we have heard about any of the failures? Part of the 'blue wall' is intended to hide policing failures from the general populace. Particularly when

* in the case of targeting a well-off person whom they decide to further investigate, they can simply use it as a reason to look harder at someone, use parallel construction to build the case that the defense sees, and simply drop any case where the defense happens to get too close. This is what they do with stingrays.

* in the case of targeting a well-off person whom they decide not to investigate, they can simply tell themselves that the person is a paragon of society and that it must be one of the failures of the technology. Like when a boy escaped Jeffrey Dahmer's house, heavily drugged, running down the street, bleeding out of his anus, begging anyone who would listen not to let Jeffrey take him back: the police got involved and gave him back, since Jeffrey was such a paragon of the town. That cop later became head of the police union.


>>There is no way the software actually achieves this. If it did we would have already heard about a well-off target identified by the software, a target with the means to mount a loud defense against such tracking.

I mean, it's more likely that the system is going to inform you about likely targets for certain crime types (less murder, which is frequently a one-off, and more drug dealing, which is repeated). It's going to help create lists of suspects to pursue on a matter. If one of them is also a senator's son or a CEO, you'd likely go in softer than on Joe Poorperson.


> I'm guessing this system gives LE cover for the same flawed systems they've historically used to target and harass people who don't have the means to loudly mount a defense.

This is exactly what happens. If we look at the evidence[1], law enforcement preys upon the poor and minorities who are not even committing crimes, but their predictive policing systems target them for harassment anyway.

[1] https://projects.tampabay.com/projects/2020/investigations/p...


Such software exists, it is as bad as you think it is, and it produced well-off targets that loudly complained. See the NSA SKYNET program, which ran MapReduce over Pakistan's telecommunication metadata to identify possible terrorists and promptly identified Al Jazeera as a terror cell. As you correctly guessed, LE did not go after them and did not bomb their office because, luckily, someone cross-checked the automaton's results. And then we did hear about it from a leaker.

There is far less known about such systems in the hands of police and the FBI, but from what I have read so far about the fusion centers, that is, if not already achieved, at least a goal.


Do you have a citation for a story about the Al Jazeera example you gave?

> There is far less known about such systems in police and fbi

Yeah, I should have specified that's what I was speculating about.


- https://arstechnica.com/information-technology/2016/02/the-n...

> However, as The Intercept's exposé last year made clear, the highest rated target [who travelled to Peshawar and Lahore] according to this machine learning program was Ahmad Zaidan, Al-Jazeera's long-time bureau chief in Islamabad.

- https://theintercept.com/2015/05/08/u-s-government-designate...
- https://theintercept.com/document/2015/05/08/skynet-applying...

Ah, my bad, I misremembered: it was not the bureau in Islamabad but the bureau chief from Islamabad. And the system is also a bit more intelligent: it evaluates not only who calls whom but also location data and suspicious pattern changes like SIM swapping and buying second-hand phones.
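For illustration only, here is a toy sketch (in Python, with entirely invented subscriber names, IMEIs, and events) of how one "suspicious pattern change" feature like the handset/SIM swapping mentioned above might be computed from metadata. Nothing here reflects the actual SKYNET implementation; it is just one plausible way to turn raw records into a per-subscriber feature.

```python
# Hypothetical metadata events per subscriber: (day, imei, cell_tower).
# All identifiers below are made up for the example.
events = {
    "user_1": [(1, "imei_a", "t1"), (2, "imei_a", "t1"), (3, "imei_a", "t2")],
    "user_2": [(1, "imei_a", "t1"), (2, "imei_b", "t4"), (3, "imei_c", "t7")],
}

def handset_changes(history):
    """Count how often the handset (IMEI) changes between consecutive events,
    a crude proxy for swapping between (possibly second-hand) phones."""
    imeis = [imei for _, imei, _ in history]
    return sum(1 for prev, cur in zip(imeis, imeis[1:]) if prev != cur)

features = {user: handset_changes(h) for user, h in events.items()}
# user_1 keeps one handset (0 changes); user_2 changes handsets twice.
```

A real system would presumably combine many such features; the point is only that each one is a simple aggregate over metadata, not anything resembling an understanding of intent.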


[flagged]


> And as an Iraqi, I wish they had the balls to bomb their headquarters.

Please don't. It breaks the rules at https://news.ycombinator.com/newsguidelines.html badly, and it also badly discredits any good information in the rest of your comment.


There is a misunderstanding. The SKYNET system did not flag them for any of the reasons you would flag them. It is not intelligent; it simply runs MapReduce over "who talks to whom" data, with scores assigned to certain numbers. By the nature of the algorithm, it tends to mark journalists and lawyers as high-value targets, because their phones were in contact with scored phones, or with low-value phones that were themselves in contact with scored phones.

Point is: if such a system were used against other populations, it would again flag journalists and lawyers as high-value.
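To make that failure mode concrete, here is a minimal guilt-by-association sketch in Python. The call records, names, and seed scores are all invented; this is not the real system, just the naive "score flows to whoever talks to scored phones" logic described above, which is exactly why someone whose job is talking to suspects, like a journalist or lawyer, floats to the top of the ranking.

```python
from collections import defaultdict

# Hypothetical call-detail records: (caller, callee) pairs. Illustrative only.
calls = [
    ("suspect_a", "suspect_b"),
    ("suspect_a", "journalist"),
    ("suspect_b", "journalist"),
    ("suspect_b", "lawyer"),
    ("suspect_c", "lawyer"),
    ("journalist", "lawyer"),
    ("neighbor", "suspect_a"),
]

# Seed scores assigned to known numbers of interest.
seed_scores = {"suspect_a": 1.0, "suspect_b": 1.0, "suspect_c": 1.0}

def propagate(calls, seeds):
    """One naive pass: each phone accumulates the seed scores of its contacts."""
    score = defaultdict(float)
    for a, b in calls:
        score[b] += seeds.get(a, 0.0)
        score[a] += seeds.get(b, 0.0)
    return dict(score)

scores = propagate(calls, seed_scores)
ranked = sorted(scores, key=scores.get, reverse=True)
# The journalist and the lawyer outrank every individual suspect,
# purely because they talked to more scored numbers than anyone else.
```

The professionals who talk to *many* suspects beat any single suspect, who mostly talks to ordinary contacts. More passes or fancier weighting change the numbers, not the shape of the problem.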



