The game Watch Dogs 2 is actually quite on target regarding this. In the game, the powers that be put a non-scientific bias into predictive algorithms for certain neighbourhoods, making the population of those areas pay more for insurance and get targeted by police far more often.
It's a dangerous path to take: suddenly you can be labeled a criminal just for living in the wrong place, or, even worse, because the police have put their own bias into the algorithms.
History would be repeating itself. This was one of the factors in the burning down of much of the South Bronx in the 1970s. The city was broke and asked the RAND Corporation to scientifically determine which firehouses were least necessary and could be closed.
Their methods were, in retrospect, biased and just plain inaccurate: firehouses and police precincts in poor neighborhoods were closed just as crime and fires were spiking, creating a sick sort of positive feedback loop. There are zip codes in the South Bronx that lost more than 90% of their housing. It's incredible.
Beware of those who would use the word "science" to justify their preferences without explaining in detail.
These systems also encode existing discrimination.
For example, women are less likely to be investigated for a crime, less likely to be charged, less likely to be convicted, and receive lower sentences when convicted. In some cases (especially involving sex crimes) there is explicit sex-based bias written into the law itself. This means a lot of the data on who commits crimes carries a strong sexist bias, so any automated algorithm has a strong possibility of reinforcing that existing bias. The same will happen for race, class, and many other factors.
There are also cases where society wants an incorrect bias put into place. For example, parole risk assessment software vastly underrates the threat of certain classes of criminals compared to what society and police think it should because there are major myths about rehabilitation and recidivism that are as popular as they are wrong.
Perhaps the worst part is a total lack of transparency in the existing algorithms that determine risk. With enough data you can reverse engineer it, but that doesn't give the same impact as seeing the rules themselves. For example, one parole group I developed software for had a risk rating that appeared to automatically rate women a risk level lower for the same crime. Imagine if the same was done based on race.
"...Imagine if the same was done based on race...."
Well, the same is done based on race; they just don't need to write that into the software.
The point I'm trying to make is that it's going to be impossible to eliminate all bias in the system, but people are justified in their aversion to systems that automate that bias. Palantir certainly has the potential to do so, and the total lack of transparency definitely doesn't argue in its favor.
While there are many parole officers who use race to make unfair judgments, this is considered wrong by the system and steps are taken to stop it (one of the reasons systems like the one I mentioned were being adopted). Compare this to the system I was talking about, where it was recognized official policy and considered good, right, and just. Imagine if some police department came out and openly admitted that their official policy is to use race to punish people more harshly for the exact same crime.
Is "network analysis" a better term for what this article describes? Seriously, if taking a deep, hard look at connections among alleged drug dealers
and gang members is bad, what would be better?
I think the issue is whether the improvements in both deep analysis and real-time situational awareness constitute an infringement on personal liberty.
My understanding of the Palantir law enforcement use case is that, for example, a gang unit police cruiser going down the street in a bad neighborhood can pick up data from its license plate readers, and be automatically alerted if any passing vehicles are associated with certain parameters.
The plain-vanilla tech solution for law enforcement would be something along the lines of: throw a notification if any of the passing vehicles is registered to someone with outstanding warrants.
Palantir takes it a step further and throws a notification if, say, the vehicle is registered to a known associate of a gang member who is wanted for questioning. This network understanding has to be built on the backend, both by integrating existing data sources and by doing work directly on the Palantir system.
Eventually the cops might start following the vehicle, and pull the driver over for a nominal traffic infraction. At that point, the officer is going to use their Palantir app on their department-issued mobile phone to assist in the discussion with the driver -- for example, if the driver coughs up any names, the officer can type them in to see what comes up in the system.
I'd think that Palantir's defense of this would be something along the lines of, "everyone in this story previously had the right to see every bit of information the system provided, we just made it more efficient". The counterargument would be something along the lines of, the state having such detailed understanding of its citizens' personal lives is ipso facto tyranny.
The ability of the government to temporarily detain and question essentially anyone at any time through traffic stops for minor violations seems like a bigger problem. It seems weird to try and restrain abuse of that power by keeping the police under-informed rather than just eliminating that power directly.
Why do you assume this is informing police? Imagine police see your license plate and the Palantir app shows up and says "crime risk score: 83" and then the police manufacture a reason to pull you over. Are the police more informed, or are they just working for Palantir now?
Limiting police power is a tough political goal. Right now police can literally shoot you dozens of times with their personal assault rifle that has "fuck you" engraved on it while you are lying on the ground with your hands on your head begging for your life, then get caught planting a gun on your body and face zero repercussions. And in discussions about this a huge number of people support the cops - "you never know, he could have been armed, we can't ever suggest a cop behaved badly because that would make other cops less willing to kill at a moment's notice" are real arguments people put forward. If you want to stop them from being able to pull people over for no reason so easily that sounds great but that is a HUGE undertaking.
There are defenses against this, but they're not realistically within reach for most people. If a police officer starts interrogating me about something beyond the scope of a traffic stop:
1. I know enough to stop answering questions
2. I have a lawyer I can call and ask to get involved
Both 1 and 2 are, however, not choices for the vast majority of people.
Your two points are really good. Before you're under arrest, you may want to try #1, but what may ensue is a charge of failure to obey orders, or resisting arrest, or disorderly conduct. Don't ask me why or how; it just kind of happens, and before you know it you're under arrest. Now that you're under arrest, your 5-minute traffic stop has transformed itself into an ordeal. For some people it's worth it; for others, they just want to avoid the hassle, and you can't blame them for not asserting their rights.
Cameras in your vehicle and handing over your attorney's business card along with your driver's license are good deterrents that I highly recommend. Everyone has a plan until they get punched in the mouth.
Your point here is correct. A very common tactic is for police to make a false claim - which is legal - such as "If you don't cooperate with me, I'm going to arrest you and charge you with X". The police are absolutely allowed to lie to you, including about the law. Likewise, they will use your reluctance to cause a fracas against you to extract easy answers.
They may even, as you say above, move to detain you and bring you in. Again, actually having a lawyer here is what's important, and again, usually not something most people can afford.
The word "alleged" certainly concerns me w.r.t state surveillance, especially in the hands of law enforcement.
Connecting dots between convicted criminals is one thing, predicting crime based on associations with people who are not criminals is a whole other ballgame.
A predictive model isn’t always trying to predict the future. It can also be used to predict the present—to find hidden connections in (perhaps maliciously) incomplete data.
In this case, that’s to say: if an unnamed gang member committed a crime, but all the known members of said gang have alibis... maybe you should be looking for an unknown member of said gang. And who’s likely to be secretly in a gang? Well, someone whose friends are all in said gang would be a good first guess.
What about outreach workers? How many of them are going to have connections to junkies, homeless people and gang members?
Sincere, good Christians often believe in trying to help the unfortunate. Some good Christians spend all their time surrounded by clean-living churchgoers. Others spend a whole lot of time surrounded by folks who are from the wrong side of town and the wrong side of the law.
The point of these analyses isn’t to say “arrest this man he’s a criminal” but rather “closely observe this man and ascertain whether he’s a gangbanger or an outreach worker.” Of course it doesn’t even need to get that far since it’s trivial to tell the two apart from the network analysis alone (the outreach worker will have a far higher ratio of legitimate-world contacts to criminal-world contacts than the gangbanger).
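To make that contact-ratio idea concrete, here is a minimal sketch in Python. The names, labels, and numbers are entirely invented; it only illustrates the kind of comparison being described, not any real system.

```python
# Purely illustrative: given a toy contact graph where each contact is
# labeled "legit" or "criminal" (names and labels are invented), compare
# each person's ratio of legitimate-world to criminal-world contacts.

contacts = {
    "outreach_worker": {"pastor": "legit", "nurse": "legit", "teacher": "legit",
                        "dealer_a": "criminal", "dealer_b": "criminal"},
    "suspected_member": {"cousin": "legit", "dealer_a": "criminal",
                         "dealer_b": "criminal", "dealer_c": "criminal"},
}

def legit_to_criminal_ratio(person):
    labels = list(contacts[person].values())
    legit = labels.count("legit")
    criminal = labels.count("criminal")
    return legit / max(criminal, 1)  # avoid division by zero

for person in contacts:
    print(person, round(legit_to_criminal_ratio(person), 2))
# outreach_worker 1.5, suspected_member 0.33
```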
And you don't think law enforcement agencies will quickly make the leap from "closely observe" to "harass and arrest for any tiny transgression in an effort to coerce a confession"?
Exactly. Instead of doing their job manually, or double- checking system output, they begin to take what the software package says as truth. Whether it’s just human nature, or overwork, or whatever, it’ll happen.
Because law enforcement officers generally behave like people, and (like people) often take cognitive shortcuts when making certain assessments. In this case, the shortcut would be to assume that the individual has to be a criminal, and to act as such.
Now, there are steps you can take to reduce these assumptions, but those are steps outside of the program, and they would have to be introduced in tandem with the introduction of the program. Unfortunately, policy-makers often assume that any single idea that gets funded (i.e. buying a predictive model from Palantir) is a comprehensive solution to the problem (often encouraged by the sales reps peddling whatever the 'solution' is) and fail to recognize that other programs will have to be funded alongside it in order to deploy it effectively.
If the software says Bob probably did it, Johnny Law will assume Bob did it and stop looking elsewhere, precisely because Johnny Law wants to close the case. Throw an overzealous DA in the mix, and it sucks to be Bob.
That's just not how it works. Eventually you have to present actual evidence in an actual court and it won't be "this software you never heard of said so".
The word ’predict’ gets used a lot and is sufficiently charged that it causes much Minority Report-style angst and gnashing of teeth, but really a better term would be ’uncover’. The whole investigative process is about uncovering concealed connections between people and events, so this is just a new technique. It isn’t inherently Orwellian either: there’s still a wide chasm of human investigators, prosecutors, judges, and juries between any kind of ’positive’ and actual consequences.
Well first of all, any time a private corporation does secret private proprietary unreviewable analysis on members of the public which is then used by law enforcement -- this is a terrifying, horrible dystopian miscarriage of justice on its own. People (i.e. the politicians who signed off on this) should go to jail for doing this. Police should be accountable to the public, allowing them to dodge this by using private entities who are not accountable to do their dirty work is quite simply criminal corruption.
Add on top of that it's being done by a company named after a mass surveillance device used for evil in a fantasy story.
And on top of that it's being done by a Thiel company. Thiel, who is nearly a perfect personification of evil: he has made very explicit, candid public statements about how he opposes the idea of democracy itself, and he does not think women should have the right to vote.
And on top of that it was being done without the knowledge or consent of nearly anyone in the city.
And finally, the justice system presumes innocence. The Palantir system does the opposite -- it makes wild, arbitrary, untraceable inferences that suggest guilt without any real evidence. It is quite literally nothing more than a very thin shield police can use to justify harassing and intimidating the "kind of people who tend to be criminals", which in this case is not that at all -- it is "the kind of people who tend to get caught and prosecuted for crimes", i.e. only violent or drug crimes (except the drugs white people use), only poor criminals, only minority criminals.
Where's the massive computer analysis system that looks for wage theft committed by employers? This is after all how the vast majority of wealth is stolen in the US, citizens could recover billions of their own money if it were stopped. Where's the computer system working with law enforcement to automatically detect any insider trading? Why don't we monitor the behavior of people in finance to detect cocaine use and then send in the SWAT teams? What about a computer system that detects bad prosecutors?
You are misreading his comments about women and democracy. He was simply saying that certain demographics aren't receptive to libertarians - he was not saying that women shouldn't have the right to vote and that democracy is bad. I daresay it's impossible to read his full writing in question and come to the conclusion one might get from simply reading an excerpt of two sentences that the Politico article wants you to see.
" It would be absurd to suggest that women’s votes will
be taken away or that this would solve the political
problems that vex us."
"I believe that politics is way too intense. That’s why
I’m a libertarian. Politics gets people angry, destroys
relationships, and polarizes peoples’ vision: the world
is us versus them; good people versus the other."
His essay here is pretty blunt about how he is opposed to democracy. He outlines ways to escape democratic government (he sometimes calls it politics in the essay and uses the two words essentially interchangeably, because they are the same): seasteading, techno-libertarian cyberspace, etc., places where capital can rule uninfringed by the desires, needs, interests, and votes of its subjects.
I was incorrect about his views on women, though; thank you for the correction, and I retract that completely. He simply sees women voting as a problem because they don't vote the way he wants (they are not, by and large, wealthy and powerful, so they do not vote exclusively for the interests of those constituencies, as Thiel does), and he uses this as an excuse to throw out voting altogether. I'm not sure that's better, but my characterization was inaccurate.
Mostly sad. Sure there is truth that your peers affect your decisions and options in life. Being branded for arrest or harassment by the police does not help...
I'm going to talk about a counter case that happened in real life, where a journalist was put on a "kill list" by the US Government. If you haven't read the entire [Drone Papers](https://theintercept.com/drone-papers/) I highly recommend doing so.
The government basically had a naive likelihood analysis program that attempted to group terrorists. If someone went to the same places as terrorists, in about the same schedule, then the system came to the conclusion they were a terrorist. In programming we use the duck metaphor - "if it walks like a duck and quacks like a duck, it's probably a duck."
What occurred was that a journalist who was interviewing these terrorists ended up on the kill list. In theory the human element was supposed to filter out such things, and I'm sure that's what the developers of the system intended. It's also clear the human operators were simply rubber-stamping everything the system produced, which allowed a kill order on a journalist to go out.
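To make the "walks like a duck" heuristic concrete, here is a rough sketch of the kind of naive co-location scoring being described. The places, hours, and threshold are hypothetical; this is an illustration of the idea, not the actual system.

```python
# Hypothetical sketch of naive co-location scoring: represent each person
# as a set of (place, hour-of-day) visits and score them by overlap with
# the visit pattern of known targets. Data and threshold are invented.

known_target_visits = {("market", 9), ("compound", 14), ("mosque", 18)}

def overlap_score(person_visits, target_visits=known_target_visits):
    """Jaccard similarity between two sets of (place, hour) visits."""
    union = person_visits | target_visits
    if not union:
        return 0.0
    return len(person_visits & target_visits) / len(union)

# A journalist interviewing the targets shares much of their movement
# pattern, so a naive threshold flags them too.
journalist_visits = {("market", 9), ("compound", 14), ("hotel", 21)}
score = overlap_score(journalist_visits)
print(score, "flagged" if score >= 0.4 else "not flagged")  # 0.5 flagged
```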
And if you're discussing evaluating types, then our duck solution is just fine. If you're discussing firing hellfire missiles at unarmed people, then we probably need to take a step back. Predictive algorithms and network analysis are naive - they're fine for recognizing pictures of dogs and cats, or pinging a friend when you upload a photo with their face. They're not sufficient to apply to policing or resourcing, especially since algorithms display the exact same bias as their input data.
System designers, including myself, often expect human elements to act as reliably as algorithmic components. The truth is that humans are lazy, biased, and easily prone to Pavlovian conditioning ("the system showed me a dialog; when I click OK, the thing I want to happen happens").
The other truth is that frequently these systems, even when they are honest - they show likelihood percentages, their flaws have been explained to their operators, they were made with the best available data - are too complicated for most people to grasp. Most folks can't schedule a meeting across five calendars without assistance or get email marketing right. Do we really expect them to grasp the nuances of a statistical network-analysis model and correctly interpret the results on a repeated basis?
The solution is that we need to stop pretending an algorithm is a replacement for human decision making in these life or death cases.
I recommend reading the Math and Murder section of The Rise of Big Data Policing for a more researched view of how New Orleans is using Palantir (starts at 2nd half of page 40).
"A man is known by the company he keeps" is a concept that phase been around for millenia, and is no longer valid today.
The underlying issue with all of these types of concerns is that you are now doing something "with a computer". Beat cops in community policing models do the same thing in an analog world. Good detectives know the territory, know the players, and know where to look for things.
At what point do we want to stop? I don't know the answer.
The definition of "company one keeps" is open to interpretation. How many Facebook connections away do you think you are from a drug dealer? Take him away, boys!
Social media also provides strength of connection: how often do you communicate with the drug dealer, at what times, how often is your GPS position in his house for more than 5 minutes?
Probably everyone who isn't friendless on Facebook has a drug dealer in their immediate network. But using more data you can filter out the actual druggies with high confidence.
> But using more data you can filter out the actual druggies with high confidence.
i'm not comfortable with our justice system acting on the "high confidence" of some proprietary heuristic with a closed source implementation and little scientific evidence to support its claims, and with little ability for citizens to vet its workings at a detailed level, to see whether it implements their values or the values enshrined by the constitution.
to say nothing of the fact that i have zero interest in using the criminal justice system to look for "druggies" (violent drug dealers, sure, because they're violent; drug abuse is a health problem that the criminal justice system is ill-equipped to deal with).
>to say nothing of the fact that i have zero interest in using the criminal justice system to look for "druggies" (violent drug dealers, sure, because they're violent; drug abuse is a health problem that the criminal justice system is ill-equipped to deal with).
Me neither, I was using your example (connection to drug dealers). You could similarly apply this to connections to murderous gangs and you have a similar argument. Saying drugs are ok is just detracting from the point.
can you explain why doing it with a computer might make it more valid? are you already highly confident that you have good heuristics to automate and scale? if so, what's your evidence? if the heuristics used by the software are proprietary, how do you judge whether they line up with your values without essentially running an experiment on society? doesn't that seem a little cruel if the software is an integral part of a process that violates citizens' constitutional rights?
a palantir employee does a great job of explaining why their technology is frightening, as part of pushing back on someone from NO who wanted predictions with numerical rankings for potential offenders/victims:
> “The looming concern is that an opaque scoring algorithm substitutes the veneer of quantitative certainty for more holistic, qualitative judgement and human culpability,” Bowman wrote. “One of the lasting virtues of the SNA work we’ve done to date is that we’ve kept human analysts in the loop to ensure that networks are being explored and analyzed in a way that passes the straight-face test.”
too bad carville didn't recommend palantir do some precrime analysis on politicians. but then that would be biting the hand that feeds you. surely palantir wouldn't stand for that.
I look forward to when systems like these have to adapt to actors using cointel tools to seed information into them, watching what decisions come out, and then gaming them for their own ends.
The New Orleans police department is so insanely corrupt that if I related some of the stories I've heard from people who live there you would not believe them.
Frankly, cities like New Orleans and Baltimore need systems like Palantir, drones, license plate readers, etc. They are like something out of Hobbes right now, with no effective state authority and violence levels unseen elsewhere in the developed world.
Once basic rule of law is established, programs like those can be scaled back, but a lot of other countries would have declared a state of emergency if they were confronted with the levels of violence seen in some of these American big cities. There's already a big push for body-worn cameras for police; this is just a natural extension.
Nice, they mentioned the company I founded in college, PredPol. They claim that our algorithm encouraged over-policing of minority neighborhoods, but there's no such thing as bad press, right?
> independent academics found it can have a disparate impact on poor communities of color. A 2016 study reverse-engineered PredPol’s algorithm and found that it replicated “systemic bias” against over-policed communities of color and that historical crime data did not accurately predict future criminal activity.
If you actually look at the "study" it's clear that what they did was not particularly scientific. With such a broad and complicated topic, they basically just took the talking points they wanted to have about it and worked their way backwards from there.
In my opinion it really isn’t. Looking at the studies it references, they’re not much different from “press” anyways. They read like the questions and data presented were selected to prove a political point.
I’m not trying to just be ideological here either. I just don’t want the dominant ideology in academia and media to be used to attack and suppress technology that could ultimately benefit the communities it’s allegedly harming.
There’s really not any strong evidence to suggest that was happening. And you can’t test it, because all it takes is the accusation of racism and some pseudoscience and you’ll be shut down.
It makes some weird comparisons. Why have the heat map of drug users? Police are much more interested in drug dealers, and particularly drug dealers who are likely involved in gangs and violent crime. Unfortunately, those individuals are overrepresented in minority communities, as are violent criminals more generally.
It also throws around phrases like "historically over-policed communities" without ever actually explaining what that means or why it occurred. There seems to be a general pattern of taking "systemic racism" and equating that to racial discrimination when the two are not the same. There are problems within minority communities which are significant in driving these disproportionate outcomes, and if we just say "racism" and don't look any further that isn't even going to be considered.
If we want to reduce mass incarceration of people for nonviolent crimes, we ought to be reforming the laws, not reducing enforcement just to produce conviction stats we like more. That will almost certainly get people killed.
If police harass minorities, it generates the most data in the places they harass them. Then they feed this data into the computer and algorithmically generate "unbiased" results that prove they should be harassing minorities.... that is unless this tendency was explicitly corrected for somehow. Was it? If it was, that's its own can of worms, but is at least debatable as to whether there's an acceptable way to do so.
Importantly, these algorithms should not be secret. These are the kinds of choices a democratic society should make.
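Here is a toy simulation of that feedback dynamic, with made-up numbers, purely to illustrate how a biased patrol allocation can reproduce itself when "data-driven" reallocation is fed only recorded incidents.

```python
# Toy simulation (all numbers invented): two neighborhoods with identical
# underlying offense rates, but a biased starting patrol allocation.
# Recorded incidents scale with patrol presence, and next year's patrols
# are allocated in proportion to recorded incidents.

true_offense_rate = {"A": 0.05, "B": 0.05}   # same real rate in both
patrols = {"A": 80, "B": 20}                 # biased starting allocation
TOTAL_PATROLS = 100

for year in range(5):
    # more patrols in a neighborhood -> more offenses observed there
    recorded = {n: patrols[n] * true_offense_rate[n] for n in patrols}
    total = sum(recorded.values())
    patrols = {n: round(TOTAL_PATROLS * recorded[n] / total) for n in recorded}
    print(year, patrols)

# The 80/20 split persists every year, so the data appears to "prove"
# that neighborhood A needs four times the policing.
```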
Yes, this is something we researched and thought about. The idea is "garbage in, garbage out". If the data input into the algorithm reflects a systemic bias, obviously any predictions are going to be a product of the input to some extent. Our predictions were based on the city's crime report data, which is clearly not an accurate picture of crime in the city, but it's what we've got. Sure, it's entirely possible that if police are dramatically over-policing certain poor or minority neighborhoods and generating a lot of crime report data based on those arrests that the predictions could potentially be affected. But poor and minority neighborhoods tend to under-report crimes compared to wealthier areas, so there is a balancing counter-effect to some extent. Another thing we've seen happen sometimes is systematic under-reporting or non-reporting of crimes to make the crime stats for the city look better, as the police department is often judged on these crime stats.
Ultimately we have to work with what we have, and the police are responsible for maintaining high-quality data and ethical standards, but I don't lose any sleep worrying about whether my code systematically oppressed minorities, for two reasons:
1. First of all, most police departments had separate predictions run per "district" and typically had a fixed number of officers allocated to each district. The idea that we were directing cops away from rich neighborhoods and having them go after the poor or minorities assumes that we had far more power than we actually did. The number of cops assigned to each city district / beat is a political or bureaucratic decision and not something our software is involved in.
2. Furthermore, our app was not directing the cops' every move, nor was it trying to. We just told them: "when you're not answering a call or doing anything else, when you have a spare minute, check if you're near a prediction box and go figure out why the computer flagged it as a high-risk area. Maybe it's a parking lot where the lights have gone out, encouraging car burglaries." And even as just a small slice of their time, we really had to push to get them to use the app, because the way they saw it was "What does this stupid app know about my city that I, with 20 years of experience, don't know? One of these prediction boxes is in the middle of a lake."

Based on what I saw from the analytics and usage logs, I think it would be tough to make the case that anyone was oppressed, because the cops barely even used the thing! Not that I take these issues lightly, but I always laugh when people criticize PredPol or someone calls me a murderer on Twitter, because it vastly overstates our impact. Maybe I should take it as a compliment that people really think I am powerful enough to direct police to oppress minorities using only a shitty Rails app I wrote in my college dorm room. But whatever, you can't do or try anything new in this country today without getting dramatically criticized for it to fuel the internet-rage clickbait economy, so I guess it's just something all of us have to get used to.

I haven't had a day-to-day role at the company for a few years, which makes it a lot easier not to worry about these things as well. But ultimately, police using technology and data effectively can REDUCE bias and injustice AND make people safer. I think the questions The Verge presents are certainly worth asking, but based on my experience I do believe that the app I implemented helped more people than it harmed.
I didn't read it that way. To me, it seems like he's saying that how the tool is used pretty much eliminates its downsides. It's not like this was being used to conduct no-knock SWAT raids or something.
I don’t think it was a bad tool. I think people who criticized it were worried more about what it could become down the road than what it actually was.
>Then they feed this data into the computer and algorithmically generate "unbiased" results that prove they should be harassing minorities....
Depends on how you use the data. If you learn success rates then it shouldn't depend on the absolute number of people searched. If you just feed raw data into machine learning algorithms, then nobody knows.
E.g. you have 100 stop-and-frisks, 50 in one neighborhood and 50 in another. Then you might see that you got 1 criminal in one and 10 in the other. Then your algorithm will allocate more force (it shouldn't go below some minimum, otherwise you get no data) to the 20% hit-rate neighborhood than to the 2% hit-rate neighborhood.
Obviously this will still lead to more searches in more criminal neighborhoods. But that's what you want. You want to use the police force as efficiently as possible to fight crime.
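As a sketch of what allocating on hit rates (with the minimum floor mentioned above) might look like, here is a short example using the numbers from the comment; everything else in it is an assumption for illustration.

```python
# Sketch of hit-rate-based allocation with a minimum floor. The stop counts,
# hit counts, and 10% floor come from the example above; everything else
# (variable names, renormalization scheme) is an assumption for illustration.

stops = {"neighborhood_1": 50, "neighborhood_2": 50}
hits = {"neighborhood_1": 1, "neighborhood_2": 10}   # criminals actually found

MIN_SHARE = 0.10      # never let a neighborhood fall below 10% of stops
TOTAL_STOPS = 100

hit_rates = {n: hits[n] / stops[n] for n in stops}              # 0.02 vs 0.20
raw_share = {n: hit_rates[n] / sum(hit_rates.values()) for n in hit_rates}

# apply the floor, then renormalize so shares still sum to ~1
floored = {n: max(raw_share[n], MIN_SHARE) for n in raw_share}
norm = sum(floored.values())
allocation = {n: round(TOTAL_STOPS * floored[n] / norm) for n in floored}

print(hit_rates)    # {'neighborhood_1': 0.02, 'neighborhood_2': 0.2}
print(allocation)   # roughly {'neighborhood_1': 10, 'neighborhood_2': 90}
```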
>not by the police department, but by a separate agency with a separate budget
Is that a thing that even happens? Most PDs I'm familiar with have a lot of control over their own cams. Even after getting caught using them to plant evidence, many still have some questionable cam practices.
> E.g. you have 100 stop-and-frisks, 50 in one neighborhood and 50 in another.
NB: stop and frisk isn't done in the "good" neighborhoods at all. There's not going to be any data about what happens when e.g. stopping and frisking white men in suits in lower Manhattan because it will never be done. The idea of being able to do controls hinges on something for which there is no political will.
> Obviously this will still lead to more searches in more criminal neighborhoods. But that's what you want.
I would think instead you want more searches in places where there is probable cause, you have gotten warrants, etc. How comfortable are we with the notion that people should be targeted on the basis of the neighborhoods they live in? Is this compatible with our guiding documents?
>NB: stop and frisk isn't done in the "good" neighborhoods at all. There's not going to be any data about what happens when e.g. stopping and frisking white men in suits in lower Manhattan because it will never be done. The idea of being able to do controls hinges on something for which there is no political will.
I already addressed that caveat by forcing the algorithm to stop and frisk a minimum amount in the good neighbourhood. Like a minimum of 10% of frisks taking place in good neighbourhoods.
>How comfortable are we with the notion that people should be targeted on the basis of the neighborhoods they live in?
If I live in such a neighbourhood, I'd want more police presence. They will stop and frisk and do that to me, too. At the same time they will have the chance to stop some criminals that might kill or rob me and my family.
I think a lot of times as engineers we imagine the tech divorced from the reality of the world it exists in. It would likely be politically untenable to start stopping and frisking Wall Street bankers.
It doesn't seem that outlandish, though, that he might be in possession of controlled substances. The point of stop-and-frisk isn't to catch someone "heading to a mugging;" how would you even discern that?
> I already addressed that caveat by forcing the algorithm to stop and frisk a minimum amount in the good neighbourhood. Like a minimum of 10% of frisks taking place in good neighbourhoods.
That's exactly my point. It's fine to say that, knowing full well that there is absolutely no way this would ever happen. This line of thinking guarantees unequal application of the law based on neighborhood or other signifiers (in the US: ethnicity).
> They will stop and frisk and do that to me, too.
This is not true generally in the US. The way stop and frisk works is the police identify brown men and stop and frisk them. (It's worse if they happen to be in one of the "bad" areas, but in practice the main thing was they are young-looking brown men.) The stop and frisk program would have been stillborn if the NYPD were applying it evenly to people walking around New York City.
I honestly can't tell if you're trolling or asking earnestly, my apologies if you're just trolling...
Some of this has been addressed by other comments. But in short: there are a lot of ways to break the law other than stabbing or shooting.
Just as an example, I would wager that a stop and frisk in the area around Wall Street could yield lots of drug violations. These crimes are pursued vigorously in other parts of New York City. Why is it okay to stop and frisk some people in some neighborhoods, but not other people in other neighborhoods, when looking for the same violations?
I was sort of trolling, but this does sort of get to the heart of the issue.
Stop and frisk is sort of a shitty policy in my opinion because even if it were limited to weapons, some people may be carrying them to defend themselves. Laws against this are really shitty if you were to place yourself in the shoes of someone who is at constant risk of violence and just wants to defend themselves. If someone intends to harm you with a weapon, there’s little chance the police are going to intervene in the situation in time.
But outside of stop and frisk and just looking at policing in general, enforcing laws against nonviolent actions while you’re in areas allegedly because you’re trying to prevent violent crime is a huge driver of the disproportionate convictions of minorities. In my opinion, reducing police presence in violent areas is one of the worst solutions to this problem. That shows disregard for the people living there, regardless of how they may feel about the police. Reforming dumb laws that are getting relatively innocent people prison time would be far better. We need to have laws that allow police to be effective in combating violence without harming the entire area they’re policing with frivolous laws.
>Reforming dumb laws that are getting relatively innocent people prison time would be far better. We need to have laws that allow police to be effective in combating violence without harming the entire area they’re policing with frivolous laws.
Maybe there’s no such thing as bad downvotes either; your light gray text just draws attention to your comment.
I’m kind of interested as to what the definition of “over-police” is in this context, though. Is it disproportionate with or without the consideration of crime levels?
It seems like over correcting for this problem could be very bad for the communities it’s intended to protect.
People who are making conjectures about the effects of the app really do not understand it or how it was used at all. Its reach and impact were far more limited than most people seem to assume.
My first class of my first day of college was Calculus I. My professor had been working on this project for about 7 years before that, developing a statistical model that could predict crime tomorrow using years of past crime report data. He had an early version of the algorithm implemented in an HTML page with three <textarea> boxes, and a button that invoked a JavaScript implementation of the PredPol algorithm. I helped him make the software better and more usable so we could get it into the hands of more police departments around the world. And then a couple of years ago I left.
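For readers curious what grid-based prediction from historical crime reports looks like in its most simplified form, here is a sketch. It is emphatically not the actual PredPol model, just the general shape of the approach (bin past reports into cells, weight recent reports more heavily, flag the top cells); every parameter and data point below is invented.

```python
# Deliberately simplified sketch of grid-based hotspot prediction from past
# crime reports -- NOT the actual PredPol model, just the general shape of
# the approach: bin reports into grid cells, weight recent reports more
# heavily, and surface the top-scoring cells. All parameters are invented.

import math
from collections import defaultdict

CELL_SIZE = 0.005     # degrees; very roughly a few city blocks
DECAY_DAYS = 30.0     # recent reports count more than old ones

def cell_of(lat, lon):
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def predict_hotspots(reports, top_k=3):
    """reports: list of (lat, lon, days_ago); returns the top_k grid cells."""
    scores = defaultdict(float)
    for lat, lon, days_ago in reports:
        scores[cell_of(lat, lon)] += math.exp(-days_ago / DECAY_DAYS)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# hypothetical historical reports: (lat, lon, days_ago)
reports = [(29.951, -90.072, 1), (29.951, -90.072, 3),
           (29.958, -90.065, 40), (29.940, -90.080, 2)]
print(predict_hotspots(reports))
```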