Accusations of bias in the criminal justice system are nothing new.
From the war on drugs, to the use of force in policing, to the sentencing
process, studies show that people of color and low-income residents are
disproportionately impacted.
So why not leave the tough decisions to an algorithm that knows no race or income?
It may sound like something straight out of Minority Report, but
artificial intelligence is already being used by police stations and
courtrooms across the country.
[Clip from Minority Report: "I'm sure you all understand the legalistic drawback to pre-crime methodology." "Here we go again."]
Some believe it will increase objectivity, while others worry the tech will
simply exacerbate our pre-existing biases.
Policing, up until now, has not been a very data-driven profession.
That's why PredPol was founded: to take large datasets of crime data and
apply machine learning to identify the highest-risk locations for specific
crime types each day and each shift.
PredPol has been around since 2012.
It's in the business of place-based predictive policing - that is, using
algorithms trained on past crime data to anticipate high risk areas.
As more and more police departments are adopting PredPol and other similar
software, academics and activists alike are increasingly concerned that
these systems are just encoding human bias.
The historical over-policing of minority communities has led to a
disproportionate number of crimes being recorded by police in those
locations. That historical over-policing is then passed through the
algorithm to justify future over-policing of the same communities.
Basically, Lum says algorithms based on past human decisions are not
impartial. Officers will find crime where they're looking for it, not
necessarily where it's most likely to occur.
But MacDonald says increased police presence acts as a deterrent, leading
to fewer crimes in the targeted areas, which are marked on patrol maps as
500-by-500-foot boxes.
What we really want to do is have the officer be in that location enough to
preemptively drive down crime so that box goes away.
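The place-based idea can be illustrated with a much simpler count-based sketch. To be clear, this is not PredPol's actual model (the company has described its method as inspired by earthquake aftershock prediction); the coordinates, cell size math, and ranking below are purely illustrative:

```python
from collections import Counter

CELL_FT = 500  # side length of each grid cell, in feet

def cell_of(x_ft, y_ft):
    """Map a coordinate (in feet from an arbitrary origin) to its grid cell."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

def top_cells(incidents, k=3):
    """Rank grid cells by recent incident count; return the k busiest."""
    counts = Counter(cell_of(x, y) for x, y in incidents)
    return [cell for cell, _ in counts.most_common(k)]

# Hypothetical recent incident coordinates, in feet.
incidents = [(120, 80), (450, 300), (130, 90), (1600, 950),
             (480, 310), (140, 85), (1650, 900)]

print(top_cells(incidents, k=2))  # → [(0, 0), (3, 1)]
```

Even in this toy version, the feedback-loop concern is visible: the cells that get flagged are the cells where incidents were recorded, and incidents are recorded where officers patrol.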
The actual effect of software like PredPol on crime reduction is unclear.
While places from Los Angeles, California to Jefferson County, Colorado
have adopted PredPol, there have been few independent evaluations and
departments have reported mixed results.
The Modesto, California police department got PredPol in 2014.
In a city where the 2016 violent crime rate was over 55 percent higher
than the national average and the property crime rate was 24 percent
higher, officers say it's helped them more efficiently utilize their
resources in an understaffed department.
It almost acts as a force multiplier for us.
We can work a little smarter, use our resources that we do have in areas
where crime is more likely to occur.
Since going to PredPol, our numbers have reduced dramatically.
In 2013, we had sixteen hundred homes broken into and burglarized.
By 2018, that number dropped to 600.
However, there's no way to prove definitively that the adoption of the
software is actually behind crime reduction.
Crime trends are going down nationally, and predictive policing algorithms
aren't necessarily contributing to that decline.
It's a much broader trend that exists independent of the technology itself.
Due to the difficulty of proving efficacy, a number of departments, from
Richmond, California to Kent, England, have terminated their contracts
with PredPol.
But even if predictive algorithms are an imperfect solution, are they
actually worse than human judgment?
MacDonald definitely doesn't think so.
Our position, and I think it's a defensible one, is that auditable data
and auditable algorithms are always going to be a more effective, more
fair, and more just means of directing patrol officers than relying on
their intuition or their hunches.
Relying on intuition is also a concern when it comes to bail and sentencing
decisions, another part of the system where algorithms are starting to play a larger role.
Increasingly, defendants are assigned risk assessment scores.
These are algorithmically generated numbers based on their criminal and
personal history, used to predict their likelihood of showing up to future
court dates or committing another crime.
In some places, this software is starting to replace the cash bail system.
Traditionally, bail disproportionately impacts low-income defendants, who
languish in jail waiting for their court date just because they can't pay.
Part of the promise, right, is okay we're going to be letting a lot more
people go, but we're going to be using a risk assessment so we're only
letting the right people go, not the bad ones.
California moved to eliminate cash bail in 2018, and New Jersey has
almost completely eliminated it for nonviolent offenders.
Instead, the state is using a risk assessment tool called the Public
Safety Assessment to help decide who is released pre-trial.
It takes factors like age, current charge, and criminal history into account.
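As a rough illustration of how a checklist-style tool can turn factors like these into a number, here is a toy additive scorer. The factor names, weights, and caps are invented for illustration; they are not the actual Public Safety Assessment formula:

```python
def toy_risk_score(age, current_charge_violent, prior_convictions,
                   prior_failures_to_appear):
    """Toy additive pre-trial risk score. All weights are hypothetical."""
    score = 0
    if age < 23:
        score += 2                                # youth weighted heavily in many tools
    if current_charge_violent:
        score += 1
    score += min(prior_convictions, 2)            # cap the prior-record contribution
    score += min(prior_failures_to_appear, 2)     # cap the court-history contribution
    return score

print(toy_risk_score(21, False, 1, 0))  # → 3
print(toy_risk_score(40, True, 5, 3))   # → 5
```

A judge would then compare the score against release thresholds. Note that even this toy version depends entirely on recorded priors, which is exactly the data critics argue carries the bias.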
In 2017, the first year New Jersey used the system, there was a
nearly 16 percent drop in the state's pre-trial jail population and no increase in crime.
But as with predictive policing, some worry this simply reproduces old patterns of bias.
While the technology itself might be neutral, the data points that we
utilize to actually put into these algorithms are not.
They are a result of systemic bias in policing, a legacy of punitive sentencing laws, punitive drug laws.
Another widely used system is the COMPAS risk assessment tool, made by the company Northpointe.
Controversially, it does take educational, vocational and environmental
factors into account, alongside more standard metrics like age and criminal history.
It provides two scores that predict the risk of general recidivism and violent recidivism.
While COMPAS wasn't designed to inform sentencing decisions, judges are
often given a defendant's score during this process.
And sometimes, judges have explicitly cited these scores in their decision.
Which then turns it into a Minority Report type of situation, where you're
trying to predict whether or not someone will, you know, commit a future
crime and sentencing them based off of that prediction.
Many courtrooms also develop their own algorithms, the inner workings of which are often kept secret.
According to ProPublica, judges in Arizona, Colorado, Delaware, Kentucky,
Louisiana, Oklahoma, Virginia, Washington and Wisconsin all use some kind
of risk assessment scores during sentencing.
Many companies don't want to make the exact formulas behind their
algorithms known to the public.
But these are tools that are being used to make determinations about people's lives.
And when that much is at stake, people have a right to know exactly what
variables are being considered against them.
The COMPAS risk assessment algorithm is not public, but when ProPublica
investigated the tool in 2016, it concluded it is biased against blacks.
That's because among defendants who did not recidivate, black defendants
were much more likely than white defendants to receive high or medium risk scores.
While Northpointe would not comment on the record, it responded with its
own study, which concluded that COMPAS is not biased because it is equally
accurate at predicting who actually does recidivate, regardless of race.
From a statistical perspective, Northpointe and ProPublica are actually
both right, they're simply defining algorithmic fairness differently.
It is true that black defendants are rearrested at a higher rate than
white defendants, but Lum says both sides ignore the fact that arrest
data itself could be biased.
All of those analyses assume that the outcome, whether you're rearrested or
not, is some sort of gold standard truth about your level of criminality,
which again isn't necessarily true.
That is a human decision to make that arrest.
So if both humans and algorithms are flawed, what's the answer here?
Algorithms may be imperfect, but at least they're consistent and can save
people time and resources.
But Lum cautions that putting our faith in numbers makes it harder to
confront our biases and push for change.
When the predictions are coming out of something that is reported to be,
you know, objective and is frankly a little bit complicated, I think it
removes opportunities for that sort of reflection that would be necessary
to undo some of the biases of the past.
At the very least, many researchers, academics, and activists agree that
greater transparency and independent analysis of these algorithms is needed.
We're talking about whether or not people can be released from jail
pre-trial. We're talking about where police departments are going to
emphasize their resources.
So I think a lot more has to be done to, you know, be open and transparent
about the application of these tools, what data they're generating, and whether they're ultimately effective.
