I think "CSI" gets forensic science wrong
in a couple of ways. One is in the television
show everything happens a lot faster than
it happens in reality. The other thing is
that when I've watched "CSI", you get
the impression that the tests that they run,
like a DNA test or fingerprint comparisons,
are completely objective measurements made
by a machine. You'll stick the sample into
the machine, the machine will flash "Match"
and maybe even put up the picture of the person
who matches this fingerprint or this DNA profile.
I think what the shows miss is the extent
to which a determination that something matches
or is similar depends upon a subjective judgment
by an expert. In other words, human beings are involved in it.
That's part of what makes forensic science
interesting to me as a psychologist—is how
much human judgment and decision-making is
involved in the production of forensic science
evidence. The evidence that's presented to
the jury is not the product of some machine
only. It's also the product of a human being's
analysis of what the results of the instrument
mean or what the comparison means. The surprising
thing is how often different experts can reach
different conclusions when evaluating crime
scene evidence.
Can you give me an example from fingerprints
or DNA?
Well, from DNA—DNA evidence is often very clear-cut,
and it produces a clear profile where everyone
would agree it's either a match or not a match
with a particular sample. When the samples
are old or degraded or DNA from different
people is mixed together, it sometimes becomes
very complicated to reach a conclusion about
whether a particular person should be included
or excluded as a possible source. In those
areas, we sometimes see experts disagreeing
over interpretations.
That's where, as a psychologist, I become
very interested because I wonder: what are
the processes that are determining people's
judgment about these things and whether the
judgments of some of the experts might be
biased, such as by other things they know about the case?
Can you tell me more about that?
Yes. Well, there was one instance that really
struck me where experts were disagreeing in
one of our local courtrooms here about the
interpretation of a DNA test, with the government's
experts saying that this was a DNA match that
incriminated a guy who was accused of rape
and robbery, and an expert called in, a university
professor called in by the defense lawyer
who was saying, "Well, I am not convinced
that this really is a match. There are discrepancies here that I think are important."
I later asked the government's expert, "How
can you dismiss the discrepancies that the
defense expert was pointing to?" She said,
"I know those aren't important discrepancies.
For god's sake, they found the victim's purse
in this defendant's apartment," which was
very striking because what it tells me is
that information about the purse, which supposedly
this defendant might have stolen from the
victim, was influencing the way in which the
DNA expert interpreted the DNA evidence and
was doing so in ways that might not have been
clear to the jury. The jury may not have known
and may not have understood that the conclusion
that there is a DNA match between two samples
depends partly not on the DNA evidence, but on something about a purse.
I think it's striking. It raises real questions
about what experts should be relying on when making these judgments.
Throughout this course, we've been dealing
with erroneous beliefs and claims where if
people believe them, it doesn't really do
much harm. I know you've had a lot of experience
throughout your career in forensic science
where faulty beliefs and claims can actually
do quite a bit of harm. Can you tell me about
that and tell me about some of the claims
that are made by forensic scientists?
Well, we've had cases where the wrong people
have been sent to prison for crimes they didn't
commit because forensic science evidence was
misinterpreted. I think that's a very clear
harm. When we look closely at those cases,
often they are revealed by additional forensic
testing, or additional DNA testing reveals
that the initial DNA test wasn't correct.
It's important to go back and look at those
cases and ask what went wrong. How could the
system have gotten this case wrong in the
first place? Often it looks like there are
elements of bias in the initial judgments.
In one case I looked at in Texas where somebody
was falsely convicted based upon a misinterpreted
DNA test, it looked like the DNA analyst had
been in communication with an eyewitness.
There were two pieces of evidence against
this defendant: there was a DNA match and
an eyewitness identification. The jury just
thought it was an open-and-shut case. They
convicted this guy in less than two hours
of deliberation. They just thought that there couldn't be any possible error.
It turned out when you look closely at the
eyewitness identification, it had been done
through a showup that was extremely suggestive.
The defendant was the only person who was
shown to the victim and he was shown in a
way he was basically driven by her while he
was inside the police car, and he was forced
to wear a hat that was like the hat the perpetrator
had worn. The victim said, "I think he is
the guy," even though he was much bigger than
the guy she had described who had committed
the crime. There was a faulty eyewitness identification
which may have influenced the DNA analyst
to think she had the right guy and to look
at the evidence in a very confirmatory manner.
There were certain things about the DNA evidence
that were consistent with this person being
a contributor to the DNA, but if you looked
at it in the full context, you could see other
things that were inconsistent. The analyst
credited the findings consistent with her
expectations and ignored the ones that weren't
consistent and therefore reached an erroneous
conclusion of a DNA match.
When the case went to court where the eyewitness
was going to have to confirm the identification
with the person actually there, by then the
eyewitness had already been told that there's
a DNA match to the guy. Any uncertainty the
eyewitness might have felt about her identification
was probably allayed by knowing about the
DNA evidence, and so we have two faulty pieces
of evidence each supporting each other. What
appears to be an unassailable case for the
jury is actually built on a house of cards,
and it turns out they got the wrong guy. I
think that's a real harm. He served four years
in prison for a crime that he did not commit.
Later, DNA testing found the actual perpetrator.
That's good. You said it often comes down
to human judgment and interpretation. How
is it that a person, or two different people, can look
at some piece of forensic evidence and come
to the wrong conclusion? What are the cognitive
mechanisms that might be going on?
Well, it can only happen when there is some
ambiguity in the evidence itself. As I've
said earlier, in some cases the evidence is
just clear-cut and no one would disagree,
but in other instances, there's an ambiguity;
there could be multiple interpretations, and an expert judgment is required.
Okay, when experts approach a task like that,
just as any other human beings, they can be
influenced by what they expect to see or to
some extent by what they desire to see. People
who expect to see something and are highly
motivated to see that thing are more likely
to see it. They're more likely to interpret
an ambiguous stimulus in a manner that's consistent
with what they think or want to see. We all
do this. Most of the time our use of expectations
to help us interpret stimuli is very helpful
because most of the time our expectations are
correct, but sometimes they aren't correct.
The problem for a forensic expert is how to
prevent this process of what's sometimes called
Observer Effects, the tendency to see what
one expects or desires to see, how to prevent
that from coloring one's interpretation of
the evidence in ways that undermine the quality
of the evidence that's going to be presented
to the jury. I think the best way to do that
is to try to minimize the amount of contextual
information that the expert receives. If the
expert approaches the comparison not knowing
whether it's supposed to match or not supposed
to match, or what the answer is supposed to
be, then it's more likely that the expert's
judgment will be determined just by the scientific
data and won't be colored by the surrounding
contextual information that may create what
we would think of as a bias.
My sense is that the justice system works
best if the scientific experts are basing
their conclusions purely on the science and
don't allow those conclusions to be influenced
by other factors, such as other evidence that
might suggest that the person did or did not
do it, or police theories of the case, or
their suspicions about the case, and so on.
The same argument is made for use of blinding
procedures in other areas. When instructors
grade exams, often they do it without knowing
the student's name. I think it's a good practice
because it prevents the professor from being
influenced by other information about the
student that may lead the professor to think
that this student is likely to perform well
or not perform well. We would like the instructor's
grading of the examination or the paper to
be based on what's in the examination or the
paper and not on any of the surrounding information.
Same thing should go for forensic scientists.
Are there any claims that forensic examiners
make that are difficult to test or are not supported by evidence?
Well, the National Research Council of the
US National Academy of Sciences did a report
in 2009 on forensic science, where they reviewed
various areas of forensic science. They concluded
that a lot of the claims commonly made by
forensic scientists in court have not been
adequately validated by scientific research.
Particularly, they were concerned with claims
by some kinds of forensic examiners that they
can identify things with certainty—fingerprint
examiners saying they can determine that a
print found at a crime scene came from a particular
finger to the exclusion of all the other fingers
in the universe. That would have been a very
hard claim to prove, and the sense of the
scientific community is that there has not
been adequate research to justify claims that strong.
Some of the other kinds of claims that are
problematic are claims of forensic scientists
that they can assess the probability that
certain propositions are true, like, "I think
it's highly probable that the bite mark found
on this victim came from this person's teeth."
To go from the examination of the similarity
of two bite marks to the conclusion that the
bite marks were probably made by the same
person requires a certain leap of logic, and
a lot of academics are questioning whether
forensic scientists can make that leap or
whether they're making that leap too readily
without adequate scientific documentation that they can do it.
That has created a lot of controversy about
what forensic scientists are saying in court
and what they should be allowed to say. That
has become a hot issue right now, something that's being discussed widely.
What about DNA evidence? Is DNA evidence infallible?
Well, it's certainly not infallible. The DNA
evidence is often presented with very impressive
statistics that measure the probability of
a coincidental match between two DNA profiles.
An expert might say the two DNA profiles we've
compared have the same genetic characteristics
and those characteristics would be found in
only one person in a billion in the human
population. That's very impressive, of course.
Those estimates have some scientific basis,
but they don't necessarily tell a jury what
a jury needs to know. The one-in-a-billion
estimate is how rare the profile is. It tells
you nothing about the probability that the
profiles could match by mistake or by accident.
Laboratories do make errors when testing DNA
profiles. Sometimes DNA evidence is misinterpreted,
as I've mentioned to you earlier. We've seen
a number of instances where labs make mistakes
such as cross-contaminating samples, so DNA
can accidentally be transferred from one sample
to another in the crime lab. We know these
are not common events, but they happen sometimes,
and they happen much more commonly than one in a billion cases.
There's a margin of error in DNA testing as
there is in any other scientific process.
Part of the difficulty is we really don't
have good estimates of what that margin of
error is. We'd like to think it's low. There's
some evidence that it's probably not as low as one would hope.
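The point that lab error, not coincidence, dominates the risk can be illustrated with back-of-the-envelope arithmetic. The error rate below is a hypothetical figure chosen for illustration, not a measured laboratory statistic.

```python
# Hypothetical numbers for illustration only: even a small laboratory error
# rate dominates a tiny coincidental-match probability.
p_coincidence = 1e-9  # reported random-match probability: 1 in a billion
p_lab_error = 1e-4    # assumed rate of false matches from lab error (e.g., contamination)

# A reported match can be wrong through coincidence or through lab error.
p_false_match = p_coincidence + p_lab_error - p_coincidence * p_lab_error

print(f"chance a reported match is false: about {p_false_match:.1e}")
print(f"share of that risk due to lab error: {p_lab_error / p_false_match:.1%}")
```

On these assumed numbers, virtually all of the risk of a false match comes from the laboratory, not from coincidence, so the one-in-a-billion figure by itself overstates the strength of the evidence.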
Can you tell me about the prosecutor's fallacy?
Well, when statistics are presented about
the frequency of matching characteristics,
lawyers often disagree about what those statistics
mean. I first noticed this about 30 years
ago when I was a young professor and I was
starting to talk to lawyers about forensic
science and forensic statistics. Back in those
days—those were the days before DNA testing
was introduced and there was a lot of serology
testing. It would be common for labs to compare
blood-group evidence and say that two blood
samples came from somebody with the same protein
and enzyme blood markers and that those markers
would be found in 1 person in 100. Basically,
we found a match; it matches the defendant; it's
a 1-in-100 match.
Then when I would talk about this, I would
notice that different lawyers would have very
different interpretations of what that would
mean.
The prosecutors tended to say, "A 1-in-100
match—that means there's only 1 chance in
100 that this defendant is innocent because
he's either the source of the blood, or it's
a coincidental match and a coincidental match
occurs with a frequency of 1 in 100, so there's
1 chance in 100 this was a coincidence; 99
chances in 100 he's guilty." That was their interpretation.
Now it turns out that interpretation is fallacious,
and we call it the Prosecutor's Fallacy just
because we noticed prosecutors doing it a lot,
not because only prosecutors do it. It seems to be very
common among news reporters also.
The fallacy is this: although only 1 person
in 100 would match, in a particular community
that might have millions of people, 1 in 100
could be thousands of people. The defendant's
only one of thousands of people who would
match. The evidence narrows down the possible
sources of the evidence, but it doesn't narrow
it down to the point where we can say that
there's a 99 percent chance that the defendant
is the source. In fact, we really can't tell
the probability that the defendant is or is
not the source based upon the blood evidence
alone. You have to consider the strength of
the other evidence and whether this defendant
is more or less likely than anybody else who
matches to be the source.
That's one kind of fallacy people come up
with, thinking that you can determine the
probability that the defendant is the source
of some particular sample based on the rarity
of the characteristics that match the defendant
to that sample. You really can't do it.
I mean, the defense lawyers tend to make another
error. It's not just prosecutors who do these
illogical things. Among defense lawyers, defense
lawyers would say, "Well, my client matches
on a characteristic found in 1 person in 100,
but at 1 in 100, there are thousands and
thousands of people in this community who
would match, and he's only one of thousands,
so the chance he's guilty is one in thousands.
Therefore the evidence is practically worthless for determining whether he's guilty."
What that fails to recognize is that this
forensic evidence drastically narrows the
population of people who could be the source
of the sample. Ninety-nine percent of all
potential people who could be the source are
eliminated without eliminating the defendant
who may already be a suspect. Knowing that
the defendant matches a characteristic that
is as rare as one percent should greatly increase
your confidence that he's the source in ways
that go beyond what the defense lawyers recognize.
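Both fallacies can be made concrete with a small Bayesian sketch. The priors below are hypothetical, chosen only to show how the same 1-in-100 match behaves under different assumptions about the other evidence.

```python
def posterior_source_prob(prior, match_freq):
    """P(defendant is the source | reported match), assuming the test always
    reports a match when the defendant truly is the source."""
    # Bayes' rule with P(match | source) = 1 and P(match | not source) = match_freq
    return prior / (prior + (1 - prior) * match_freq)

match_freq = 0.01  # 1 person in 100 shares the blood markers

# Prosecutor's fallacy: asserts the posterior is 0.99 no matter what.
# Defense fallacy: asserts the match is worthless.
# Bayes: the posterior depends on the prior probability, but the match
# always strengthens the case by the same factor of 100.
for prior in (0.5, 0.01, 0.0001):
    print(f"prior {prior}: posterior {posterior_source_prob(prior, match_freq):.4f}")
```

With a 50/50 prior the posterior happens to be about 0.99, which is the one situation where the prosecutor's arithmetic looks right; with a 1-in-10,000 prior the posterior is under one percent, yet still a hundredfold above the prior, so the evidence is far from worthless.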
The net result is that evidence from forensic
science can be very powerful, but it has to
be interpreted in light of all the other evidence in the case.
People get into fallacious or erroneous
thinking when they think that they can draw
a conclusion about the ultimate issue of guilt
or innocence or whether the defendant is the
source or not the source from this forensic
science alone in isolation without considering the entire factual context.
Got you. It sounds similar—can you tell
me about the Texas sharpshooter fallacy and how it applies to DNA?
Well, the Texas sharpshooter fallacy is based
upon the story of this famous Texan. Back
in the old days when Texans would carry firearms—I
guess they still do, but back in the days
when Texans would carry long guns, a particular
Texan visited his neighbors a mile away from
his own farm and claimed to them that, with
his long gun, he was the most accurate shot
of all time. To prove this, he picked up his
long gun and he fired shots toward his farm
a mile away, and he invited his neighbors
to come over and see where the shots had landed.
He claimed that he had aimed at targets on
the side of his barn. The neighbors came over
a little later, and when they got there they
saw that there were targets painted on the
side of his barn and there were bullet holes
in the center of each of the targets on the
side of his barn. They were incredibly impressed
that he could hit the targets from over a
mile away, and they acclaimed him the best
sharpshooter of all time.
The fact is he had actually cheated a little
bit to do this. He had fired at the barn,
but he painted the targets after the bullets
had already hit. We sometimes call this "painting
the target around the arrow." There's actually
a Swiss version of this story, I'm told, where
it involved an archer shooting arrows.
You paint the target around the arrow or you
paint the bulls-eye around the bullet hole.
Doing that allows you to make your predictions—or
your system—look much more accurate than
they really are.
Okay, now the question is: what does this
have to do with forensic science? I've argued
in academic writing that forensic scientists
sometimes engage in a process that's very
much like what the famous Texan did, that
when we look at DNA analysts, for example,
and they're comparing a DNA sample from the
suspect to a particular evidentiary profile,
sometimes the evidentiary profiles are a little
ambiguous, hard to interpret, and what tends
to happen is that they'll look at the suspect's
profile and use that to help them interpret
the evidence profile in a way that causes
the interpretation of the evidence profile
to be closer to what the defendant's profile
is.
In effect, they then find a match and they
say, "The bulls-eye hit the target. We found
a match." They compute the probability of
that occurring by chance, but the computation
is mistaken because what they don't take into
account is that they were able to move the
target around. The bulls-eye got painted after
the bullet already hit. It's after they knew
what the defendant's profile was that they
interpreted this other profile. They did it
in a way that caused the match to be more
likely. That then distorts the statistics
in ways that can cause a dramatic
underestimation of the probability of a match by coincidence.
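The distortion can be sketched with hypothetical numbers: once ambiguous loci are resolved toward the suspect, those loci no longer constrain a coincidental match, so the reported statistic overstates the evidence.

```python
# Hypothetical parameters for illustration.
ALLELE_FREQ = 0.1  # assumed population frequency of each matching allele
N_LOCI = 10        # loci in the profile
AMBIGUOUS = 3      # ambiguous loci interpreted after seeing the suspect's profile

# The reported statistic assumes all loci were read blind...
reported = ALLELE_FREQ ** N_LOCI
# ...but only the unambiguous loci genuinely constrain a coincidental match.
actual = ALLELE_FREQ ** (N_LOCI - AMBIGUOUS)

print(f"reported: 1 in {1 / reported:,.0f}")
print(f"actual:   1 in {1 / actual:,.0f}")
print(f"coincidence probability understated by a factor of {actual / reported:,.0f}")
```

On these assumed numbers, painting three bulls-eyes after the fact turns what is really a 1-in-10-million chance into a reported 1-in-10-billion claim, a thousandfold understatement of the coincidence probability.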
We've done some informal experiments in which
we have engaged forensic scientists in interpreting
DNA profiles and so on. We've been able to
show that, in fact, they do this. Sometimes
the world creates what we call naturalistic
experiments where analysts are required to
interpret evidence without knowing what the
defendant's profile is. If this process really
occurs, we should expect sometimes that they
would interpret the evidence in a way that
would exclude a suspect, but then when they
see the suspect's profile, they change their
interpretation of the evidence. In fact, that
happens. Sometimes they actually admit to
doing this. When you ask them, "Why did you
interpret the profile in this particular way,"
they'd say, "Well, I considered the defendant's
profile. I think that this part of the signal
must be a true signal and this other part
must be noise because the true signal is what
matches the defendant." They're admitting
what's in effect a circular logic.
This is a long-winded way of explaining a
fallacious form of reasoning that can lead
to misestimation of the strength of DNA evidence.
I think it happens for other forms of forensic science as well.
It may well be part of what happened in the
Mayfield fingerprint error case. I don't know
if you've talked about this, but there's a
famous error in fingerprinting that arose
out of the Madrid train bombing in 2004.
A terrorist bombed a train in Madrid,
and a number of people were killed. Investigators
at the crime scene found a plastic bag that
contained some detonators. On that plastic
bag, they found a fingerprint. The fingerprint
was searched through large databases, and
it was "matched" to a man named Brandon Mayfield
who was a lawyer in Portland, Oregon. How
did that match occur?
Later they found out that wasn't Mayfield's
fingerprint. There was an Algerian suspected
terrorist who matched it even better, they
later found out. For a time, they were claiming
that Mayfield was a match even though there
were some discrepancies between Mayfield's
fingerprint and this print on the bag. The
parts that matched, they said, "These are
the good data. We will count these." The parts
that didn't match, they said, "Well, maybe
the bag was distorted or maybe there was an
overlay." They credited the data consistent
with the hypothesis of a match and discarded
the data inconsistent with it. As a result,
their interpretation of the target changed over
time in a way that made it more likely that
the wrong person would match. That's an example
of fallacious reasoning by a forensic scientist
that can—or a group of forensic scientists
that can lead to a wrong result.
The solution to this kind of thing is to use
more rigorous procedures. We know from psychological
studies and from experience in many areas
how to make forensic science more rigorous
than it is. The question is whether people
in the field are willing to go to the extra
effort needed to improve their scientific
rigor. I hope they are.
Do you think it's important to test claims
and opinions when lives and livelihoods are on the line?
Why, no, Matt, I think we should just allow
people to say anything they want regardless
of whether there is a scientific basis for
it.
All right. I'll reframe...
I'm joking, of course. The answer would be
yes.
Human beings often come to believe things
that are not true or not fully warranted.
Tom Gilovich has this famous book about how
we know what isn't so. We know from research
in psychology and from a long history of human
error that people often can believe things
that are unwarranted or unjustified or not
fully supported. Because we all, as human
beings, have this tendency to jump to conclusions
and frankly to make mistakes, I think it's
really important when lives and fortune are
at stake in these opinions that we test them
out and check them. That's what science is
all about. That's, I think, one of the great
advantages of science over other methods of
developing knowledge—is the commitment to rigorous testing of beliefs.
How do we go about testing these kinds of
claims?
Well, I suppose it depends on what kind of
claim we're talking about. When we're talking
about scientific claims, scientists often
use the term validation when they're talking
about testing of claims—validation, meaning,
to make sure that the claim is valid. I think
part of what you have to do is look very closely
at exactly what the expert is claiming, and
then think about how one can test whether that
claim is true. If an expert is claiming that
he or she can distinguish among individuals
using a DNA or a fingerprint test, one way
to put that claim to the test would be to
submit to them samples that you know come
from the same person or a different person
and see how accurately they can distinguish
those. See how often, if you give them samples
that you know came from the same person, they
say that they are from a different person
or how often, if you submit samples known
to be from different people, they claim that's
from the same person. The rates of those kinds
of errors will tell you a great deal about
how accurate the claim really is. I think
it's very important that we do those studies.
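Scoring such a blinded study is simple bookkeeping. The counts below are made up purely to show the computation, not drawn from any real proficiency test.

```python
def error_rates(results):
    """results: list of (truly_same_source, examiner_declared_match) pairs."""
    false_pos = sum(1 for same, said in results if not same and said)
    false_neg = sum(1 for same, said in results if same and not said)
    n_diff = sum(1 for same, _ in results if not same)
    n_same = sum(1 for same, _ in results if same)
    return false_pos / n_diff, false_neg / n_same

# Hypothetical study: 50 same-source pairs (2 missed)
# and 50 different-source pairs (1 falsely declared a match).
results = ([(True, True)] * 48 + [(True, False)] * 2
           + [(False, False)] * 49 + [(False, True)] * 1)

fpr, fnr = error_rates(results)
print(f"false-positive rate: {fpr:.1%}, false-negative rate: {fnr:.1%}")
```

The false-positive rate is the figure that matters most to a defendant: it estimates how often the examiner declares a match when the samples in fact come from different people.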
Some claims are about the general ability
to discriminate among things. Other claims
have to do with the rarity or frequency of
certain characteristics. If an expert says,
"I've looked at these two different DNA profiles.
They're the same, and DNA profiles of this
type would be found in one person in a billion,"
well, how does the expert know that? Obviously
the expert hasn't tested a billion people
to know that. A certain amount of mathematical
extrapolation must have been involved. One
needs to examine very carefully how that conclusion was generated.
In the case of DNA evidence, the claims are
supported by doing research on the frequency
in various populations of certain genetic
markers or characteristics. The samples they
use are not billions of people. They often
are hundreds or thousands of people. To get
from a database of 100 or 100,000 people to
a claim that only 1 person in a billion matches,
assumptions have to be made that the markers
assort independently and are statistically independent of one another.
Now are those assumptions true? Well, there's
been a lot of debate and discussion of that.
There's some support for these assumptions.
Whether the support's adequate is still debated
in some circles. I think the conclusion has
been that they're true enough for government
work. They're true enough that we allow experts
to make them when testifying in very important
criminal cases, which doesn't mean that they're
proved beyond all possible doubt.
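The extrapolation itself is just the product rule: per-locus frequencies estimated from modest databases are multiplied together on the assumption of statistical independence. The frequencies below are hypothetical.

```python
# Hypothetical per-locus genotype frequencies, each estimable from a
# database of only hundreds or thousands of people.
locus_freqs = [0.1, 0.08, 0.05, 0.12, 0.07, 0.1, 0.06, 0.09]

profile_freq = 1.0
for f in locus_freqs:
    profile_freq *= f  # valid only if the loci are statistically independent

print(f"estimated profile frequency: about 1 in {1 / profile_freq:,.0f}")
```

Eight loci with frequencies near 1 in 10 compound to a figure in the hundreds of millions; whether that multiplication is legitimate is exactly the independence question the experts debate.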
You were involved in the case with O.J. Simpson.
Can you tell me a little bit about that?
Well, that's now very old case. In 1994, O.J.
Simpson, who was a well-known former football
star and then TV pitchman, was accused of
murdering his former wife in Los Angeles.
There was a great deal of DNA evidence involved
in the case: blood samples everywhere. At
the time, I was a professor at UC Irvine,
where I am now, and I had done a lot of writing
about DNA evidence. I had also, in the role
of lawyer, litigated cases on the admissibility
of DNA evidence. I suppose it's not surprising
that O.J.'s defense team asked me to join
them as 1 of 12 lawyers in the so-called dream
team that was put together to defend him.
I served as defense counsel during the criminal
trial that began in 1994 and terminated in
1995 with Simpson's acquittal. Most of what
I did was analyze and work on the DNA evidence.
What the defense team did was look at evidence
that appeared to incriminate Mr. Simpson and
see if we could generate alternative explanations
for that evidence consistent with his innocence
and see if we could support those explanations.
I think it's a little-known fact that psychology
played a big role in Simpson's acquittal, at least in my view.
The defense team included a psychologist,
me, and we consulted with a number of psychologists.
At one point, we even approached Daniel Kahneman
about being involved in the case. Although
he didn't ultimately work on the case, Jay
Koehler worked on the case. Gary Wells was
consulted on the case. A number of famous
psychologists were consulted on the case.
The defense team, in constructing theories
of the case, was strongly influenced by a
psychological theory called the story model,
developed by a psychologist named Reid Hastie
and his colleagues at the University of Colorado
and Northwestern University. The story model
is a theory about how people come to believe
and accept theories of a criminal case. What
it tells us is the circumstances
that make a particular story or theory believable.
It involves things like logical coherence
of the elements, how completely a theory explains
the evidence at hand and so on. I would say
that in designing Simpson's defense, the defense
team was guided heavily by theoretical notions
from the story model. We were trying to tell
a compelling story of the case that would
explain the evidence and be consistent with
Simpson's innocence. The psychological theory
helped us determine what elements that story had to have.
That's the little-known story about the role
of psychology in the acquittal of O.J. Simpson.
That sounds great.
The reason Simpson was acquitted despite what
appeared to some to be an overwhelming amount
of DNA evidence that incriminated him was
that the defense was able to construct some
theories that had to do with accidental contamination
and some intentional planting of evidence
and was able to present at least some support
for those theories, sufficient support to
make those theories plausible in the minds
of the jury so that the jury believed that
the prosecution's theory of Simpson's guilt
was not the only possible explanation. There
was another story, if you will, that was plausible
enough that the jury had to take it seriously,
and that created the reasonable doubt that
led to Simpson's acquittal.
This course is about the science of everyday
thinking. What advice do you have for people
out there who want to think better and do
better in their everyday lives?
That's a good question. Boy, I'm not sure
I've mastered how to think well myself. I
think we all struggle with thinking clearly,
with marshaling our thoughts. Sometimes I
have benefited from trying to be very
systematic, decomposing problems into elements
and thinking about them carefully, piece by piece,
but the fact is I rarely make actual decisions
that way. I think a lot of our decision-making
happens intuitively, through processes that
we don't fully understand and really can't
analyze. The kind of advice that I give people
about making better decisions is to be careful
about what information you allow yourself to consider.
If you're a forensic scientist and if you
want to avoid being influenced inappropriately
by extraneous information, make sure you don't
know that information. If you're an instructor
and you want to avoid being influenced by
how attractive or charming the students are
when you grade their exam, grade their exams
blindly.
There are circumstances in which less information
sometimes leads to better decisions. Knowing
what those circumstances are and then blinding
yourself to inappropriate information is maybe
one of the best ways to improve your decision-making.
My name is Bill. I think about proof. I think
about how proof is generated. I think about
how people respond to proof. I think about
proof that's put forward that isn't really proof.
What don't you think about?
