SPEAKER: This is a presentation by Human Development Outreach and Extension at Cornell University.
PHOEBE C. ELLSWORTH: OK, well, thank you. I mean, I'm here about once every 10 years. And every time I'm here, I wonder why I'm not here more often. I really love it. The second thing I'd like to say is that this work is mostly the work of Barbara O'Brien, who is, shall I say, the next [? Sammy ?] Summers. I mean, the next graduate student who just finished. And so you'll be seeing a lot of her. She is currently an assistant professor of law at MSU.
So I'm the sort of person that uses PowerPoint mainly not for words. So you're not going to have everything here. Since the advent of reliable DNA testing in criminal investigations, the public has-- this is basically where I left off at the last stop, for those of you who were there-- come to recognize that innocent people are being convicted in numbers far greater than anybody used to think.
And news stories of another person being exonerated in a rape case, or another person leaving death row because it turns out to have been the wrong guy, have become pretty common. There are now 124 of those off death row, and lots more rape cases. And it's probably the most important development in the criminal justice system in the last quarter century.
Psychologists have not been behind in investigating the sources of this sort of problem. And you're probably all familiar with most of them. What do I point this at? Oh, that'll do. So first of all, mistaken identification. In 81% of the cases of exonerated people, you have somebody who said, that's the guy, or rarely, the woman, when it wasn't. So Elizabeth Loftus, Gary Wells. This is a well-researched field.
Secondly, one that's not so well researched but is pretty important, figuring in between 30% and 50% of cases, is tainted lab work and junk science. So police labs are often very closely connected to police departments. And so the police department sends over the sample and says, this is the guilty guy, and this is what we found inside the woman. Is it a match?
And the people who find lots and lots of matches tend to get rewarded and promoted more than the people who say inconclusive or no match. Bill Thompson is the person who's done a lot of the work on this, with the scandals of the Houston lab and so forth. The point is, just because you've gotten a DNA match reported by a cop lab doesn't mean that that's the guy.
And then junk science is things like hair analysis, fiber analysis, voice stress analysis, brain fingerprinting. Every few years, there's a new thing, new technology, that's going to go straight to identify the liar. And they're all basically the polygraph, or else they're finding DNA in pieces of carpet, which you can't really determine very well.
And then you have false confessions. Here, Saul Kassin and Richard Leo are the people to read. Surprisingly, false confessions figure in 22% of these exonerations-- they've actually gotten the person to confess. When we look at these cases, a couple of things stand out to us. The first one is you read about the exoneration, and you look at it, and you think how flimsy the evidence was in the first place.
A single witness identified a black person sitting in the back seat of a car that was speeding by, and that's the case that sends this person to death row. A voice stress analyzer, or a brain fingerprint, or a confession extracted from a feverish, sick 14-year-old after nine hours of questioning. And then they've got their guy and the case goes ahead.
So although all of these factors contribute to false convictions, there's still another question, which is, why? I mean, why did-- and not the jury. I mean, the jury doesn't have a lot of choice here because they get what's presented to them. But why did the cops and the prosecutors think that they had a case that was so well worth pursuing? And in addition to this evidence, it shades over into prosecutorial misconduct. If you haven't quite got enough evidence to convince the jury about the guy you know is really guilty, it doesn't seem really immoral to build a little bit more to bolster the case. And that happens.
The second thing that stands out, that lots of people have noticed, is how frequently prosecutors and police, after the DNA exoneration, stick to their original hypothesis and say, I don't care. I know it's true. And often they can persuade the victim that they were right. The joke that goes around, but based on real cases, is the story of the unindicted co-ejaculator.
So you've got a rape case. Somebody raped her. She identifies him. You get the sample from the rape kit. You've convicted him. The sample is tested after the fact, and you find out he's not the guy. Ah. Well, there were two people in this rape, says the prosecutor-- the first time you've ever heard the two-person hypothesis. My guilty guy wore a condom, and the other guy, who's disappeared into thin air, just left his semen in there, and that explains everything. And you sort of say, this is goofy. I mean, can't you hear what an idiot you sound like with this hypothesis?
[LAUGHTER]
But people actually do. And Carol Tavris and Elliot Aronson have a nice new popular book that you can use in your courses, called Mistakes Were Made (But Not by Me), that covers the legal context, the political context, the medical context, and the marital context, for those of you who want a little self-help here.
I mean, the factor that cuts across all of these, the one that can occur with any kind of evidence, and the one that we're interested in here, is confirmation bias. You all know the pioneering studies, but this is a great example of where basic cognitive psychology has become wildly relevant. [INAUDIBLE] You give somebody a series of numbers: 2, 4, 6. This represents a concept. Tell me what the concept is. You can test it by proposing other sets of numbers.
Ooh, says the subject. How about 4, 6, 8. How about 6, 8, 10. And on and on they go. And occasionally, some out-of-the-box thinker will say 1, 3, 5, or something like that. But very few people do downward sequences. Very few people say, how about 151, which actually would fit the hypothesis. Here, you have nothing at stake.
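To make the 2, 4, 6 exercise concrete, here is a minimal sketch in Python-- not something from the talk-- of why confirmatory guesses teach you so little. The hidden rule ("any increasing sequence") and the subject's hypothesis ("even numbers going up by 2") follow the standard Wason setup; the specific test triples are illustrative assumptions.

```python
# Minimal sketch of the Wason 2-4-6 rule-discovery task (illustrative, not from the talk).

def hidden_rule(triple):
    """The experimenter's actual rule: any strictly increasing sequence."""
    a, b, c = triple
    return a < b < c

def subjects_hypothesis(triple):
    """What the subject guesses after seeing 2, 4, 6: even numbers increasing by 2."""
    a, b, c = triple
    return a % 2 == 0 and b - a == 2 and c - b == 2

confirmatory_tests = [(4, 6, 8), (6, 8, 10), (20, 22, 24)]    # chosen to fit the hypothesis
disconfirmatory_tests = [(1, 2, 3), (5, 10, 100), (3, 2, 1)]  # chosen to try to break it

for triple in confirmatory_tests + disconfirmatory_tests:
    print(triple, "rule:", hidden_rule(triple), "hypothesis:", subjects_hypothesis(triple))

# Every confirmatory triple gets a "yes" from both the rule and the hypothesis,
# so it can never reveal the mistake; only triples like (1, 2, 3) or (5, 10, 100)
# show that the hypothesis is too narrow.
```

Run it and the confirmatory triples agree everywhere, which is exactly why someone who only tests cases they expect to work walks away more confident in a wrong rule.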
So there's a terrific review of this, by the way, by a guy named Nickerson in 1998, reviewing both the basic and applied research. What the confirmation bias leads to is, first of all, you look for information that's consistent with your hypothesis, your belief, your stereotype, whatever. And then here's the one that's a particular danger to people in our profession. You test the hypothesis in a way that's likely to confirm it. You think, shall I use these materials or those materials? And your intuition tells you, this one is more likely to work. For reasons that have nothing to do with my theory, I'll go with that one.
Then you believe information-- this is also very true of us-- that supports your hypothesis. But when the experiment bombs, you go after the methodology. You criticize. You minimize. You say, OK, how should we change this so that we can get the phenomenon that we actually know is really true? And you remember evidence that fits your hypothesis and conveniently forget evidence that doesn't. And finally, if the new evidence is ambiguous, you manage to make it consistent with the hypothesis. So those are the prongs of this.
What we did-- and we're going to be talking about students in this one. Anybody who knows how to get a prosecutor to be a subject, I will be in love with that person forever. But some of our prosecutors have not expressed deep enthusiasm about being research participants. But that's where we're going next, we hope.
So Barb O'Brien was, before coming to graduate school, a criminal defense lawyer. So she had access to all sorts of materials. And we made a very realistic file of a shooting case, a murder case, which had about an hour and a half or more of materials. And it was the story of a guy gets shot in his house, and the cleaning lady finds him the next morning. So there's nothing much from around the time of the crime.
Over the course of the investigation, a weak case develops against one guy. But there are other people. He was fired by the victim a few months ago. He doesn't have an alibi. There's a sort of hesitant identification of him in the neighborhood around the same time. It also is like a police file in that it was quite boring. I mean, there's tons of stuff there. You have the initial report. That's Mr. Briggs. And then you have all sorts of things like applications for search warrants, and exciting information like which cars were parked in the neighborhood at the time.
This is what police reports actually look like. And it was given to the subjects in the sort of disorganized way that the police might have gotten it. Subjects loved this study. They all watch those TV shows and they think they can do it. And to get what looked like a real police report, they were ready to do it.
So the first study was really simple. The idea is that before you have a hypothesis, your search is relatively open-minded. But simply having a suspect-- a favored suspect, a prime suspect-- turns you into confirmation bias mode. I've written about this before in a couple of highly criticized articles in which I argued that single-person show-ups were not so bad and lineups were not so great, because often when the cops do a show-up, they have no idea who did it. They're driving around the neighborhood. They're looking. They show the witness: is this the guy? The witness says yes or no. And the cops accept that answer.
By the time they do a lineup, they know who the guilty guy is. They are not looking for information from lineups. And therefore, the motivation of confirmation bias is strong, and the opportunity to bias the witness is also pretty strong.
So our basic design in study 1, which will be followed in some of the others, is they read this police file for an hour. And then you get this weak case emerging. And then we do the random assignment. Half of them are asked: so you know all the information isn't in yet, and you're going to be getting more information. But for now, who do you think is the person most likely to have done it, and what are your reasons for that?
And the other subjects just don't get that manipulation. For the first study, they came back a week later for another hour. In some later studies, we found it made no difference if they just took a break and then did the other hour. But we were trying to put in more time delay in the first one. And in the second hour, they get a little more information that's consistent with Briggs' guilt. They find a gun in his house that's the same caliber as the murder weapon. Unfortunately, that's the most common caliber of gun there is around. And they get some information that's inconsistent.
So the witness who saw him in the neighborhood within 10 minutes of the crime-- it turns out they were an hour off about the time of the crime. So not only is that witness's testimony less relevant, but a lot of the other suspects who had alibis no longer have such good alibis anymore. So they should theoretically be back in the pack.
They find a stash of cocaine in the victim's home, and they learn that his nephew, who had gone into huge debt gambling and was a ne'er-do-well, stood to inherit big time if this guy died. And then, as I said, also more information consistent with Briggs. Our dependent measures were memory for facts that supported or didn't support your hypothesis.
Then we said, you can choose-- here's a list of 26 different lines of investigation you might follow. You get to choose three, because you don't have all the time in the world. You're a police officer and you can't do everything. Which ones are you going to follow? And then we see, are they ones that are consistent with Briggs being the guy, or are they other ones?
And then we faced them with the inconsistent or ambiguous evidence. We say, OK, subjects, so what do you think of this identification now that you find out that the crime took place an hour later than you thought? What do you do about the fact that the witness said he had a goatee and he couldn't have? And stuff like that. And then we code their answers for whether they say that should make you less certain of guilt, it should make no difference at all, or it should make you more certain of guilt.
So here are the lines of new evidence that they wanted to pursue. And as you see, if you-- this is out of three-- if you were asked to name a suspect, you were significantly more likely to pick lines of evidence that were consistent with Briggs' guilt. This is a fairly strong test, because we didn't throw out any subjects. And some subjects guessed somebody else, not Briggs. But they're still in there weakening the effect.
And obviously, for a lot of people-- this is over here-- who didn't tell us a hypothesis, Briggs was the most plausible person to have done it. So the difference is whether they've actually told us and given us reasons. And then these are the findings on interpreting ambiguous evidence. Zero means you think these inconsistent facts just don't cut one way or the other-- that they make it no more or less likely that Briggs is the guilty guy.
And the people who had named Briggs are right there. They say this inconsistent evidence is basically neutral, whereas the people who hadn't named him are more likely to say-- this goes from minus 2 to plus 2-- that it gives them some pause in their evaluation of Briggs' guilt. This particular effect of naming him held across all the studies that we've done, so I'm not going to show you that graph every single time.
The next question was, how do you undo it? Is there any remedy that you can come up with? I mean, Barb in particular is action-oriented and wants to create reforms in the legal system. So getting people to consider alternative hypotheses, which is one of the methods often used in the JDM literature, was the first one we did.
Same case. Same basic design. The same two conditions as in study 1, except that we now added a condition where the subject names a suspect and discusses the evidence against him, as in study 1, but also gives the reasons why Briggs might not be guilty. What are the pieces of evidence that cut against guilt in this case?
And then we had the blockbuster condition where we said, OK, you name three suspects, and list the evidence why they might be guilty and why they might not be guilty. And what we found there was somewhat to our surprise-- did I do two? No, I just did one-- that-- wow, this is actually-- OK, that's where I want to be.
The first two bars basically replicate study 1: if you're asked to name the suspect, you're more likely to focus on future evidence that's consistent with that person's guilt than if you're not. And if you consider the reasons for and against Briggs in particular-- why he might be guilty, why he might be innocent-- that really helps. You're back down to just about where you were if you didn't announce a hypothesis at all.
But the blockbuster manipulation was as bad as anything, that when you list three people and the reasons for and against, you focus much more on Briggs. You're much more confident that it's Briggs, and so forth. So that was a somewhat unexpected finding. I should say, the other measures in this study all went in the same direction, but didn't reach significance. So the interpretation of ambiguous evidence and stuff like that.
So one of the things we're doing is following up on this effect, where I'll be interested in your thoughts about what might be going on. One possibility is that over there, when you're considering three people pro and con, you're running into a Norbert Schwarz fluency problem: you can't think of all these reasons for the second person and the third person, and it's getting harder and harder. And so you say, I didn't have this problem thinking of things for Briggs. Must be him. I feel even more sure of that.
And the other somewhat related is the feeling that basically having done all this work, you don't have to do any more work. You've really considered this case thoroughly, and now you can move into the home stretch on it. Those are pretty hard to disentangle from each other when you're trying to think of a good method, which is why the follow-up study to this isn't done yet, because we haven't thought of a great way to do that.
So in the last study I'm going to talk about, we decided to draw on a different body of psychological literature as a source of hypotheses about remedy. And that is the accountability literature-- people have said that if you make people accountable for some aspects of the decision, that's going to improve decision-making.
So we looked at outcome accountability, first of all. That, people have argued, should not help. Lerner and Tetlock basically say, process accountability will improve decisions. Outcome accountability won't. Outcome accountability is we tell participants that we're going to interview them afterwards about how well their decision matches the actual correct decision reached by the judge and the jury.
In the process condition, we say, we don't care who you end up talking about, but we want to see whether you use the right kinds of investigative methods that a real crackerjack police department would use, and we're going to have your responses evaluated afterwards by a top FBI consultant. In the persuasion condition, we told them that we're going to tape record their arguments for why they chose what they chose-- all of these people are tape recording things-- and have other people, in a jury of the future, rate these arguments for how persuasive they are. And then the last condition was a no accountability condition.
So what we were vaguely predicting was that process accountability should be good, and the others should be not so good. And what we found is that there was no difference in any of the accountability conditions. They basically didn't improve things over no accountability. But-- oops-- thinking you were going to have to persuade somebody was really bad. That was the one that made people choose lines of evidence that focused on Briggs more than on other people.
And this is interesting because this is, of course, exactly what prosecutors have to do, that as they develop their case, they move from investigating mode, to confirmation mode, and finally to how am I going to frame this to the jury, which is where the fake evidence often comes in, because you say, oh, gee, my snitch is not a very persuasive person. The jury would really be happier if we had a really clean-cut, nice eyewitness. And then you have the line-up with the clean-cut, nice person who eventually-- often after several tries, which the jury doesn't hear about-- identifies the guy.
So persuasion made things worse. And not only in what you look for, but also in how you interpret-- again, zero means that the ambiguous evidence is neutral with respect to whether Briggs is really guilty. And it really is bad, right? People in all of the accountability conditions didn't differ significantly in their interpretation of these new inconsistent pieces of evidence, but the people who thought they were going to have to persuade were least disturbed-- least likely to think that this made it more probable that Briggs actually wasn't the guilty guy.
One criticism of this study was, who cares that your hokey little FBI consultant is going to evaluate this? Nobody believes it. It's stupid, the accountability. There's nothing riding on it. So we replicated this whole study with some more conditions, offering them a large gift certificate to Borders if they managed to do a convincing job with whichever evaluation it was-- process, outcome, or persuasion.
And the results were identical to this. The persuasion still had pretty negative effects, and none of the accountability manipulations made any difference. It's still possible, if you were Lerner, or Tetlock, or somebody like that, you'd say, you're a social psychologist. You haven't been in this business very long. That's not really the right way to manipulate accountability.
So it's possible that we'll follow up on this. I'd rather try to get real prosecutors at this point. And the fact that I never have been able to keep the accountability results straight in my mind is probably the real reason for that. So what we've got at the end of the day-- which is not the end of the day, since this research will still be going on-- is that naming a suspect consistently, across all studies, increases confirmation bias: it makes you more likely to think that that suspect is guilty, to focus on evidence that's relevant to his guilt, and to downplay evidence that's inconsistent with his guilt.
If you have them list the evidence for and against a whole bunch of suspects, that also increases the bias. And if they expect to persuade others, that increases the bias. So sadly, the results so far show that we've come up with many a way to create more bias in the prosecutors. The only thing so far that has really worked to decrease the bias-- other than preventing them from having a hypothesis in the first place, which probably can't be done-- is to have them go through and systematically list the reasons not only why the suspect might be guilty, but why the suspect might be innocent.
I will now talk about-- these are the dependent variables that worked consistently. One is, which lines of investigation do you want to follow in the future? You can't do everything. Where are you going to go? Briggs or not Briggs? One is we have this new evidence that comes up, which is either ambiguous or inconsistent. Do you think this cuts against the Briggs hypothesis or not?
No group was so biased that they actually thought that the new evidence made it more likely that Briggs was guilty. And finally-- I didn't talk about this, but if any of you are familiar with Dan Simon's work-- what he finds is that if I've just found somebody guilty and eyewitness evidence was consistent with guilt, now I think eyewitness evidence in general is a more valid source of information, or confession evidence, if it was a confession. General principles consistent with my decision also changed, in most but not all of the research, from the beginning of the study to the end of the study.
Here's what most people don't put in their talks. Here's a whole bunch of things that didn't work in this line of research in case any of you are interested in doing related work. Some of them I believe, and some of them, due to confirmation bias, I really don't believe. I think that the hypothesis is still true.
Severity of the victim's injury. I went into this, and Barb went into this, thinking that when the crime is really severe, the pressure on the cops is going to be greater-- the outrage at not getting somebody, the need for closure, perhaps-- and so you're going to get stronger effects.
And we did this. And when I tell you what the manipulations were, you'll probably say, you dummy, how did you think that would work? And it's probably true. So in the second study, which I didn't report here, either the victim was killed, was murdered, which, in law, is a more severe crime, or he merely ended up a paraplegic.
[LAUGHTER]
In retrospect, clutching my confirmation bias to my heart, I think some people might think it's actually worse to be a paraplegic. You have to look around at this paraplegic person struggling along in his Christopher Reeve life, instead of being nicely dead, and so forth. So maybe we didn't actually manipulate severity in the nicest, clean-cut way that we should. And what we need to do is child molestation and stuff like that that really gets people's outraged juices going. So I actually think I'm right and this is not just confirmation bias, but the evidence isn't in yet.
OK, nothing on accountability. And some of you who actually know something more about accountability might have suggestions about this afterwards. This was a strong recommendation of Frank Yates that this was going to be the variable that solved all our problems. And it was the variable that just sat there, like a lump. And we didn't get any memory effects. We also had a pretty elaborate memory test.
And when we had the two sessions two weeks apart, we got a shucks-level, maybe-trending result, but basically not enough to report. And it may be that even with our whole piles of paperwork and stuff, the memory load was not as much as it would be in a police department. But I'm less biased in favor of memory showing a big effect in this than I am in the severity case, because I think they're stripping it down to a pretty small, simple story, and they're not remembering a lot of stuff consistent or inconsistent with Briggs' guilt.
So the real world-- let me just move this back-- has many more elements than there were in this study that are going to exacerbate this bias. First of all, the reward system for police and prosecutors fits in with this bias and is likely to exaggerate it. You are rewarded for catching criminals and getting them convicted. You are not rewarded for coming out like Mr. Super Boy Scout and saying, aha, I've discovered a flaw in our case. I believe we've got the wrong man.
And nobody comes out and says, wow, what a great cop you are, unless they do it 20 years later or something like that. Then you can start to get credit. But if you're in the middle of an investigation and you're the person who says, I think we've made a mistake-- particularly if you can't say, I think it's really so-and-so and I've got this good information-- then it's not good for your career.
And so one thing to think about in terms of system-level variables is, is there anything you can do to make that sort of decision more rewarded than it is now? The second thing I want to say is that in the real world, it does, in fact, shade over into misconduct more than you would think. If you read Scheck and Neufeld, for example-- Actual Innocence-- or read a lot of the cases, you find that in addition to misperceptions, the prosecutors do become more active, and that they're not using the voice stress analyzer because they think it means anything.
It's not that people mistakenly believe in the voice stress analyzer. They believe they've got the guilty person and they need some evidence, so they bring in Mr. Fly By Night Voice Stress Analyzer Expert to do it. And they get a bunch of ambiguous lines, and then they say, aha, that just shows. They may say that to themselves. More often, what they do is say it to the suspect, particularly if the suspect is young or not too swift-- and many a person who is arrested is not really very smart, I should say. Probably most of the smarter criminals are the ones who are not arrested. And he is the guy who's left standing there when the police come.
And so they say, look, here's your polygraph record, or here's your voice stress analyzer record, or whatever it is-- this is really bad. This shows that when I asked the question about whether you knew how it was done, your line went all over the place. Now, the suspect has no idea when the questions were asked or what the line meant, but the message is, we've got the goods on you. That's step 1.
Step 2 is, we found your fingerprints, and somebody who saw you there. Many of you may be surprised to know that this is perfectly legal. The police, during the course of an interrogation, can tell you that they have all sorts of evidence-- evidence that would be quite convincing to somebody. And you're academics; you try to make things fit. So if somebody tells you, but your fingerprints are there, you think, how can this be? How can I make sense of it?
Now, if you didn't already know this, here's something to learn from this colloquium. If they tell you that, you say to yourself, they were not there. That is a lie. I mean, that is the hypothesis to go with. Not, how did my fingerprints get there? Because if you start there, you're going to be confessing one of these days.
[LAUGHTER]
And so it was, in fact, surprising to me that all of this is perfectly OK. And then if you're having some cognitive dissonance about how these fingerprints got there, then comes the you-blacked-out-and-had-amnesia theory. And so they say, it often happens in moments of high stress-- routinely happens-- that people just can't remember anything, even though they did the thing.
So even people like us, if we had no contact with the criminal justice system, might begin to think, yeah, I've heard of amnesia. I'm a psychologist. That's a real concept. Could it possibly have happened? So there are other things about false confessions that are merely keeping somebody there for hours, and hours, and hours, telling them that the only way that they can go home to mom is to confess, and so forth.
But the out-and-out lying tends to strike us as a little bit more upsetting. That's allowed. What's supposedly not allowed is, when you have exonerating evidence, you're supposed to tell the defense about it. The fact that that doesn't happen is, I think, really part of real confirmation bias, because you don't think the evidence is very exonerating. I mean, you can see how defense lawyers, who twist everything to look as though it's consistent with innocence, might. But the fact that somebody photographed him in a different town at the same time-- nah.
[LAUGHTER]
So you just don't turn it over. And then you have the whole coaching-witnesses, planting-evidence kind of thing. All of this doesn't seem that way to the prosecutor. Once you've gone far enough with confirmation bias, you think your job is done-- you have discovered who did it. That's not your job anymore. Your job is to get a conviction of this obviously guilty person.
And so your job is to persuade people. And quite often, your case is less good than you thought, sometimes for reasons like: you have perfect evidence from your search, but unfortunately your search was illegal and the evidence is inadmissible. And that's a really hard one for people-- to say, OK, I throw the whole case out. Or you have very good evidence, you think, from your pet snitch, who you've worked with for many years and who hasn't lied to you before, but he's not really going to be great on the witness stand when his record comes out and what he got for testifying for you comes out.
So in order to get this obviously guilty, dangerous person behind bars, you take other steps. One possibility is you might say, OK, so let them have their cognitive biases, but absolutely draw the line at any of these further behaviors. And what we've been thinking about is, is there any way that you can stick a spoke in the wheel earlier on in the process, and prevent people from going far enough down the wrong track that they no longer can back up? And that's all, folks.
[APPLAUSE]
SPEAKER: This has been a presentation by Human Development Outreach and Extension at Cornell University.
Dr. Phoebe Ellsworth, professor of law and psychology at the University of Michigan, discusses confirmation bias as a source of false convictions in this colloquium sponsored by the Department of Human Development and others.