SPEAKER 1: This is a presentation by Human Development Outreach and Extension at Cornell University.
GARY L. WELLS: What I really want to end up focusing on is this notion of the creation of distorted retrospective judgments, especially the creation of false confidence, because this is something I've been intrigued with. I'm going to tell you about an effect called the post-identification feedback effect that we first demonstrated 11 years ago, in 1998.
And it's intrigued me ever since. It's really a very interesting phenomenon, in part because it's just so powerful, so robust. But I'm going to take a different emphasis here than I did in [? Valerie's ?] class yesterday, where I took it in the direction of asking what this tells us about the need for the US Supreme Court to revisit this issue. And it turns out, it tells us something extremely important in that regard.
What I want to do with this group is talk about the question-- which I'm still working out-- of why this happens. What psychological process is at work? What do we know about it?
And of course, in general, we're talking about eyewitness identification, the identification of criminal suspects from lineups. Most lineups, most identifications, actually happen with photos. Over 90% of all initial eyewitness identifications happen from photos, not live lineups. Live lineups are much more rare.
So here's what I want to do. I want to start off by talking about the role of mistaken identifications in convictions of the innocent, just so everybody recognizes it, then give a very quick, brief modern history of the role of scientific psychology here, talk a little bit about the general methods that have been used in this area, which are very straightforward, and then say something about relative judgments, which is the dominant staple idea about how mistaken identifications happen.
But then I want to talk about certainty, because certainty is really the nexus between these mistakes and wrongful convictions. OK? If it wasn't for certainty, these mistakes would not go very far, and so that's where this post-identification feedback effect comes into play. I want to talk about the question of why this occurs, because it's a very interesting phenomenon, I think.
When we look at analyses of convictions of the innocent-- and criminologists, really, have looked at convictions of the innocent for a long time; you can go back to the 1930s and find treatments of that-- it really wasn't until the 1990s, until about 1992-- so pretty recent, if you think about it-- that definitive exonerations of innocent people started to come about. And it started to come about because of the development of forensic DNA testing. Prior to that, all these cases were of the maybe-this-person-was-innocent type. Probably innocent, but how do we know for sure? Now, what we're talking about are definitive cases of innocence.
And what these cases show-- which I'll run through sort of quickly with you, the idea behind them-- is that mistaken eyewitness identification was the primary evidence used to convict these people. We began looking at these in earnest as they were unfolding, and then published an article in 1998, in which we looked at what we all ought to want to know, what everybody ought to want to know. And that is, here are people who were convicted, people like Kirk Bloodsworth, of murder and rape.
Kirk Bloodsworth, you know, had never been in trouble with the law in his life. He was an upstanding citizen, who had served his country well in the Navy. A knock comes on the door, and after that, bad things happen. He was convicted in 1985 of this murder and rape. He was sentenced to death row.
Fortunately-- you know-- death sentences operate pretty slowly in this country, and so there was enough time for forensic DNA testing to get developed, which was not available until the beginning of the '90s. Meanwhile, on a technicality, his sentence was reduced to life. He served nine years before he was exonerated, but the evidence that led to his conviction was five mistaken eyewitnesses.
If you look down the right-hand column-- this was table one from that article we published in 1998-- this is an analysis of the first 40 DNA exoneration cases. 90% of them are cases of mistaken identification. Often, there's a little bit of something else, too. There's Ronald Cotton, the top one up there. If you watched that 60 Minutes a couple of weeks ago, Jennifer Thompson was the one who misidentified Ronald Cotton.
Now, Jennifer Thompson's identification wasn't the only evidence. The prosecutors will tell you, well, there was the similarity of shoes and a flashlight. Yeah, what does that mean? Well, you know, Jennifer said he had black shoes. And sure enough, what did they find when they executed a search warrant on Ronald Cotton's place? Black shoes. And Jennifer said, well, he had a flashlight. What did they find in his place? A flashlight! So those come into play.
But of course, those things, even though prosecutors count them as part of the evidence, aren't really driving this. What's driving this primarily are mistaken identifications, 90% of these cases. That figure has slipped a little bit as we look at these cases. There are now 232 definitive DNA exonerations, 177 of which are cases of mistaken eyewitness identification.
Here are 99 of those people who were mistakenly identified, and I'm only putting 99 up there because I couldn't fit 177 onto a slide. There are still a couple I don't have pictures of. I've met many of these people. You know, these are very interesting people. I think the average amount of time served in prison before they were exonerated is right around 10 years. Again, some of them were, in fact, on death row.
But this is only a small slice of the cases. It can only be a small slice of the cases. Why? Because, first of all, when the innocence projects go back and look into these cases-- into prisoners who say, you know, I was mistakenly identified and blah, blah, blah-- what you find is that the biological evidence was not collected, or you find that the biological evidence was not collected properly. It has to be done properly.
Or it was destroyed. There were no laws about this. As soon as the person's convicted-- legitimately convicted-- that evidence can be destroyed and commonly was. Or the biological evidence deteriorated, because heat and light are the enemies of that evidence, or the biological evidence was lost. And so, most cases will never-- we can never test whether they were mistakenly identified.
More importantly, the biggest category is that there was no biological evidence. And in fact, every one of these DNA exoneration cases is a case of sexual assault. Now, it may be sexual assault plus murder or sexual assault plus robbery, but they're sexual assault cases-- not because sexual assault witnesses are poor eyewitnesses, but because that's where the DNA is that could trump an eyewitness. OK?
And in fact, most serious crimes do not leave behind any definitive biological evidence. It's rare to have any definitive, perpetrator-determining, DNA-rich biological trace evidence for murders, muggings, burglaries, drive-by shootings. How excited is this guy going to have to get robbing the 7-Eleven to leave behind that kind of evidence, right?
So if you're mistakenly identified, in all likelihood DNA isn't going to come to your rescue, unless it was a sexual assault case.
We estimate that fewer than 5% of eyewitness identification cases have any potential for biological tests, for DNA tests. So what that means is that, at a minimum, the number of undetected cases out there has to be 20 times greater than the number detected. OK? So all of a sudden, you're looking at somewhere close to 4,000, but even that is a great underestimate, because even the sexual assault cases are an underestimate, since the evidence is no longer there for most of those cases in the first place.
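To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python, using only the figures cited in the talk; the variable names are illustrative, not from the talk's materials.

```python
# Back-of-the-envelope arithmetic from the talk: if fewer than 5% of
# eyewitness-identification cases have any testable biological evidence,
# then the DNA-detected cases are at most a 1-in-20 sample of the
# mistaken identifications that could ever be detected this way.
detected_mistaken_id_exonerations = 177  # figure cited in the talk
max_testable_fraction = 0.05             # "fewer than 5%"

# Undetected cases must then be at least 20x the detected ones.
multiplier = 1 / max_testable_fraction
undetected_at_minimum = detected_mistaken_id_exonerations * multiplier

print(f"Undetected mistaken-ID convictions: >= {undetected_at_minimum:,.0f}")
# -> Undetected mistaken-ID convictions: >= 3,540  ("somewhere close to 4,000")
```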
Now, this has changed the landscape, you know, of this work on eyewitness identification, because prior to these DNA exonerations, you know, all we had were our lab studies. But there is an interesting history there. The post-1992 DNA exoneration cases really come as no surprise to eyewitness scientists. They're only a surprise to the legal system, and they are a surprise to the legal system. They're shocked. Even defense attorneys are like, he was innocent? OK? I mean, the system itself has been shocked by this.
Janet Reno was so shocked when the first dozen or so cases came out-- and they were all eyewitness ID cases, you know-- that she called me and brought me to Washington to ask me, what's going on here? And that led to some good things.
But we weren't surprised. We didn't know forensic DNA testing was coming along, but we had looked a lot at eyewitness identification evidence, based on our experiments, and we knew that there were problems.
In fact, by the late '70s, psychological science was blowing the whistle on this evidence. Look at our writings. Look at our articles. In top journals, top psychology journals, saying, hey, there's a problem with this evidence. Look at this. Look at all these mistakes that get made, right?
And we were also showing what we call a belief-accuracy gap. In other words, belief in eyewitnesses is way up here, accuracy of eyewitnesses is way down here, and we had identified this gap. So the experimental literature, interestingly enough, was rather prescient here.
There are, of course, a large number of variables that lead to error. I'm not going to talk so much about the variables that lead to mistaken IDs-- I'm going to talk about one process-- but you can see reviews of these. A large number of variables contribute to this.
Now, the legal system is paying attention. Some good things have been happening. In particular, I would like to point to the procedural reforms based on the psychological science. This is an interesting success story, because the states of New Jersey, North Carolina, Wisconsin, and Minnesota have reformed their procedures in ways that we have articulated they should reform them, based on the psychological science, based on articles in psychology journals. Cities like Boston, Denver, Dallas, Santa Clara County, California, and other smaller jurisdictions have changed their procedures as a function of this, and changed, in particular, one that I think is very critical that relates to the talk I'm giving today.
Now, if you add up all of these, my best estimate right now is that about 20% of the US population is covered by these reformed procedures. We've got 80% to go, and it's taken a long time to get that 20%. So how long it will take, you know, to bring in places like Georgia and Alabama, I don't know. But you know, North Carolina came in, and that's a pretty conservative state.
So it doesn't have anything to do with conservatism, necessarily. And in fact, interestingly enough, in both North Carolina and New Jersey, these were Republican-led efforts, believe it or not, because this is a justice issue. It's not a liberal/conservative issue.
And it's a justice issue in part because, look, anytime you get the wrong guy-- right-- it means that the person who committed the crime is still out there offending. When Ronald Cotton was misidentified by Jennifer Thompson-- if you saw that 60 Minutes-- Bobby Poole was still out there committing sexual assault. So this is an issue of also getting the bad guy, but basically getting the right answer.
Of course, what we do in this work is pretty straightforward. To an experimentally-oriented audience like this, you could have created this paradigm just in your head, if you'd never read anything about eyewitness identification. What happens is we create events. We create the things that witnesses see. The beauty of that is we know exactly what happened. We know who the perpetrator is and so on and so forth.
We expose people to this over and over again. We then have them view a lineup, and when they view this lineup-- again, since we know who the perpetrator is, because it's some kind of staged or created event with one of our people-- we can score this for accuracy: whether they pick the right person or wrong person, whether it's a hit, a miss, a correct rejection, an incorrect rejection, and so on and so forth. So we get identification decisions from them, and we can look at the certainty with which they make those identifications.
So we can study the relationship between the accuracy of the decision and the certainty of the decision, and then we can go in, of course, and systematically manipulate things like the nature of the witnessed event; the characteristics of the witnesses-- young, old, black, white, male, female, and so on; the instructions given prior to viewing a lineup; the type of lineup that they view-- we can try to invent new types of lineups to see if they work better; and the behaviors of the lineup administrator, which turn out to be very important. The behavior of the lineup administrator is really what we're going to talk about today-- in particular, how it affects certainty.
Now, one of the patterns that we have observed over and over and over again, and that really holds up extremely well, has become sort of a staple conceptualization within the eyewitness identification area with regard to lineups. It's something that I simply called the relative-judgment process. Actually, I introduced this clear back in 1984, so this is a long time for one idea to have held on. But it's really a simple idea. Simple ideas are always the hardest ones to think of. It's easy to come up with complex stuff, I find. But the simple stuff, that's really difficult to think of. I'm not sure why that's the case.
I remember Danny Kahneman telling me that, you know. It's the simple stuff that's hard to think of. Really complex things, yeah, those are easy. It's a strange idea, but I believe it to be true.
This is such a simple idea that, in fact, it almost seems absurd to talk about. Eyewitnesses tend to select the person who looks most like the perpetrator relative-- relative-- to other members of the lineup. In other words, what eyewitnesses do when they see a lineup is they compare one person to another, decide who looks most like the perpetrator, and then home in on that person, and that person has some reasonable likelihood of being identified.
Now, I spent quite a number of years after this showing this process in very complex ways. Let me show it to you very simply, because, again, the simple stuff is the hardest to think of, but I finally figured out a simple way to do it.
What we did to illustrate relative judgments was stage a crime 200 times for 200 separate witnesses. For a hundred of those witnesses, we showed them this lineup, and 54% were able to pick out the perpetrator. Nothing magic about that number. That's going to go up or down as a function of lots of things, right? How good of a view they get, how long between the time of the crime and the time of the lineup, and so on.
All these witnesses were warned that the actual perpetrator might not be in the lineup, and 21% make no choice. They're either saying he's not there, or, I can't be sure enough to pick him out. OK? But what was important here was the question: OK, what happens now for the other 100 randomly assigned witnesses, if we take the real perpetrator, eliminate him from the lineup, and replace him with no one?
And we give those witnesses exactly the same instructions. The person who committed this offense, you know, who you saw commit this crime, might or might not be in this lineup. That's not how we do it. We don't put in an extra, of course.
[LAUGHTER]
Now, the question is, where does the 54% go? It has to go someplace. I mean, that's a category you can't choose any more. All these numbers have to add to 100%, so it has to go someplace. One possibility is that it slides down here and joins the 21%, so now 75% are making no ID. After all, you pulled the real perp out. That 54% should slide down there, but that's not really what happens, and you sort of know that, because otherwise I wouldn't show you this slide.
Instead of 54% sliding down there, only 11% slide down there. The dominant tendency is to go to the next best guy. That's relative judgment. OK? Who now looks most like him? Well, now, number four does. Number four's jeopardy tripled, roughly tripled, because the perpetrator was absent. That's the problem with relative judgment. Some member of the lineup's always going to look more like the perpetrator than the remaining members of the lineup, even when the real perpetrator's not there.
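Here is a minimal sketch tabulating that redistribution, using only the percentages reported in the talk; the per-filler breakdown is not given in the talk, so it is left out.

```python
# Perpetrator-present condition, as reported: 54% pick the perpetrator,
# 21% make no choice, and the remainder (25%) pick fillers.
present = {"perpetrator": 54, "no_choice": 21, "fillers": 25}

# Perpetrator-absent condition: only 11 points of the perpetrator's 54%
# migrate to "no choice"; the rest shift onto the next-best-looking filler(s).
absent_no_choice = present["no_choice"] + 11      # 32%
shifted_to_fillers = present["perpetrator"] - 11  # 43 points

print(f"No-choice rate, perpetrator present: {present['no_choice']}%")
print(f"No-choice rate, perpetrator absent:  {absent_no_choice}%")
print(f"Points shifted onto fillers instead: {shifted_to_fillers}")
# The talk notes the most-similar filler's selection rate roughly tripled.
```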
Now, we did this work, and I did the relative judgment notion, you know, as you saw, starting in the mid '80s, and, you know, described the fact that the real problem's going to occur when the real perpetrator's not there. Every DNA exoneration case involving mistaken identification is exactly like that.
When Jennifer Thompson-- I keep saying that as though people in here saw the 60 Minutes piece, but I think many of you did, from what I can tell-- when Jennifer Thompson-Cannino viewed that lineup and identified Ronald Cotton, Bobby Poole wasn't in the lineup. Had Bobby Poole been in the lineup, he would have looked more like the perpetrator than the other people in the lineup. But he wasn't, so Ronald Cotton looked more like the perpetrator than the other people in the lineup. Ronald Cotton is the one who ended up serving that 8 and 1/2 or 9 and 1/2 years, or whatever, before DNA finally freed him.
Now, there are lots of implications of this. This simple idea is actually very rich, and it gets used over and over again by eyewitness identification researchers looking at a variety of phenomena. But, you know, just a couple of them to mention. This is why, for example, it's critical that you warn witnesses prior to viewing a lineup that the actual perpetrator might not be present. OK?
Now, we did give that warning in the experiment that I just showed you, and you still got relative judgments, but it's even worse than what I showed you if you don't give this warning, all right? So this should be a standard warning. Interestingly, North Carolina made their reforms by law-- their legislature passed these reforms, which include this warning, unanimously last year, so that's the law there now.
Prior to that, there was a survey of North Carolina police departments-- and a survey in Texas found the same thing, and a survey in Georgia yielded the same thing. First of all, 80% of police departments don't even have procedures. OK? Nothing in writing. Nobody can really tell you what their procedures are.
Of the other 20%, well, 15 of those 20 percentage points-- so all but 5% of departments-- never gave this warning and said they never intended to. Right? So as intuitive, as obvious as it may seem to us, it has not been obvious to the legal system to do that.
Now, most modern jurisdictions tend to do this, or some variation of this. But this is also why you have to be careful about the fillers. Fillers are non-suspects who are in the lineup just to fill it out. OK? Sometimes, if it's a live lineup, they'll pull from jail cells. In Virginia, where they do some live lineups, they'll sometimes use police officers as stand-ins as fillers, which is a really bad idea, because the officers are all standing there relaxed, while the suspect, innocent or guilty, is like a deer caught in the headlights, and anybody can pick out who the suspect is.
But in photo lineups, fillers are ideally people who were in a jail cell at the time, so you know they didn't do it. There's only one person in the lineup who is a suspect in the case, and everybody else is a filler. So that's a filler.
But you have to be careful how you select the fillers. The fillers need to be selected in a way that they're plausible, that they fit the description the witness gave. In this case, the witness described the perpetrator as a black male with Afro-style hair. And I don't know if you can see this guy's hair, but compare it to this guy's and this guy's, and everybody can look at that and recognize who the suspect is. That's highly suggestive. Right?
And in fact, again, who's going to look most like the perpetrator? The guy with the Afro, right? But you already knew that going in. So these are among the implications of this relative-judgment problem. Right?
Oh, I just said that. OK. This is also why we invented the sequential lineup, if you know about it-- a one-at-a-time lineup where you tell the witness, hey, I have a number of people to show you, you're going to view them one at a time, and you have to make a decision on each one before I show you the next one. The idea was to prevent witnesses from making relative judgments, just comparing one to another to decide who looks most like him.
Instead, every person they encounter, they have to compare to their memory to decide, is that the person or not. And since they don't know how many are coming, they can't get to the third one and say, oh, well, he looks more like him than number two did-- because maybe number four's going to look even more like him. So you can't use that strategy. Right?
And in theory, it makes witnesses dig deeper. In practice, it appears to be a much more conservative criterion. It raises the decision criterion, so you get fewer choices overall. Some people in the legal system don't like that, although every jurisdiction that I just showed you, they've all switched to sequential, that whole list.
Now, I want to talk about certainty, because the self-reported certainty of the eyewitness determines lots of things. It determines whether prosecutors will decide to prosecute. It determines whether judges will permit the eyewitness to testify, even when the procedure was highly suggestive. That's the US Supreme Court's Manson versus Brathwaite decision in 1977-- the last word the US Supreme Court has ever said about eyewitness identification-- which I'm showing through other means to be highly flawed and in need of revisiting.
The certainty that the eyewitness expresses is a primary factor in determining whether or not jurors believe that the eyewitness made an accurate identification. So everybody is influenced by the certainty of the witness. Certainty of the witness matters.
Jennifer Thompson-Cannino: positive, absolutely positive. Many witnesses will take the stand and say things like, I'm 150% sure. How do you exceed 100? Since when does the scale not stop at 100? But for some, it doesn't even stop at 100.
It's the nexus between mistaken identification and convictions of the innocent. It's the grease. If a witness makes a mistake but says, I can't be sure, it's not going to get prosecuted, and so on. So this is the key. It's not just a mistake that results in a wrongful conviction. It is a mistake plus high certainty, or what I call false certainty.
We do know that there is some relation between certainty and accuracy. From a meta-analysis, the most favorable subset of the data that we can find says that the correlation could be as high as 0.40 under really pristine conditions. Now, that's not very impressive, especially since that's just a subset of the data, the most favorable subset. But let's say it was there. Even that is appreciably lower than the correlation between height and gender. OK?
But what makes it worse-- which I want to talk about-- is that the way identifications are commonly obtained-- in every jurisdiction except the ones that I listed for you, which have now made reforms to prevent what I'm going to talk about-- leads to ambiguity and problems about the meaning of eyewitness identification certainty.
So what I want to talk about then is the creation of false certainty. And the key to understanding the creation of false certainty is to recognize that eyewitnesses can be influenced even after they've made a choice from the lineup. OK?
Now, I first began to worry about this when I started seeing some cases in which we were able to prove-- but still not know exactly what it meant-- that after the witness would make an identification-- so the witness looks at some kind of photo lineup and says, uh, number three?
Good! You identified the actual suspect. That's the response of the detective who's administering the photo lineup.
You talk to detectives, and they say, yeah, I always tell them. In other words, no one's hiding anything here. Right? They just come right out and say it. Good. That's the guy. Good, yes, you got him. Right?
But that's where I really began to worry about it. I started thinking, should you be doing that?
And then I came across a case where the defense called me. I had already stopped doing criminal cases, and I actually don't do them now because I want to maintain relations with police and prosecutors to get them to change their procedures, and they don't like defense experts, so I don't do that and haven't done it for many years now. But this defense attorney calls me, and he says, you know, I know my client's innocent, but this witness, we just had a preliminary hearing, and this witness is positive. How can she be positive and yet wrong? Because I know she's wrong.
And I said, well, I don't know, but when you go to trial-- you know, she's already been on the stand, right? Well, when you go to trial, you've got nothing to lose here, in my opinion. This rule that says, don't ask a witness a question you don't know the answer to-- that's not necessarily true.
And in this case, I said, ask the witness what, if anything, the detectives-- because there were three detectives who administered this photo lineup to her-- said or did when you pointed to number three in that photo lineup.
And there had been three detectives in the room at the time. So he asked her, and she said, they clapped. OK.
Now, that was it for me. I'm like, that's it. So I got together with one of my PhD students, Amy Bradfield, and I said, we've got to study this. We've got to know what this does. Should they be giving this kind of feedback?
And so first thing we did was we gave it a name, post-identification feedback. And we created then a paradigm, and this paradigm has been used over and over and over again in various ways, and it's very powerful.
We first published this, what I'm going to show you, these data, in 1998. It has really become a very productive area. Basically, all we do is we have people witness an event, and then we show them a lineup and get a lineup identification. In the data that I'm going to show you-- and we have variations on this-- the standard way this is done is that the photo lineup does not include the perpetrator. And we get a lot of mistaken identifications, as we typically do when we don't include the perpetrator.
And anyone who makes an identification, we just randomly assign to get some kind of feedback. The feedback depends entirely on how the random assignment comes out, because we don't care who they pick-- they pick number one, they pick number four, it doesn't matter. If the random assignment comes out telling us to give confirming feedback, we say, you know, good, you identified the suspect.
We're not doing cartwheels. We're not applauding. Right? Just a comment. Good. You identified the suspect. Just like that. All right?
Or in some cases, we give disconfirming feedback to find out what happens with that. We say, oh, actually the suspect was number x. So if they say three, then we say, oh, well, actually the suspect was number five. Or they say five-- oh, actually the suspect was number three. It just depends on how the random assignment comes out. Or, according to the random assignment, nothing-- no feedback, no response at all. OK?
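A minimal sketch of that random-assignment logic, with hypothetical names; the key point is that the feedback condition is chosen at random only after the witness has already committed to a choice.

```python
import random

FEEDBACK_CONDITIONS = ["confirming", "disconfirming", "none"]

def post_identification_feedback(witness_choice: int, lineup_size: int = 6) -> str:
    """Return the experimenter's scripted comment, assigned at random
    AFTER the witness has made an identification."""
    condition = random.choice(FEEDBACK_CONDITIONS)  # independent of the choice made
    if condition == "confirming":
        return "Good. You identified the suspect."
    if condition == "disconfirming":
        # Name some other lineup position as "the suspect."
        other = random.choice(
            [pos for pos in range(1, lineup_size + 1) if pos != witness_choice]
        )
        return f"Actually, the suspect was number {other}."
    return ""  # literally say nothing; even "OK" reads as confirming

print(post_identification_feedback(witness_choice=3))
```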
And we really had to train our experimenters. What you find is that if you even just say "OK" after they've made their ID, they construe it as confirming feedback. There does seem to be a bias here, by the way-- as kind of a side story-- to construe feedback as confirming unless you explicitly make it disconfirming, or you really, literally, absolutely say nothing, which is what we train them to do now.
So then we take various measures. You can think of this as testimony, because in fact these are important testimonial variables at trial, about the witnessed event-- I'll show you some examples of questions here in a minute-- and about their lineup identification. Questions like the one we're going to primarily, but not exclusively, focus on: how certain were you at the time of your identification that you identified the real gunman?
Now, keep in mind, this feedback does not occur until after they've already made their identification. OK? So those in the confirming feedback, disconfirming, and control condition, they should report that they were equally certain at the time they made their identification, because it was randomly assigned after they'd already made their identification. Right?
How good was the view you had of the gunman? OK. Well, obviously, the three conditions ought to answer the same-- we gave all three the same view, all right? How closely were you paying attention to the gunman? How well could you make out details of the gunman's face? How easy was it for you to identify the gunman? How good a basis did you think you had for making an identification?
All these are important testimony variables. And in fact, these three-- certainty, view, and attention-- are three variables that the US Supreme Court singled out and said, those are important variables. Those are variables that speak directly to the reliability of the witness. OK?
Well, as you might expect, what happens here is that all of these things, all of these measures, are driven and influenced by the feedback. But I'm going to give you a sense of just how robust these effects are by showing you the data in a different form.
We were talking at lunch today about different ways to describe data that are more meaningful for a specific problem. And in this case, what I think is quite meaningful-- to give you a sense of the power of these manipulations to influence certainty, or view, or what they say about being able to make out details of the face, and so on-- is the percentage of these witnesses who now score at the extreme.
That's very important. Remember, I said that if the witness is not certain, you're not going to see them in court. Right? The prosecutor's not going to call them, there are not going to be charges or whatever. Right? But if the witness is certain, then they will.
So we just put in a cutoff. We asked, what percentage of witnesses are scoring at the extreme on certainty? Remember, all these witnesses are wrong. They've all made mistaken identifications. OK?
And what you find here, in this particular experiment, is that about 12% or so have false certainty in the control condition. They say, I was certain-- and I'm still certain, but I was certain at the time of my identification-- and they're wrong. OK?
Only a couple of percent, one or two, are saying that they had a great view, which is good, because we gave them a lousy view, and only 1% or 2% are saying you could make out details of the face, which is also good, because they could not make out details of the face. Right?
But in the confirming condition-- where, remember, the whole manipulation is simply a comment, like, oh good, you identified the suspect-- what you find now is that 50% of these witnesses say that they're positive, and that they were positive all along.
Over 25% of them say they had a great view. Almost 20% said they could make out details of the face. And all of these are retrospective distortions that cannot be true. It cannot be true. So look at the magnitude of this effect. This is huge. All of this, going on in here, that's all manufactured false confidence.
It's manufactured in the sense that, whereas we don't know where the control-condition certainty comes from-- why are these witnesses positive even though they're wrong?-- we know exactly where this comes from. It came from the comment. Right? It came from the post-identification feedback. All right?
I mean, we haven't tested the applause. We don't need to, because anything, I mean, just any kind of confirmation, no matter how dry that confirmation is, produces these huge effects like this.
One of the things that we did was, right afterward, ask witnesses some questions-- and this is now looking at confirming and disconfirming; we couldn't do this for the control condition. First of all, we asked them, what were you told? Right?
And they're very good, immediately, at reporting, because we're doing this right after they were told and right after they answer these sets of questions. They're very good at being able to say, oh, I was told that, you know, that I identified the actual suspect, or I was told that, oh, that I identified the wrong person, that it was this other person. So they're pretty good at that.
And then we asked them, did that information, that feedback that you got, influence you? Now, most people say no, it didn't influence me. That's the majority response. In this study, I think about 65% said no, it didn't influence me at all, and about 35% said it did, on the certainty question.
On the question about view, which also had big effects, virtually everybody says no. They only tend to answer yes, when they do at all, on the certainty question.
So we divided them into two groups, those who said, yeah, it influenced me, those who said it did not influence me. And we just simply looked at the question, well, did it influence them? Right. So we're just comparing the confirming to the disconfirming, as you can see. And in fact, that interaction is not significant.
It turns out people who say it didn't influence them were just as influenced as people who say it did influence them, suggesting that they don't really have an ability to know whether it influenced them or not.
Back to the Cotton case and Jennifer Thompson-Cannino. She says something very interesting today. I've known her now for about 15 years, and we're good friends. But there's an interesting thing that she says: "After I picked it out"-- and by "it" she means Ronald Cotton's photo-- "they looked at me"-- i.e., Detective Gauldin, who was the detective in that case, plus another detective who was with him-- "and said, 'We thought this might be the one.'"
Now, there's the feedback. Nothing unusual about that. I mean, that's routine. I mean, that happens like-- except in these reformed jurisdictions-- like 99% of the time. They're just going to get feedback. Right? Nothing unusual about that.
What I find intriguing is she says "For me, that was a huge amount of relief." Really? Relief? Relief from what?
Well, it seems obvious to me that it's relief from uncertainty. And yet, she's absolutely positive at trial. Right? So she remembers having experienced relief. But the only way she could have felt relief is if she wasn't positive. That feedback had to have influenced her, right? Otherwise, why is she feeling relief?
Now, as for the effect: there was a meta-analysis published in 2006, and there will be a new meta-analysis coming along soon. In 2006, there were 20 published experiments with 2,500 participant witnesses-- not only my studies, but studies done in other places across the country, and in Australia, South Africa, Great Britain, New Zealand. Now, slightly less than three years later, that number has more than doubled. So it's a big literature, in some ways.
If we look at measures of effect size in terms of d-- if you know what d is, it's kind of a standard: the number of standard deviations between two means. And if you know the literature on effect sizes, you know that the standard significant effect in psychology journals has a d of about 0.23. And 0.2 is considered a small effect, 0.5 a medium effect, and 0.8 a large effect. Not very many things in psychology have large effects.
Well, for certainty, the average d across that meta-analysis is 0.8. And view, attention, basis, ease, these other questions-- what they say about the speed with which they made their identifications-- show the effect as well.
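For reference, a minimal sketch of how an effect size d of this kind is computed: Cohen's d, the difference between two group means in pooled-standard-deviation units. The ratings below are made up purely to illustrate the computation.

```python
import statistics

def cohens_d(group1: list, group2: list) -> float:
    """Cohen's d: difference between two means in pooled-SD units."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Made-up certainty ratings (1-7 scale), purely illustrative:
confirming_feedback = [5, 6, 4, 7, 5, 6]
no_feedback         = [4, 6, 3, 6, 4, 5]
print(f"d = {cohens_d(confirming_feedback, no_feedback):.2f}")
# -> d = 0.74 here; 0.8 is the conventional benchmark for a "large" effect
```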
David Dunning's done some great work on eyewitness identification and the pop-out effect. What we find is that if you give them confirming feedback, it increases the percentage of witnesses who say that it popped out. OK? So it's really distorting memory, their willingness to give an ID, willingness to testify.
One of the things that's really interesting is that Dan Wright and his colleague Skagerberg in the UK-- although Dan is now at an American university-- got the cooperation of law enforcement in the UK. Whenever they would do a lineup-- these were live lineups-- they were not allowed to ask the certainty question. Their equivalent of prosecutors over there said, no way are you asking witnesses about certainty; you save that for us. We'll ask that question-- so that they can ask it in court as, you're positive, right?
But what happens in this study is an ID is made by real witnesses to serious crimes, many of whom were victims. And then they're either asked questions about how good of a view they had, how long a time they took to make the ID, and how well they could make out details of the face, or they're asked questions about how much attention they paid during the witnessing of the crime and how easy it was for them to make an ID.
Event-- I can't remember what that question is. That's a short version of the term.
And then they get debriefed by the detective in the case and just simply told, oh, you identified the suspect, or, you identified a filler. OK? And like we usually see in archival data, witnesses are identifying fillers from lineups about 25% of the time, which in itself is an interesting phenomenon.
Some witnesses answered one set of those questions prior to getting debriefed, and others answered that set after, so that you can compare these answers to those answers, and these answers to those answers, to look at the effect of the feedback.
And what you find is the same kind of thing with real witnesses. On the view question, for instance, you can see that there's a difference between suspect and filler IDs even before being debriefed-- they are more confident in their suspect IDs than their filler IDs. But after being debriefed, the difference gets bigger, as you would expect. OK?
So the same kinds of effects are being observed there-- a d of about 0.38 with real witnesses on one question, and a d of about 0.54 on another. The certainty question usually shows a stronger effect, so had they been able to ask it, the effect would probably have been even stronger.
But what I want to talk about then is why does this effect occur? Our original interpretation was that eyewitnesses simply do not make online judgments of these things. They're not making any online judgments about how certain they are, how good their view was, how long they took to make that identification decision, how much attention they paid, and so on and so forth. No online judgments are made.
So instead, what happens is that when you later ask them, well, how certain were you, how good was your view, how long did it take you to make an identification, it's an inference process. You have no mental record of those things, so you make an inference based on available information. Feedback is obviously a very salient form of available information, so the feedback drives that inference process.
We've actually modified this, because we don't think we can justify that theoretical model, or perhaps even ever test it. So we backed off a little and say, well, we don't know if they make online judgments or not.
All we're saying is we don't think they have an accessible memory trace for how certain they were, how much attention they paid, how long it took them to make an identification decision and so on, so they rely on currently available cues to make the inference. And when you provide them with feedback, that is a profoundly strong cue. And so their answer is based on that, because they don't have anything else to base it on. Right? That's the theoretical idea.
Well, what evidence do we have to support that theoretical notion? Well, it lines up pretty well. It's hard to test it directly through mediational kinds of analyses. We haven't quite figured out how to do that. But what we did was we said, well, if this account is correct, what happens if we were to create an accessible trace prior to the feedback?
So we had them engage in deliberate, private thought about their view, attention, and certainty before receiving feedback. Private thought. They're not coming out and answering questions about certainty and view and so on. All we do is ask them, after they make their identification, to think about how good or bad their view was, how much attention they paid, how certain they were. They're just privately thinking about it. We give them some time to privately think about it and then give them feedback. Right?
So the idea is that if it's the inaccessibility of any kind of trace, they should now have a trace. Right? Something they could rely upon. Now they know, when asked how certain were you at the time of your identification, they have a basis for answering it, other than just relying on the feedback. And we call this a feedback-- we hypothesize this might be a feedback prophylactic.
Notice also that if it comes out the way we thought-- and it does, I'll just tell you right now-- it also rules out self-presentation. If they're just trying to make themselves look good by saying, oh yeah, I knew all along, well, they could still do that even in the private-thought conditions.
And so what happens is the witnesses witness the event, they make an identification, they receive confirming feedback or no feedback, and then we measure their self-reports of retrospective certainty, view, attention, and so on. But in some conditions, we stick in this manipulation where they think privately about their certainty, view, and attention prior to getting that feedback. We also had a condition in which they thought privately about certainty, view, and attention after getting the feedback.
And basically what happens here is there's the standard effect, the post-identification feedback effect. With prior thought thrown in, looking at no feedback versus feedback, what happens is, first of all, you can see that thought alone-- well, I have to show you both-- thought alone actually produces some inflation of certainty. OK?
But notice that it also basically tends to eliminate their reliance on the feedback to infer their certainty. That effect tends to go away. And there have been other studies since then showing that, even though it looks like there's a little bit left, it's pretty much gone. In other words, they are able to retrieve those prior thoughts now. In the case of post-feedback thought, again, thought itself tends to promote inflated confidence.
Now, another reason why we think that people don't have an accessible memory trace-- at least it's consistent with it-- is Neil Brewer and Carolyn Semmler and I did this study where we asked people-- we did a standard kind of experiment of the type we've already talked about, but then these witnesses are asked to indicate how certain are you right now that you made a correct identification? And they're also asked, how certain were you at the time of your identification, before you got this feedback?
And we also manipulate the order. So other witnesses are asked, how certain were you before you got the feedback, and then, how certain are you right now? So everybody understands this distinction, right?
And we also looked, in this case, at confirming versus control conditions for false rejections, correct IDs, mistaken IDs, and so on. We looked at the whole gamut. And it turns out that they give you the same answers. They give you the same answers. They can't distinguish between their current certainty and their prior certainty.
Now, if Tom Gilovich were here, he'd probably say, that's anchoring, and maybe it is. I think it's evidence that they don't have a prior trace, or at least not an accessible one. All they know is how they feel right now, and they assume that that's how they felt before. These are all basically the same-- you can see maybe just a little bit of difference, but not in any kind of systematic way. The correlation between current and retrospective certainty: 0.83.
Finally-- I mean, it's not finally, because I actually have a whole bunch of things to show, but I see I'm going to run out of time-- we've also looked at things like delay. What happens if you give them feedback now, but you delay measuring it for 48 hours? You still get the effect.
What happens if you delay the feedback for 48 hours? So, you know, they participate, you bring them back two days later and say, oh, you identified the actual suspect, or give them no information. Same thing. It's very robust across that.
It may be the case that the magnitude of these effects is a little bit bigger with delay-- that is, delay tends to enhance the effect. Some subsequent research by other labs has suggested that, in fact, delay does seem to enhance the effect. In our case, we didn't find it to be statistically significant, but the trend was certainly there.
Now, because I know the time, and I know that other people have other commitments too, I'm going to skip over some other things that we've done. But David, I should tell you about this stuff too-- maybe you've seen it-- because we actually borrowed from your work on counterfactual thinking and addition and subtraction. We asked people to engage in counterfactual thinking-- what would have happened if you hadn't been given this feedback?-- to see how they did.
But I have to skip over that. I'm going to get straight to conclusions, because I see my time running out.
The effect here is robust. And in fact, it's so robust, I think it's one of the reasons why so many researchers have picked up and said, hey, I want to do one of those studies, because it doesn't take large sample sizes to find significant effects. Right? Anytime you have big effects, you can use smaller samples and, boom, there it is, and it's statistically reliable.
It's highly replicable, huge effects, and it occurs across numerous important measures. I think one of the most interesting things about this is we're not just driving their certainty, we're driving what they have to say about how good their view was, how much attention they paid, all these kinds of retrospective self-reports.
It applies to real witnesses to serious crimes. It's not due to self-presentation-- because otherwise, why would witnesses attribute good performance to a good view? I mean, self-presentation accounts would say that you wouldn't find an effect on view, or, if anything, an effect in the opposite direction: if you say, hey, you were right, they'd say, oh yeah, in spite of the fact that I had a bad view.
But that's not what they do. You tell them they're right-- you give them confirming feedback-- and they say, yeah, and I had a really good view. So they're almost discounting their own performance. There's no pattern there suggesting self-presentation. And especially, why would private prior thought, which only they know about, moderate the effect the way it does? So it's not self-presentation.
It appears to be an inference process that results from inaccessibility of the original traces, because we also see that current certainty and retrospective certainty are nearly identical, suggesting they don't have a prior trace to be able to report on.
And they can't report on-- they can't sort themselves between those who were more influenced and those who were less influenced. And then there's the private thought effect.
Practical conclusions: this directly contradicts the standard practice-- what just a few years ago was 100% of jurisdictions in the United States, and is still 80%, including, as far as I know, Ithaca-- of letting detectives administer their own lineups.
You may or may not know that one of my biggest missions is to get detectives out of ever being able to administer their own lineups. It should always be done using a double-blind procedure.
In other words, the person who administers that lineup should be a neutral party, someone who does not know which person is the suspect and which ones are fillers-- someone who just simply tests the witness and makes a record of all the witness's responses. So this supports the argument for the double-blind lineup, and for obtaining a clear record of the witness's confidence at the time of the identification with a double-blind lineup administrator.
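A minimal sketch of what the double-blind procedure amounts to in practice; all names here are hypothetical, not an actual police protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LineupRecord:
    choice: Optional[int]       # lineup position picked, or None for no choice
    certainty_statement: str    # certainty in the witness's own words, pre-feedback

def administer_double_blind(witness_choice: Optional[int],
                            witness_certainty: str) -> LineupRecord:
    # The administrator never sees suspect/filler labels, so nothing they
    # say or do can leak which photo is the suspect, and no "Good, that's
    # the guy" is possible. They simply record the response verbatim.
    return LineupRecord(choice=witness_choice,
                        certainty_statement=witness_certainty)

# Example: a hesitant pick of position 3, recorded before any feedback.
print(administer_double_blind(3, "I'm not very sure."))
```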
That record is then discoverable by everybody. The defense can discover it. So had this been done-- just to take the Thompson case, which is considered a clean case by most analysts compared to most of the other DNA exoneration cases in terms of the behavior of the lineup administrator-- I don't think Ronald Cotton would have been convicted, because I think that Jennifer Thompson was uncertain.
Detective Gauldin-- who's a great guy; he meant no harm-- reacted to her choice. He said, that's the guy we thought it was. A double-blind administrator couldn't do that, because they'd have to think, maybe I'd be reinforcing her for picking a filler, so I just have to ask her: how certain are you, Jennifer? And I think she would have said, I'm not very sure.
That's a matter of record. It goes into the record. It's discoverable by the defense. And with that on the record, I think the prosecutor wouldn't have brought the case forward. Based on this line of work, double-blind is now required in New Jersey, North Carolina, Minneapolis, Boston, Denver, Dallas.
Dallas is the most recent convert here, which has surprised many people. Many people thought we would never get a Texas police department. Actually, a number of other jurisdictions have come along as well. In fact, all told, there are nine police departments in Texas that have now adopted double-blind, based on this, as well as other reforms, like the instructions prior to viewing a lineup, and so on.
So that's where I will end, right there. OK? Thank you very much.
[APPLAUSE]
SPEAKER 1: This has been a presentation by Human Development Outreach and Extension at Cornell University.
Gary L. Wells, professor of psychology at Iowa State University, discusses the phenomenon of mistaken eyewitness identification and the psychology of how these errors happen.