SPEAKER: What I'm going to present to you here today is just a tiny slice of what I've been doing for the last 16 years, I think much of which you will recognize. I have been paying attention in this room. I've borrowed some things from some talks here.
I say "our." I've been collaborating with a number of judges and a few other law professors on this. And what we've been doing is going around to judicial education conferences and trying to extract from sitting trial judges mostly-- a few appellate judges, but sitting trial judges-- these same kinds of phenomena that we get with undergraduates, MTurk samples and the like.
So the payoff here is just very applied. We're not really breaking new ground with any research. But we're trying to take phenomena like anchoring, framing, and the like, just sort of classics and a few new things, and then trying to see if trial judges, in settings that are relevant to their work, really show the same kinds of effects.
The real payoff here is-- this slide is actually data. Actually, there's over 5,000 trial judges in our sample. We have over 90% of the current sitting federal trial judges in our sample. 20% of the judges in America have been in our research. And we're kind of running out of places to look.
We've done it with Canadian judges. We have judges for the Netherlands and the like. I don't study lawyers quite so much. They're not nearly as much fun to study. They want to bill their time while they're sitting in a room. Judges, much better.
What we give them is, of course, hypothetical questions, as is often the case in this kind of research. And you'll see examples of that. One thing we have given them that's not quite legal, but in fact, I think it's interesting, we've given over 2,400 trial judges the cognitive reflection test to get a sense of whether they are more intuitive type thinkers or deliberative thinkers, recognizing that the CRT is a bit crude in its own way.
But I will tell you they love this. Judges love the CRT. I don't entirely know why, but they're very delighted to do this.
You know the CRT. I'll give you the first question here. This is a bat and a ball together cost $1.10. The bat costs $1.00 more than the ball. How much does the ball cost?
Lots of people say $0.10. This slide is a little out of whack here. That's the intuitive answer. The real answer is $0.05.
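For readers of the transcript, the algebra behind that answer is a one-liner; here is a minimal sketch in Python (the variable names are mine, not the speaker's):

```python
# Bat-and-ball: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting: ball + (ball + 1.00) = 1.10, so 2 * ball = 0.10.
total = 1.10       # combined price in dollars
difference = 1.00  # the bat costs this much more than the ball
ball = (total - difference) / 2
bat = ball + difference
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```

The intuitive $0.10 answer fails the check: a $0.10 ball plus a $1.10 bat would total $1.20, not $1.10.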
When I give this to judges, I say at this point, I know you don't believe me, because invariably, I will be cornered by a group of them afterwards that are sure it's $0.10, even after I've given them a lengthy explanation for why. The most recent is, I'm a tax court judge, and I know it's $0.10! OK. You know the other ones.
How well do judges do on this has been a question for us. This is a group of Florida judges who are fairly representative. We thought maybe judges would be more deliberative thinkers.
They've got to rely on precedent. They've got to rely on evidence. They've got to rely on a lot of rules. Maybe by proclivity, by training over time, they get to be more deliberative thinkers.
No, they don't, is the answer. They get most of the questions wrong. They average about 1 and 1/4 out of 3 right. I'll give you a comparison as to how good or bad that is.
They do choose the intuitive, wrong answers. They love the $0.10. 97% of the judges who get that first bat and ball question wrong say $0.10. A few other wackos say other things, but a lot of them say $0.10.
That goes down. They get a sense that something is amiss here, and they cast about for different answers on the other ones. But mostly they choose the intuitive answers.
And indeed, in this group of judges we asked, what percentage of judges in this room will get this question right? Those who got the question wrong said, on average, 90% of my colleagues will get this right. Those who got it right were more moderate. They were still overestimating, but they understood that it was a little bit harder than it looked.
So judges, not so much deliberative thinkers. MIT students are, right? They're less fun at parties, but quite deliberative. Carnegie Mellon, it goes down a little. As you add in social scientists and art history majors and the like, you get more intuitive thinking.
This is kind of where the judges live. They like this, by the way, partly because-- almost as good as Harvard, right? They're OK with that.
And they tend to fit in that range. That's kind of how they do. They do better than people with nothing better to do than troll the internet looking for research to participate in, except for appellate judges.
We've repeatedly found this. When we have some appellate judges in the sample and trial judges, the appellate judges don't do as well on the CRT. One appellate judge, who was quite irate at getting one of them wrong, who is pretty well known and can't be identified, unfortunately, comes up afterwards and says, well, Professor-- which is the equivalent of "with all due respect, Your Honor" for me-- Professor, you didn't give us arguments either way.
I was baffled by that. There's no argument that it's $0.10. I mean, I don't know what-- but I think there's something to it. The appellate judges, in fact, always see arguments both ways. So this really takes them out of their element in a way. The trial judges kind of have to shoot from the hip a lot more, so they get a little bit better I think at stopping and thinking.
Oh, I will say a few things more about this. In the paper I'm working on now with the CRT with all those judges, there is a pattern to which judges do better and worse. The worst group of judges I've encountered in the 17 different states where I've collected this is New York. They do far worse than-- I have to take this joke out. They get mad.
They get about 0.7 out of 3 right on average. Most of them get them all wrong, in fact. New York, what's the pattern? Elected judges. Judges who run in partisan elections do very poorly on the CRT relative to appointed judges.
I've had that for years in the data in a way, but only now am I really confident that it's holding up. We've got 17 different states. And I just had data from Indiana, where half are elected and half are appointed. It's a weird state. And indeed, the appointed judges do better than the elected ones. They are just somewhat different in how they think about this sort of thing.
But what about legal settings? That's mostly what we've done. I've given over 100 different scenarios to these thousands of judges. We've done a lot of anchoring, because it works really well.
Here's a scenario we've given to judges. We picture them, they're at the education conference. They read a scenario. It involves, in this case, a civil rights violation. It's a story about a defendant who is a public sector employer, which brings him into a jurisdiction of a relevant statute.
The plaintiff is this woman who's a secretary. She's a terrific secretary, it says. But she gets a new supervisor who begins ridiculing her ancestry. She's Mexican-American. And when she complains, he fires her.
You're not lawyers, but this is an excellent civil rights case. Of course she's going to win. But she doesn't have any damages. Because she's a good secretary, she gets a job right away at another company. So she doesn't have the lost wages that usually accompany a claim like this.
But she's entitled to damages for mental anguish. And so we describe some of the anguish, the name calling and the like. It's a lot of detail in this particular scenario, not as much as a real trial, of course, but enough that the judges could assign a damage award for her.
What we vary is for half the judges there's some straight testimony at the end. When she's on the stand, she says she saw on a court television show where a plaintiff like her-- or it doesn't say like her-- received a compensatory damage award for mental anguish. Half the judges read that line. The other half, same line, but we put a number in there.
So half the judges hear, hey, Judge Judy awarded someone $415,300. And what does it do to them? Well, without the anchor, the median award is $6,500. With the anchor it's eight times as high. The judges get quite excited about that effect, as you did, and as I did too.
There's almost no overlap between the distributions of awards here. It's very potent. We gave this to a group of appellate judges, asking whether that testimony with the number was admissible. They uniformly said it's not admissible testimony.
We asked also, would it be harmless error to admit it? 87% said it would be harmless error to admit this testimony. So it's something they don't quite appreciate.
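To make the size of that anchoring effect concrete: the talk reports the unanchored median and an eightfold increase, which implies an anchored median of roughly $52,000. A trivial back-of-the-envelope check (the $52,000 figure is inferred from the stated multiplier, not reported directly in the talk):

```python
# Anchoring effect on the mental-anguish award, per the figures in the talk.
median_without_anchor = 6_500  # dollars; testimony mentions no number
multiplier = 8                 # "with the anchor it's eight times as high"
median_with_anchor = median_without_anchor * multiplier
print(median_with_anchor)  # 52000
```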
We've played a lot with anchors. I got this one from something Tom did a few years ago. This one we did fairly recently. We asked municipal court judges in Ohio-- do a lot of traffic court, a lot of municipal fines and the like-- to assign a fine to a nightclub that had been violating noise ordinances. It was zoned properly, but it's operating too late.
And the statute says, look, you've got to impose a fine that reflects the degree of disruption that this entity created in the neighborhood and that would deter further offenses. And what we vary is the name of the club. For half of them it's Club 58, after the street address. For the other half, it's Club 11,866.
This is Ohio. These guys were modest. The fine doubled there. In Texas it was triple. It was $500 and $1,500. In Canada, where apparently noise ordinances are worse, it was $2,000 and $3,500.
So in fact, I love, judges put $11,866-- there's always someone who does that-- as the fine. And I never quite know whether they're playing with me or not, but it does stick in their heads. So anchoring works. Damage awards, criminal sentences, you name it. We get anchoring effects on judges.
Another thing we've studied-- I'll just go real quickly here. We've really tried to push judges around on emotional influences on legal rulings. So this is a hypothetical we gave judges in New York and in Canada.
This time we made up a statute-- we usually don't do that, but we had to do it for this-- involving medical marijuana. And we said, look, there's this defendant who is being prosecuted for marijuana possession. It's two marijuana cigarettes on the seat next to him. He claims it's medicinal. And there's a statute, new statute, we said, that allows you to use the marijuana and be immune from prosecution if a physician has stated in an affidavit that you have a medical use for it.
The problem is this defendant doesn't have such an affidavit, but he gets one after getting arrested. It's a little unclear how you interpret the statute. You might say "has stated" means you already have to have stated it. Past tense means past tense. Or it might mean "at some point has stated." The statute's ambiguous, and there are lots of theories about how you construe a statute in this context, none of which depend upon who the defendant is.
Defendant moves to dismiss this case, arguing the statute covers his post-arrest affidavit. And he's either a 19-year-old suffering from seizures or is a 55-year-old suffering from bone cancer. Well, guess what? The 19-year-old, 50-50. The 55-year-old, he's going home, right?
And you might say, well, they're more sympathetic to the 55-year-old. Of course they are. That was the point of manipulating that. But it shouldn't affect how they read that statute.
There are lots of ways to go easy on the 55-year-old. They can sentence him to time served or to nothing. But instead, they rule on the statute differently, which suggests it matters a lot who the litigant is when the court is interpreting a statute one way or another.
I should just stop, because the rule is 10 minutes. But we've done a number of things like that where we push judges and motions around, did a lot of CRT stuff. I will say we're trying to correlate the CRT now with whether they rely more on emotions than not.
They are better at a variety of cognitive tasks: they avoid the conjunction fallacy more, and they pay more attention to base rates if they're better at the CRT. So it sort of matters in that too. And that's where we're headed.
MODERATOR: Thank you, Jeff.
Jeffrey Rachlinski, Henry Allen Mark Professor of Law at Cornell Law School, presents current research findings regarding behavioral economics and human decision-making Sept. 8, 2015 as part of the Behavioral Decision Research Workshop Showcase. Sponsored by the Department of Human Development and the Center for Behavioral Economics and Decision Research at Cornell University.