SPEAKER 1: This is a production of Cornell University.
YASAMIN MILLER: Welcome, everyone, to Cornell Research's third annual speaker series. And thank you so much for coming. My name's Yasamin Miller. I'm the director of the Survey Research Institute.
We're very pleased to be sponsoring this event on an annual basis. Our previous two speakers were Jon Krosnick from Stanford University, a prominent expert on survey methodology, and, last year, Larry Jacobs from the University of Minnesota, who talked about presidential misuse of opinion polls.
But before we get to the highlight of today's talk, give me just a few minutes to plug the Survey Research Institute, if you would. For those of you who are not familiar with our organization, let me just say, we have a really outstanding staff, some of whom are here today, who over the course of 13 years have juggled over 600 projects with over 350 clients here at Cornell and worldwide. Just some of our highlights-- in 2008, we launched a brand-new initiative, sponsored by the Office of the Vice Provost for Social Sciences, David Harris, called the Cornell National Social Survey. Vice Provost Harris, in an effort to enhance the social sciences here at Cornell, offered researchers at Cornell this opportunity to collect survey data on a national sample of 1,000 adults.
And the long-term goal is to offer the survey on an annual basis to the researchers here. The data collection ended in December of this year. And it will soon be available to the Cornell community.
We're in our third year of a monthly business sentiment survey for the division of the budget in New York State. We conduct 500 interviews every month with businesses across the state. That data is given to the chief econometrician of New York State to help him in policymaking.
For those of you who don't know, we're currently in the midst of conducting 800 interviews for our seventh annual Empire State Poll. It's the first omnibus of its kind, surveying New York State residents on a wide range of economic, social, and political topics. The Empire State Poll has taken on new import, as we've transformed it into a vehicle to promote social science research at Cornell under the guidance of the Vice Provost, David Harris, and SRI's advisory committee, some of whom are in this room today. And one will be speaking.
And an omnibus survey means that researchers can actually submit questions to participate in this survey, so they don't have to conduct a whole survey on their own; it's more cost effective. To participate, you have to submit your questions in the fall. And then we launch the data collection every February and March.
So having said that, there is food in the back. Feel free to get some, maybe between sessions or after. But now, let's get to the reason we're all here: how do we know if people are lying on a survey? And if they are, what do you do about it?
Well, to help us understand this conundrum, we've gathered a very distinguished panel of speakers who will tackle this issue from varying perspectives. I would like to ask everyone if we could save our questions till the end. And I'll just introduce our panelists one at a time.
Our first panelist holds a lifetime endowed chair in developmental psychology at Cornell. He's an expert in the development of intelligence and memory. He has studied the accuracy of children's courtroom testimony as it applies to allegations of physical abuse and neglect.
He received his BA from the University of Delaware, his MA from the University of Pennsylvania, and his PhD from the University of Exeter in England. He's the author of over 350 articles, books, and chapters. And I did a Social Science Citation Index search, and he's got over 3,000 citations of his work.
STEPHEN CECI: 5,000, but who's counting?
YASAMIN MILLER: I stand corrected. I'm sorry. He's received numerous honors and scientific awards, including a Senior Fulbright-Hays Fellowship and an NIH Research Career Scientists Award, just to name a few. We don't have time to go into everyone's backgrounds.
He's appeared in all the major national news media, including ABC's 20/20, NBC's Dateline, and CBC's The Fifth Estate. He's currently a member of seven editorial boards and a fellow of seven different divisions of the American Psychological Association, as well as a fellow of the American Association of Applied and Preventive Psychology. His 1996 book on intelligence was awarded the William James Award for Excellence in Psychology. Will you please join me in welcoming our first panelist, Professor Stephen Ceci.
STEPHEN CECI: Thank you, Yasamin.
Thank you. I was kidding about the 5,000. I wish I had 5,000 cites. Before I start, I have a disclosure to make. And that is I'm not a survey methodologist. Some of my doctoral students are in the audience. And they're probably wondering, what is he doing here?
I'm, as Yasamin said, a developmental psychologist. But occasionally over the years, my colleagues, my doctoral students, and I have conducted surveys as part of experiments on social phenomena. And a number of times, the results baffled us. It seemed as though the respondents were behaving inconsistently. So I just want to share with you two examples of what I mean by this and how we tried to unravel what was going on.
So the two situations are ones where the questions engender some kind of conscious protective response, especially in a moral or ethical or emotion-laden domain where there's a threat to someone's identity or self-esteem. And I'll give you an example of a study we published in 2006 that deals with that. And then if there's time, I'll talk about something that's probably pre-attentive, not to say totally unconscious. But it isn't the kind of deliberate, effortful, conscious process that you see in the 2006 study.
And this is a study that I did 25 years ago. We published it in the Public Opinion Quarterly. And it has to do with the way context effects in the questionnaire prime these pre-attentive processes. Again, if I have time, we'll go into there.
Back in the mid '90s, my colleagues and I were doing a study in which we embedded a survey. We were using Cornell students in a large psychology pool. And we asked the students for permission to look up their transcripts-- their grades, their SATs, and so on. And they gave us permission to do that.
But we also asked them to fill in, among the socio-demographic information we were collecting, their SATs and grade point average and so on. And the format was just like this: what's your highest SAT verbal, quantitative, and analytic? And they put it in. Probably I shouldn't have been surprised to discover, when I went and checked the actual transcripts, that there was some inflation-- the claimed SAT scores were about 6% higher on average than the highest scores the transcripts indicated.
A couple months later, we were doing another study. And we used a similar type of socio-demographic questionnaire. And a lot of the same kids who were in the previous study had signed up for this one. And the only difference was this time we asked them to circle the range that their SATs fell in.
So, for example, if they had a 620, they would circle the 601-625 range. And the inflation, we discovered, was a lot greater when we used this format. And there were these linear trends within it. That is to say, as your real SAT score got near the category boundary, there was a tendency to jump to the next highest one. And about 25% of the respondents did that.
So we shouldn't have been surprised, as I said, because there's a tremendous social-psychological literature that would have led us to predict that people tend to up their own status and deflate that of others. But it was a rather dramatic example of these kind of format effects. So with that as the backdrop, I want to share with you the results of a study that I did with Wendy Williams here at Cornell and Mueller-Johnson at Cambridge University in 2006.
And the study surveyed-- there were actually three different waves, but the main wave surveyed 2,700 faculty at 50 top colleges and universities in America. At each of these 50 colleges and universities, we surveyed 18 different disciplines-- 6 in the social sciences, 6 in the humanities and arts, 6 in the physical and natural sciences. And within each of those, at each rank-- assistant, associate, and full professor-- we randomly selected two.
In the first wave, of the 2,700 we sent out, 1,047 gave us completed data. I think that's around a 37% or so response rate.
And then we surveyed another random sample from the same pool that we paid. We gave them a $35 debit card, and we got a 90% participation rate. And their responses were virtually identical-- the confidence intervals fell almost perfectly on top of the ones from the 37% response rate. And we did some statistical imputation to further determine that the results were generalizable within this sampling strategy.
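The response-rate arithmetic quoted here is easy to check; the following is a minimal sketch, not part of the original talk, using only the figures the speaker gives:

```python
# Response-rate arithmetic for the two samples described above.
# All figures are the ones quoted in the talk; this is illustrative only.

wave_one_sent = 2700        # surveys sent out in the main wave
wave_one_completed = 1047   # completed responses received

response_rate = wave_one_completed / wave_one_sent
print(f"{response_rate:.1%}")   # -> 38.8% (the speaker estimates "around 37%")

# The paid follow-up sample ($35 debit card) reached about 90% participation,
# and its responses tracked the first wave almost exactly.
paid_participation = 0.90
print(f"{paid_participation:.0%}")   # -> 90%
```

The point of the comparison is methodological: because the high-participation paid sample produced nearly identical estimates, the lower-rate first wave was judged generalizable despite its non-response.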
So there were three formats. In the main format, we said to the people, here are some vignettes of things that might happen in your department. And what do you think the typical colleague would do if this happened when he or she was in the department?
And there were different versions. Some of the versions said a colleague who is an assistant professor, a colleague who is an associate professor, a colleague who is a full professor. And of the people getting each of these, a third were assistant professors, a third were associates, and so on. So that's one format: what would the typical colleague in your department do if they confronted this dilemma?
Another format-- this was between subjects-- asked, what would you do if this happened? And then finally, instead of a free report, which of the following options would you choose if this happened when you were in the department? So for example, one of the questions says, you've uncovered credible evidence that a senior colleague in your department has been having a sexual relationship with an undergraduate woman in his class. What would you do?
Other people would get a similar one, except they'd say, which of the following options would you choose? And I'll show you the options in a minute. And then finally, others would get it like, Assistant Professor X in your department's uncovered credible evidence of a senior colleague having this affair with a student. What would the typical assistant professor in your department do? So that's the difference in the formatting.
So you can see, the people who got the options-- and this was the large sample of 1,143-- the options ranged. I'm showing you just four. There were seven to nine. But some of them were fillers.
But the options ranged from very strong things, like, I would confront the person. I'd go to his office and say, what the hell are you doing? My advisee was in my office crying, you know, and blah, blah, blah, blah. To very weak options like, I would ignore it. I would keep quiet. I'd put a pamphlet in his mailbox explaining the university's policy on sexual harassment.
Another one: Professor Y heard a senior colleague boast that she'd relocated a $700 espresso maker, purchased on a federal grant for office use, to her home. What would the typical professor-- assistant, associate, full-- in your department do? And finally, Professor G discovered that a senior colleague in his or her department has published falsified data. What would the typical assistant professor in your department do?
And again, the options ranged from ignoring it or keeping quiet to challenging the person or filing a formal complaint. And just to cartoon this, summed over these three types of formats, you see something that I think social psychologists would have expected: when you ask people what they would do, they describe themselves more bravely than when you ask them what the typical person in their department would do, even a typical person at their same rank in their department. And I'll show you some data on that in a minute.
Intermediate, but closer to the "typical colleague" ratings, is what you get when you give them options. People are still braver than they think their colleagues are, but they're not quite as bold and brazen as when you ask for a free response. And this just shows you that the data looked pretty similar across ranks, although the disparity between how people rated themselves and how they rated their colleagues was greater among assistant professors than among full professors. That's all these vertical gradients indicate there.
And then finally, there were a number of cases where we had data from two people in the same department at the same rank. So they'd be both assistant professors-- that's the bottom bars-- or full professors at the top bars. And this is interesting because in here, you can see how they think their colleague would behave, who's the only other person at their rank in their department, versus how they'd behave.
And the bottom shows you they think they're a lot braver than another assistant professor in their department. And of course, their colleagues-- the other assistant professor-- feels just the reverse. And the top shows you for full professors, it's not quite as disparate, but the effect still is there. And then I won't go into that.
So OK. The second class of responses that troubled us aren't that kind of deliberate, conscious process of downgrading someone or upgrading yourself but rather something that I described as sort of pre-attentive. And I don't know if any of you read Timothy Wilson's Strangers to Ourselves. But he summarizes a lot of the research on unconscious cognitive processing and makes the point that an awful lot of cognitive work gets done below the radar, below the mental patrol level.
So some years ago, my colleagues and I did the study I mentioned at the outset in Public Opinion Quarterly, where I think something like that was going on. And I'll show you in a minute. But before I do, I wanted to give you a real quick demonstration of how you can get these pre-attentive mechanisms.
And here is a pretty simple thing. Try it, OK? Pick a number. Don't tell anybody. Just keep it in your head.
Got it? Got it? Got it? Got it?
Got it? Got it? Got it? Got it?
Not there yet. Got it? Got it?
Got it? Got it?
How many of you got that? And I'm sure you're not aware of how you got that. But let me show you another one that's even simpler.
Got it? Got it? Got it?
Got it? Got it? Got it?
How many of you picked seven? Well, more than chance would predict-- with six numbers to choose from, chance would have been 16 and 2/3 percent. How many of you got seven again?
This is good. You're coming to a seminar on lying, and half of you are lying.
No. Seriously though, it is more than chance would predict. And this is what I think people like Tim Wilson mean by pre-attentive processing. There's stuff being primed-- in our case, by the questions we're asking-- that prepares or privileges certain responses.
So let me quickly describe this study that we did back in the mid-1980s. In the spring leading up to the 1980 election between Reagan and Carter, numerous polls described a lot of undecideds or weakly decideds. And you can see here, we chose 304 Midwest college students, and we did this in March of the election year.
And this group seemed pretty similar to national poll results in terms of their opinions on various foreign and domestic issues. So they were assigned to one of nine different conditions. But essentially, the conditions are either they're told at time one that Carter has a commanding lead in the poll. So you can see, "The results of a recent opinion survey of college-educated persons showed Carter commanding a substantial lead over Reagan." So that's a plus C, a plus Carter.
And then five days later, they're surveyed by telephone by someone who purports to be calling from some national public opinion survey. The questions are all reworded and so on. But the person is essentially asking them to give their preference: if the election were held today, who would they vote for? They're also asked to do that at time one, when the person says that Carter has a commanding lead.
So when a person calls five days later, they tell a third of the group that Carter has a commanding lead. They tell another third that Reagan has a commanding lead. And the final third they don't tell anything.
And similarly, the group that gets that Reagan has a commanding lead at time one, five days later is told either Reagan has a commanding lead or Carter does or no information. And then finally, there's no information at time one, and then later they're told either Carter or Reagan or nothing.
So that's the basic design. So there's nine different cells in it. And at each time, the Midwest college students are asked, "If the 1980 presidential election were held today between Carter and Reagan, who would you vote for?" And how strong is your preference?
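The nine-cell design just described can be enumerated explicitly; the sketch below is an illustration added for readers, with condition labels of my own, not from the study:

```python
from itertools import product

# Information conditions: told Carter has a commanding lead ("+C"),
# told Reagan has a commanding lead ("+R"), or given no information ("0").
CONDITIONS = ("+C", "+R", "0")

# Crossing the time-one information with the time-two information
# yields the nine cells of the design described in the talk.
cells = list(product(CONDITIONS, CONDITIONS))

print(len(cells))  # -> 9
for t1, t2 in cells:
    print(f"time one: {t1:>2}  time two: {t2:>2}")
```

The no-information cells (time one, time two, or both) are what later allow the oppositional effect to be isolated from simple persuasion.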
And what we found was there was a lot of switching as a function-- really quite systematic function-- of the information that we primed them with. But the switching largely occurred among people who were undecided or very weakly decided. If people felt strongly one way or another, the manipulation didn't really budge them. But among those who were weakly decided, there was a good deal of switching.
So for example, if you look at the left-most one, these are students who one day hear that Carter has a commanding lead and then five days later, a pollster calls them and tells them that, according to national opinion polls, Reagan has a lead and we're wondering if college students resemble the national voter poll that we've taken. And you can see, there's tremendous switching. And what they do is they switch away from Reagan toward Carter.
They seem to have an implicit idea that, even though I'm sort of undecided, I don't believe Reagan should have a commanding lead. So they switch. We called it jumping on the bandwagon with the underdog. You can see the same thing when they're told Reagan has a commanding lead and then five days later told Carter does: they switch toward Reagan, which is the second bar. There's a little bit of switching toward Carter but not much.
And anyway, without going into a lot of the statistical gyrations, we were able to determine-- through comparisons with the groups that got no information at time one, at time two, or at both times-- that it really is an oppositional effect. It wasn't that the information pulled you toward someone. It pushed you away from the candidate said to be leading at time two.
And finally, let me just say that none of this really should have surprised us, as I said. There's a lot of social-psychological literature here. I see one of my colleagues, Dave Dunning, in the audience. Dave and Epley have a couple of papers from around 2000 to 2004 showing that when you ask people for behavioral forecasts, they tend to inflate their own-- in terms of how much they would contribute to a charity, how much effort they would put into a project, and so on. But they're fairly accurate at forecasting how much someone else they know would contribute to a project or a charity.
And that being the case, we think going back over both of these studies, that the more accurate information when we're asking people about these ethical dilemmas is what they say their colleagues would do rather than what they claim they, themselves, would do. Thank you.
YASAMIN MILLER: Thank you. That was very interesting. So our second panelist is the director of surveys and producer for CBS News. Before that, as the manager for surveys for CBS News, she was responsible for the management and budgeting of the unit that designs and conducts the CBS News and CBS News/New York Times polls. As an aside, that's the oldest print-broadcast polling partnership in the US and one of only two news polls in the US that retain in-house research capabilities.
She's now a senior producer for CBS News election broadcasts and led the election decision team, which projects results for US national and state elections. She did her undergraduate degree at Cornell and her PhD in political science at Rutgers. She was an assistant professor at Case Western Reserve, assistant professor and director of social science research at the University of Vermont, visiting professor and Telluride faculty member here at Cornell, and professor in residence at the Annenberg School for Communication.
She is past president of WAPOR, the World Association of Public Opinion Research, and AAPOR, and previous professional standards chair and counselor-at-large for both organizations. And in 2008, she was awarded the AAPOR Lifetime Achievement Award. And I've learned that she was only the fifth woman ever to receive this award. It's very prestigious.
She has numerous other awards-- again, too numerous to mention. And as president of these organizations, she's dealt with a wide range of public opinion research issues, including non-response, misuse of election polls, poor reporting of polls, and other threats to conducting public opinion research. Please join me in welcoming Kathleen Frankovic.
KATHLEEN FRANKOVIC: Thank you. Thank you. You're going to find that I'm going to be talking about a very specific instance of lying, or assumed lying, to pollsters, and a lot of it will reflect what Professor Ceci mentioned. To do this right, I have to talk about the role of race in the election of 2008-- not just race's role but racial attitudes' role in determining how people voted for president-- before I address some of the history of how public opinion polls have tried to measure these kinds of questions. And then I'll get into the question of the Bradley effect.
What is it? Did people lie about their vote? And why did we expect there would be a Bradley effect, even when there was not a lot of confirming data? I'll show you that in a bit.
Just for review, here are white and black voters in the 2008 presidential election from the NEP exit polls. As you can see, white voters made up just about 3/4 of the voters. And they split 55% for McCain, 43% for Obama. Very typical.
A majority of whites hasn't voted for a Democrat for president since 1964. African-American voters were 13% of the total, and 95% of them voted for Obama.
And for reference, this is not necessarily unusual, although African-American turnout was exceptionally high in 2008. In 2004, 9 out of 10 African-American voters voted for Kerry, and among white voters, 41% voted for Kerry. So Barack Obama actually did better with both groups than Kerry had as the Democratic nominee in 2004.
But that's race and voting, and that's just the numbers. What do we know about racial attitudes and voting? Well, there's a new article in The Forum by Paul Sniderman and Edward Stiglitz, "Race and the Moral Character of the Modern American Experience," where they gave people a set of five items asking about belief in negative stereotypes about most African-Americans, and then a nine-item index asking about positive feelings: do you think most blacks are hardworking? Do you think most blacks are intelligent? Et cetera.
And what they found is quite interesting because anti-black prejudice did play a role in the election. It did decrease support for Obama but only among Democrats and, to a lesser extent, independents. Positive attitudes about African-Americans increased support for Obama among Democrats and independents, but there was absolutely no influence of these indices on how Republicans voted. This is the data.
Taking that five-item index of prejudice and looking at the people who scored lowest on prejudice, the people who scored highest, and the people in the middle-- this is white voters only-- you see that among Democrats, the difference in support for Barack Obama was 33 points, and among independents, 15 points. In other words, from 95% of the Democrats expressing the lowest amount of prejudice down to 62% of the Democrats expressing the highest amount of anti-black prejudice. I should note, however, that even though they expressed a high level of anti-black prejudice, 62% of them still ended up voting for Barack Obama. Clearly, they were able to separate their general racial attitudes from their support for a presidential candidate.
But look at the Republicans. It doesn't matter what you think. The difference between 11% support and 7% support is certainly not significant. And the same thing happens when we look at those measures of esteem-- positive feelings about African-Americans.
There's a difference for Democrats and independents. There is no difference-- no significant difference-- when you look at how Republicans voted, no matter what they thought about African-Americans. Party trumped-- clearly, being a Republican trumped any of your feelings about blacks.
Now, the National Election Pool exit polls also tried to measure this and asked people to tell us about themselves. And in deciding your vote for president today, was the race of the candidate the single most important factor, one of several important factors, a minor factor, or not a factor? I should say that about 8 in 10 voters said race was not a factor at all.
African-Americans were more likely than whites to cite race as important. It's not a surprise. Catholics in 1960 talked about the importance of electing a Catholic as president.
Whites who cited race as important were less likely to vote for Obama than other whites. Only 33% of those who said race was an important factor-- who gave either answer one or answer two to that question-- voted for Obama; 67% voted for John McCain. That's a difference of 10 points from whites overall, 43% of whom voted for Obama.
But the pattern that Sniderman and Stiglitz found also shows up in the exit poll: the difference in support for Barack Obama among white voters, broken down by party and by how they answered that question about how important race was to their vote, showed up only for Democrats and independents. There was basically no impact for Republican voters. So stuff was happening here. Democrats were affected by race. Independents were affected by race. And they were perfectly capable of telling us-- some of them, at least-- that this was something that mattered to them.
So clearly, many characteristics matter in how people vote. So do racial attitudes. They certainly mattered for some voters in 2008. Voters expressed their feelings fairly accurately, we think.
But we've worried for years about truthfulness in predicting the vote for an African-American candidate. And polls are not academic surveys. They're done for news purposes. They're done often more simply, more straightforwardly than an academic study might be. So it's been a struggle to try to figure out how you measure all of these questions.
And historically, the Gallup organization started asking a question, if your party nominated a generally well-qualified man for president, would you vote for him if he happened to be a Negro? Now, I use that language because that was the language. They started asking this question in 1958.
I point out that it's your party. It's well-qualified, and it's a man. You'll see why in a minute. Starting in 1958, 37% of Americans said they would, under these circumstances, vote for an African-American. It increased in the 1960s somewhat.
Gallup stopped asking the question, but the General Social Survey did. If your party nominated a black for president, would you vote for him if he were qualified for the job? And as you see, by the time we get to 1992, 90% of Americans say they would do this if he were qualified for the job.
I show these next questions because they predate 1958. They go back to the 1930s. Nobody was asking about blacks running for president. Nobody. But they were asking about women.
This is a Gallup poll in 1937. Would you vote for a woman for president if she were qualified in every other respect? I underlined the word "qualified" because three years later, another organization asked simply: would you vote for a woman?
And you see that the word "qualified" probably helped 13% of Americans say, well, yeah, OK. And this kind of pattern continued. It continued through the 1960s.
This is a question that Gallup asked of women only. How would you feel about having a woman as president of the United States? And women, 63% of women, said that they would disapprove. And my favorite question, there won't be a woman president of the United States for a long time, and that's probably just as well. And 67% of women approved of that.
And these were the kinds of questions being asked. They were able to tell you some things about the public. Then something changed in the 1970s. You saw the number in the Gallup poll rise in the 1970s for people saying they'd be willing to vote for a qualified black for president.
This is a Time magazine poll that was taken in 1979. And it asked, "Considering the state of the country and the world today, do you think it would be good for the country or bad for the country to have as president someone who was"-- and I'll start with these few-- a business executive, 60% said yes; a priest, 14%; a college professor, only 40% thought that would be a good thing for the country; a Jew, 46%; an atheist, 19%. All of these, except for the business executive, not very particularly positive answers, until you get to woman and black, where they do as well-- just about as well as that business executive does and much better than the college professor, a whole lot better than an atheist or a priest.
You wonder if something happened in the 1970s-- whether something actually changed attitudes, or whether something affected the way people answered poll questions, whether all of a sudden it became socially unacceptable to say that you wouldn't do certain kinds of things.
And this is important because it leads to the whole discussion this year about whether people were lying to pollsters. This reflects what you just saw in Professor Ceci's presentation. Three questions-- these are from a January survey that we conducted.
Would you personally vote for a candidate who is black for president? Do you think most people you know would vote for a black candidate for president? Do you think America is ready to elect a black president?
And similar gaps to the ones we saw in the last presentation happened here. 90% of registered voters said they would, but only 65% thought most people they knew would. And 54% said America was ready to do this.
This was done in January, before it was obvious that Barack Obama would be the candidate. But it's reflective of what we've seen all the way through the campaign, and what we've seen in the past: you judge yourself more positively than you judge other people. What's the right answer? Does the answer about "most people you know" really tell you about yourself? OK.
So all of these things lead up to the fascination this year in 2008 with the Bradley effect. There's a little confusion in the discussion of it between sorting out the impact of racial feelings and the impact of lying to pollsters and giving us socially desirable answers when they might not really be true. And something else was happening this year. There was the instinctive attribution of racial motives to many who disagree with us. And I'll show you some data about that in a minute.
Where did the Bradley effect come from? Why has it entered the political discussion? Why are we even talking about it?
If this works-- hopefully-- I'm going to show you a video from CBSNews.com. There it is.
-A lot has been made this year about people potentially lying to pollsters over issues like race. I was surprised that the almost knee-jerk response to this was to reach back 25 years for an explanation. So let's talk about that 1982 gubernatorial election in California.
A pre-election poll conducted by the Field Poll had Tom Bradley, then the mayor of Los Angeles, leading George Deukmejian, the Republican candidate for governor by a good [INAUDIBLE] margin. And on election day, Deukmejian beat Bradley by a point.
-Ladies and gentlemen, George Deukmejian.
-And thus was born the Bradley effect. A black candidate, Tom Bradley, was ahead in the polls but lost on election day. What we don't know is whether there was last-minute change among voters, whether there was some campaign event that might've changed things, or whether in fact the Field Poll was an accurate representation of what voters were going to do then.
There may have been some confirmation of this in 1989, when Doug Wilder, who ran for governor of Virginia, looked to have a healthy lead in some pre-election polls but won narrowly. And David Dinkins, who ran for mayor of New York City, looked to have a healthy lead and won narrowly.
-Thank you very much.
-I intend to be the mayor of all the people of New York.
-The thing is that there's been no indication of this pattern in races since the early 1990s. In 2006, Deval Patrick's pre-election poll margin in Massachusetts looked pretty much like it did on election day. Harold Ford was running behind before the election in Tennessee, lost by about the margin he was running behind.
So we've got no current evidence. Now, a lot could change between now and election day. There could be campaign events. People could be making up their minds late. But I don't think we're seeing the, quote, unquote, "Bradley effect."
Here are some differences. In the 1980s, we had issues like crime and welfare that were very racialized issues. We were less than 20 years from the passage of the Voting Rights Act, the Civil Rights Act. We were closer to the urban riots of 1968. So these issues were very much in people's minds.
These things are now 40 years in the past. And lots of voters don't even remember them. They weren't even alive. Barack Obama was a child during that period. So I think that you have to recognize that times have changed.
I did that in October for CBSNews.com. Part of that analysis was helped along by an article by Daniel Hopkins, up at Harvard, who actually did the analysis and looked at all the pre-election polls from the 1980s through the 2006 election and put it out on the Harvard website. And it was available to people.
It was the case that there was no evidence. There's been no evidence of it since the early 1990s. But we clearly thought it was there. And so we spent a lot of effort this year trying to measure it.
And one of the ways we felt we could measure it was by looking to see if there was a race of interviewer effect. In other words, when people were interviewed by a black interviewer, did they say different things? When white people were interviewed by a black interviewer, did they say different things than when they were interviewed by a white interviewer?
These are data from the spring polls from February to June, aggregated. These are black respondents by the race of the person who interviewed them. There's no difference, as you can see, in expressing their support for Obama versus John McCain. This is before Obama got the nomination.
White respondents, however, did show a difference. The difference is that-- and these are big numbers of cases-- so a seven-point lead for McCain when whites were interviewed by a white interviewer turned into a one-point lead when they were interviewed by black interviewers. There's a lot more analysis of this to do. But it seemed to be, even then, restricted to only certain kinds of respondents.
Republicans, no difference based on who interviewed them. Independents, small difference. Democrats, the largest difference, in terms of answering whether they were voting for Obama or McCain, depending on who talked to them.
When it came to education, little difference-- some difference for college-educated people. Much bigger difference among people with less than a high school education. And those people with postgraduate education, actually, it went the other direction. More likely to say they were voting for Obama if they were being interviewed by a white interviewer.
Age-- no impact for older voters. No impact for those under 45. Those baby boomers, who happened to have come of age during the civil rights movement and during the 1960s, were actually the people for whom it showed up the most. Where were they? They were in the Northeast. Think of this. It's people like me, people in this room now, in terms of potentially having the biggest effect.
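The margin comparison behind these breakdowns can be sketched in a few lines. This is an illustrative toy calculation with made-up respondent counts chosen to reproduce the seven-point and one-point leads mentioned above, not the actual CBS News microdata:

```python
# Toy illustration of a race-of-interviewer effect: compare the candidate
# margin among respondents interviewed by white vs. black interviewers.
# The counts below are hypothetical, not the real poll data.
from collections import Counter

# Each record: (interviewer_race, respondent_vote)
responses = (
    [("white", "McCain")] * 535 + [("white", "Obama")] * 465 +   # white interviewers
    [("black", "McCain")] * 505 + [("black", "Obama")] * 495     # black interviewers
)

def margin(interviewer_race):
    """McCain's lead in percentage points among respondents
    interviewed by interviewers of the given race."""
    votes = Counter(v for r, v in responses if r == interviewer_race)
    total = sum(votes.values())
    return 100 * (votes["McCain"] - votes["Obama"]) / total

lead_white = margin("white")        # 7.0-point McCain lead
lead_black = margin("black")        # 1.0-point McCain lead
effect = lead_white - lead_black    # 6.0 points: the race-of-interviewer effect
```

The same comparison can then be repeated within subgroups (party, education, age) to produce the kind of breakdown described above.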
But the good news-- those polls came from February through June-- is that it didn't continue. It was expressed in the spring and not in the fall. We actually saw a race of interviewer effect on the Obama-Clinton question in 2007. It disappeared in 2008, and there was no race of interviewer effect in any of the four fall general-election CBS News polls. So that was a good sign.
But people continued to believe in the Bradley effect. They really thought it was there. Just last week, I found 234,000 Google pages on the Bradley effect and Obama.
And Obama supporters continually used the Bradley effect to encourage turnout. Democratic pollsters would talk about how a seven-point lead in the polls wasn't going to be enough. That was going to be wiped out. There was going to be this Bradley effect.
There's obviously a partisan reason to do this. It encourages turnout. And commentators, it seemed, couldn't resist discussing race.
My husband sent out that video link to a lot of our friends, most of whom are well-educated people of a certain age who-- they live all over the country. But they're fairly liberal. And several of them came back and said, well, that's really nice. Kathy looks really good. But boy, no, I know it's going to happen. I know it's going to happen.
Why did people assume it was going to take place? Well, one of the things that we found out was that many Americans, especially Obama voters, attributed racial motivations to people who disagreed with them. Here's a pair of questions we asked in October.
We did it in two surveys. Is there anyone you know who supports Barack Obama mainly because Obama is black? Is there anyone you know who does not support Barack Obama mainly because Obama is black?
In early October, about a quarter of people overall gave a yes answer-- they said, yes, they knew somebody on one side or the other. But there was a difference based on who they were supporting. This is attributing motivation-- knowing somebody voting on race-- by Obama voters and McCain voters. And you're going to see two numbers here: knowing somebody who is voting for Obama because of his race, and knowing someone who is voting against Obama because of his race.
There's no difference among McCain voters. About 30% say they know somebody on both sides. But 40% of people who supported Barack Obama claimed that there was someone they knew who wasn't voting for him mainly because he was black. Just 13% said they knew somebody voting for him mainly because he was black-- ignoring that positive attitudes about African-Americans could lead you to vote for Obama.
No. This was more than two to one. This is a pretty sizable number. It's a number that needs a lot more analysis. But it does suggest that we want to assume-- and it's Obama voters, so it's probably those Democrats and independents. We want to assume that race plays a role with people who don't agree with us, less likely with people who agree with us.
So was there a Bradley effect in 2008? Some indication that that might happen in the early part of the campaign. That was gone by the fall. But discussion didn't go away.
And so I guess I'm going to say that race matters. That didn't mean people lied about their 2008 vote. There is no evidence of a Bradley effect, no matter how much the discussion was about it. But I am sure it's going to live with us for at least another couple of decades, until we get rid of that.
I mean, people are perfectly capable of saying they're not voting-- they were perfectly capable of saying they were voting for Barack Obama and of giving reasons for it. They may have been motivated by race. But they weren't lying to us about it. Thanks.
YASAMIN MILLER: Thank you.
Our third panelist is the Charles Horton Cooley Collegiate Professor of Psychology at the University of Michigan, professor of marketing at the Ross School of Business, and research professor at Michigan's Institute for Social Research. He received his PhD in sociology from the University of Mannheim, Germany, and a Habilitation in psychology from the University of Heidelberg.
Now, I actually had to look that up because I didn't know what that was. And it's actually the highest academic qualification a person can achieve after getting a doctorate. So I don't know how many other people knew that. But it's only in Europe and Asia.
So prior to joining Michigan, he taught psychology at the University of Heidelberg and served as scientific director of ZUMA, an interdisciplinary social science research center in Mannheim. His research interests focus on human judgment and cognition, including the interplay of feeling and thinking, the socially situated nature of cognition, and the implications of basic cognitive and communicative processes for public opinion, consumer behavior, and social science research. He was awarded the Wilhelm Wundt Medal of the German Psychological Association for contributions to psychology-- sorry for messing that up-- and the Thomas M. Ostrom Award of the Person Memory Interest Group for contributions to social cognition. And again, I'm just naming a few awards, because this whole panel is very distinguished.
He's a member of numerous scientific organizations, including being a fellow of the American Academy of Arts and Sciences and the Association for Psychological Science. And please welcome Professor Norbert Schwarz.
NORBERT SCHWARZ: Thanks.
Well, do people lie in surveys? Let me find out where this thing moves here. From the traditional survey researcher's perspective, that's very simple. There's a tacit assumption underlying much of survey research that we rarely talk about but that late colleague, Angus Campbell, has very clearly articulated. You assume that people know what they do, they know what they believe, and they can tell you with candor and accuracy.
And from that perspective, there's two things that you have to do. You have to make sure that accuracy is possible by asking questions that are meaningful and that people know something about, rather than questions that they can't answer anyway. And you have to do it in a way that allows for candor, which basically means you want the answers to be confidential. And if you can do it at all, you want them to remain anonymous. That's the idea.
If you do that, then any discrepancies that you observe between the behavioral reports and the records that you can access, or between the reports people give in different circumstances, suggest that they are not quite being candid. And reports in different circumstances means things like giving different answers to different interviewers, giving different answers when the survey has different sponsors-- say, a survey done by the Democratic Party versus the Republican Party-- being influenced by question order, and so on. And the two standard interpretations are, first, they don't tell you what they know. That has certainly been the most popular one for many years of survey research.
Or maybe they didn't quite do the work. So instead of sitting there and really thinking about what they know, they give you a snap answer that's highly influenced by something that it shouldn't be influenced by. Those are the standard assumptions. That underlies the concern about lying in surveys. And that makes lying in surveys such a popular topic.
I will argue that, in fact, we don't know if people lie in surveys, and that it's surprisingly hard to figure out if they lie in surveys. We know from lots of psychological research-- which already showed up in what Steve Ceci said-- that memory and judgment are highly constructive. Your facts and your beliefs are not sitting there in memory, ready for use, so that you just retrieve them. You have to think about these things.
In most cases, you have some foggy idea of what you do and what you believe. But how much you did it, how often you did it, how strongly you believe it-- that all depends on what you happen to think about at that moment, what the alternatives are that you considered, and so on. So by and large, the answers are formed when you're asked, based on what comes to mind at that time.
There are strong contextual influences. And the discrepancies that you observe between different interviewers, between different survey settings, between different question orders, and so on do not necessarily mean that anybody is lying. They may or may not be lying. The sheer fact that you have a discrepancy doesn't tell you that.
In fact, when you suppose, for the sake of the argument, that perhaps they actually don't know, a lot of other things make sense. And we find empirically that reports shift as a function of context even when people have no reason to lie-- there is no reason for self-presentation because it's fully anonymous-- or when the measure is not transparent and people actually wouldn't know how to lie, when you have no idea what you should do in the situation to look good. And I'll give you a few examples of that. I take the examples from the area that has mostly revitalized the interest in lying, which is the Bradley effect and the race of interviewer effects and so on. So it's the case of racial attitudes.
In many, many survey studies, you find that white respondents report more favorable attitudes towards blacks as a group-- in other words, African-Americans in general-- when the interviewer is black than when the interviewer is white. Similarly, African-Americans report more favorable attitudes towards whites when the interviewer is white than when the interviewer is black. That is typically interpreted as evidence for lying, or perhaps evidence for politeness.
And let me remind you, Howard Schuman has said for many, many years, I'm not sure why everybody is upset about this. I mean, if need be, I'd much rather have them be polite to my face than nasty to my face. So it's not clear if you should really be upset when negative racial attitudes are under-reported in interracial interviews. There's something to be said for politeness.
On the other hand, there's a different perspective on this. You could say, well, think about it. How do people make judgments about the out-group? In many cases, you can show in psychological experiments that it's heavily based on the exemplars-- the members that come to mind at that point in time, what's considered prototypical.
So here you are-- and these are older studies, which were all conducted face to face-- sitting in your living room, chatting, having a pleasant conversation with this nice, well-educated, middle-aged black woman. And you like it. So how do you feel about blacks?
Well, this is nice. [LAUGHS] Not surprising that your attitude would be more positive than if I had asked you the same question after you just crossed the street to avoid the two teenage boys. A very different situation, and you would expect that you're getting these kinds of context effects.
So let's look at some experimental data that illustrates these kinds of things. And I'm picking ones that are minimal manipulations. So here are a few situations with no need to lie. So there's full anonymity. There's no social interaction. There's not even a need to be polite.
In this study, we had participants do a number of silly things in experiments. I'm picking this one, where we told them that we are interested in how well they can estimate the height of celebrities. You think about celebrities, and you guess how tall they are.
And the manipulation here is that the list of celebrities we give you includes either no African-American or one of a number of well-liked African-Americans at that time-- for example, Michael Jordan or Oprah Winfrey and so on. The only difference is that there's no African-American among the four, or the last one that you get is Michael Jordan or Oprah Winfrey or so on. You guess how tall these people are. And a few questions later, you get items from the modern racism scale, where a high score means that you have adverse beliefs about race.
Data are very straightforward. These are college students. They're not terribly racist, but they get moderate scores on the modern racism scale. This is when there's no African-American on the list.
And if there's one African-American on the list that they estimate the height for, their adverse beliefs about race drop, and they're less racist-- meaning that thinking about the height of Michael Jordan has the same effect as being interviewed by a black interviewer. But you have no reason here to be polite to the guy who's sitting there, or to be lying, because it's completely anonymous. Nobody knows who you are.
So if a liked exemplar is brought to mind, the group overall is evaluated more favorably, because the liked exemplar that I'm bringing to mind in this experiment is a member of the group and becomes part of your representation of the group, which makes your judgment of the group more positive. At the same time-- and again, arguing against the lying thing-- liked exemplars are good for the group but bad for other exemplars. Tom Gilovich is a wonderful example of a psychologist. But if he's not your most liked psychologist, then compared to your most liked psychologist, Tom isn't that great anymore, all right? So you get these standard effects, where an outstanding exemplar is good for the group but actually bad for other individual exemplars, for whom he becomes a standard of comparison.
Here's a study in which we used Martin Luther King Day as a naturalistic manipulation of that. So MLK Day brings to mind Martin Luther King, obviously, as well as positive norms about race relations. Under a judgment logic, you would expect that Martin Luther King Day helps the perception of African-Americans as a group but hurts African-American politicians, because most of them are just no Martin Luther King. On the other hand, if you use a desirability logic, then Martin Luther King Day surely should teach you that this is a day to get with the program and not tell people about your racist attitudes, which should help both African-Americans as a group and black politicians.
We did this as a web experiment around MLK Day 2004. We assigned students randomly to answer the questions on a Monday two weeks before MLK Day, on MLK Day, or two weeks after MLK Day. And these are different students, white, randomly assigned to conditions. And we are asking some questions, either about African-Americans as a group or towards Colin Powell as a politician or Jesse-- I think Jesse Jackson.
In this case, high numbers mean a positive evaluation. Before MLK Day, students feel about African-Americans as a group relatively neutral, 4.5 on a 9-point scale. On MLK Day, the group attitude goes up to 5.3. Two weeks after MLK Day, MLK is off your mind, and everything comes back to baseline, OK? Not terribly surprising.
Colin Powell is more positive before MLK Day than the group in general, 6.4. On MLK Day, Colin Powell drops. He's no MLK, after all.
But once MLK is off your mind, he's doing fine again. All right? These are context effects where exposure to liked exemplars produces more positive evaluations of the group but more negative evaluations of other specific group members for whom your exemplar can serve as a standard. We get these kinds of things under conditions of full anonymity and no interracial interaction that would prompt social desirability, politeness, and so on.
Now, that may not be the whole story. While survey researchers worry that you may lie to the interviewer-- basically assuming that you know what you think but you're not going to tell them-- psychologists actually worry that you want to lie to yourself: that, in fact, you don't even want to admit your racist attitudes to yourself, that you would rather see yourself positively and not only present yourself positively. And if that were the case, then anonymity would not be the answer, because you would still know about yourself. So what you would need in this case is something like nontransparent measures-- measures where you can assume that if people have no insight into what their response means, then they do not have to confront the nasty side of themselves.
And a number of measures have done that-- the implicit measures of attitudes, which promise to capture people's true attitude even if they don't want to tell you what they presumably know, may not want to admit it to themselves, or may not even know it themselves. Now, these are big promises. And if you're skeptical, so am I.
The advantages of these measures, presumably, are that the measure is not transparent, participants are not aware of what they reveal, and the responses are outside of strategic control. There are many variants of this, and they're all described in the book that I'm advertising up there in the corner. One of the best known is the [INAUDIBLE] procedure, which goes roughly like this-- and I'm simplifying things a bit.
You're shown a word like "awful" or "pleasant." And your task is to say, as fast as you can, if "awful" is a good word or a bad word, if "pleasant" is a good word or a bad word. You hit a key for whether this is a word with a good or bad meaning. The trick is that before you see the word "awful," "pleasant," or other such clearly valenced words, there's a little flash. And hidden in that flash, masked in it, is the attitude object.
So in this case, you're flashed "black" or you're flashed "white" and other cues for racial groups. And of interest is whether those racial cues in the little flash, presented reasonably outside of your awareness, influence how fast you are in saying that "awful" is a bad word. And this produces essentially an affective congruency effect, such that when I flash at you a group that you dislike-- a bad group-- you are faster in recognizing a bad word as bad and slower in responding that a good word is good, whereas when I flash at you a good group, it slows you down in saying that a bad word is bad, and it speeds you up in saying that a good word is good.
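The scoring of such a priming task can be sketched roughly as follows. The trial data, timings, and scoring rule here are illustrative assumptions for exposition, not the published procedure:

```python
from collections import defaultdict

def priming_score(trials):
    """trials: list of (prime, target_valence, rt_ms) tuples.
    Returns, per prime, mean reaction time to good-word targets minus
    mean reaction time to bad-word targets. A higher value means
    relatively slower responses to good words after that prime,
    i.e. a more negative evaluation of the primed group."""
    rts = defaultdict(list)
    for prime, valence, rt in trials:
        rts[(prime, valence)].append(rt)
    mean = lambda xs: sum(xs) / len(xs)
    primes = {p for p, _ in rts}
    return {p: mean(rts[(p, "good")]) - mean(rts[(p, "bad")]) for p in primes}

# Invented trial data: reaction times in milliseconds for judging target
# words as good or bad after a masked "black" or "white" prime.
trials = [
    ("black", "good", 640), ("black", "good", 660),
    ("black", "bad", 590),  ("black", "bad", 610),
    ("white", "good", 600), ("white", "good", 620),
    ("white", "bad", 630),  ("white", "bad", 650),
]
scores = priming_score(trials)
# scores["black"] is +50 ms (slower on good words after the "black" prime),
# scores["white"] is -30 ms: an affective congruency asymmetry.
```

The point the code makes concrete is the one in the talk: to fake a positive attitude on this measure, you would have to shift your own key presses by tens of milliseconds in a direction you cannot work out from the task itself.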
To lie on that thing is reasonably tough. You would have to understand that if they're flashing at me something that I don't quite see, and that thing says "black," then I should be really, really slow in saying that "awful" is a bad word and really fast in saying that "pleasant" is a good word. And I would need to adjust my responses by a few milliseconds, which is the range in which this thing works, right?
So you can assume that lying is not there, which gives us hope that there are no self-presentation effects, and the context effects would go away. And you would really see people's true attitudes. And you would get insight into stuff that they don't even know about themselves. OK.
The surprise for the researchers who did this work was that you get the same context dependency. And you get it under conditions where people honestly wouldn't even know how to lie. So there's less negativity when you have just been exposed to liked black exemplars, which parallels the experiments that I showed you, which we had done 10 years earlier.
This is a study by Dasgupta and Greenwald. There's less negativity when the experimenter who is in the room is black rather than white. And that parallels the race of interviewer effects in surveys-- except that in this case, it's very hard to assume that your participant is sitting there saying, I'm not going to reveal to this nice black guy that I'm really racist, and I'll hit my keys differentially fast according to a logic that I don't actually understand.
So what does this kind of thing mean? I think it means that lying is not the story. Attitude reports capture current evaluations that are formed in a specific context, not stable things that people know.
And the observed contextual variation does not necessarily indicate lying. Surely, there can be contextual variations where people actually lie. But we are much too fast, and we oversimplify things, when we jump to the lying story.
There's a number of research approaches that look at this, which you can think about as situated evaluations. One of them is done here by Melissa Ferguson, one of your colleagues. And we all assume that evaluating is for doing and is geared towards the present situation.
When you think about it, what should a system of evaluation look like that helps you make it through the day? It should actually be highly context sensitive. You wouldn't want to rely on your knowledge from years ago. You would actually want it to help you orient yourself right now, in this situation, in this specific context, with the goals that you have on mind now.
So it should weight recent experience heavily. It should privilege stuff that is relevant now. While being informed by the past, it shouldn't be driven by your past experiences.
And there are a number of models of attitude construction that do that kind of thing. And they predict systematic patterns of context effects and their size. And I took the risk of anticipating what Kathy was going to say-- which kind of worked out-- based on a summary she had sent us earlier.
The less people know about the attitude object-- Obama-- which is when you go back to the primaries, right? Obama was relatively unknown-- the more they would have to rely on their general attitude towards the group. You don't know much about the guy, but you know that he's African-American. I mean, everybody's hammering that home. And so you would be more likely to dwell on the group membership information at that point.
Moreover, if your choice is between Obama and Clinton, many policy issues become non-discriminators, because they actually agree on most of these things-- again making the group membership much more relevant, because on other things you have fewer distinctions. So in that situation, we would have to predict theoretically-- had we had to make a prediction, which we didn't do, so it works perfectly in retrospect-- that the race of interviewer effect is bigger. If group membership is more prominent as an attribute in that decision situation, then the group membership information brought to mind in the interview situation by the race of the interviewer would have more weight. As knowledge about Obama increases and the choice alternative becomes McCain, where you have actual policy differences rather than just group membership differences, the influence of race of interviewer should go down, because the relevance of group membership should go down. And that would roughly give you those kinds of patterns.
So do respondents lie? Well, I suppose, yes, sometimes. If you ask people, have you recently killed your spouse, and they have, they probably are not going to tell you. But for many other things, where we have quickly and very comfortably assumed that people lie whenever there is any discrepancy as a function of race of interviewer, context, and so on, I think we have probably been wrong.
We actually do not have the evidence that these things are lies. We may just as well think about them as contextual influences on the judgments that people make in that situation. And that reflects their actual feelings and preferences at that point in time.
And unfortunately, the classic example of lying in surveys, which is the race of interviewer effect, may very well turn out to be the worst example you could pick if you wanted to make the point that people lie. And that's my story.
YASAMIN MILLER: Wonderful, thank you.
I'd like to thank all the panelists and then open this up to the audience. Can we have questions?
MARCUS: How you doing? My name is Marcus Walter. I'm a first-year grad student and atmospheric scientist here at Cornell.
But I had a question. I've noticed recently that whenever I go online and look at certain websites where they allow you to comment, it seems like people are really way more brutally honest about a lot of things when they write online versus, I think, talking to a person. And that goes along with what you all were saying.
Have you all noticed a trend toward, or are you more focused on, doing online surveys versus in-person, just because you get that honesty, I guess [INAUDIBLE].
KATHLEEN FRANKOVIC: Well, I mean, when you look at what people write online, you also have to deal with the fact that the people who are motivated to write online are probably different from the average person. But thank you for raising that because that first survey that I showed, the Sniderman and Stiglitz survey, where racial attitudes were such a big component of the analysis, was actually conducted using the Knowledge Networks Panel, which is a randomly selected set of people who are interviewed online. And if they don't have web access, they are given web access. So that was definitely a case where people were answering questions truly anonymously, not with any sort of interaction.
There certainly is an interest in pursuing surveys online. The polling community-- I mean, the people like myself, who are doing things for external consumption and for news purposes-- we haven't gone there yet. We haven't gone there yet because of the nature of most online panels, which are all opt-in, which are basically self-selected.
So we still do telephone surveys. Now the Knowledge Networks Panel is an interesting, different kind of approach. And we've used it for certain things. We haven't used it for our regular polling.
NORBERT SCHWARZ: Let me add to that. It's a good question, because it illustrates so nicely how tricky this business is. So, aside from the self-selection, obviously, the people who write in on blogs are much more committed and probably much more extreme. But even then, what would that mean? What would be the evidence for being more honest or less honest?
So many of these people who give you very extreme opinions would probably not follow through on them in everyday life. They would probably not voice them with the same extremity or necessarily act on them. And what are we then considering their true response?
Or, like, in Steve's studies, right? I mean, what would be the behavior against which you compare that? And again, it reminds me of Howard Schuman's saying: I'd much rather have someone with racist beliefs who doesn't act on them than someone who does act on them. Now are we going to go out and tell the racist who manages to stay polite that he's really a nasty guy because, on top of it, he's lying as well? [LAUGHS]
YASAMIN MILLER: Carl.
CARL: Thanks. Some great presentations. I'd like to follow up on what Professor Schwarz said, though, about somebody might lie about, say, if they've committed murder. And in your examples, all three of you have had to deal with lying about issues of relatively minor importance or about projected behavior.
And I'm curious if I could follow up. And then I'll add the work that my colleagues and I do has to do with surveys of antisocial behavior. We have a survey with Yasamin that's about to go in the field on the cheerful topic of elder abuse, for example.
And one of the things that we have found, for example, in surveying nursing home staff about their own potentially abusive behaviors is, first of all, how surprisingly honest a lot of people are about even very stigmatizing things. But I'm curious, briefly, to what extent is there research on the accuracy of self-report in areas like child abuse, wife abuse, and criminal behavior? Do we know much about that?
[A PREGNANT SILENCE]
KATHLEEN FRANKOVIC: I don't know the data. I don't know.
NORBERT SCHWARZ: It's very hard to see how you would find out. I mean, the typical approach to something like this is you take a sample of people who were caught driving under the influence, right? And you call them up and you ask them if they have ever been caught driving under the influence or other such things-- if you ever drive after drinking, and so on.
And you find under-reporting, usually not dramatic, but somewhere in the range of 20%. But that's what you find for everything else. So when you ask people, have you been eating out at a restaurant last week, and you compare it to their credit cards, they're wrong on that one, too.
So which of these things do you now attribute to lying? No? The restaurant thing, we say, oh, that's memory. The driving under the influence or do you ever drive when you're drunk, a year later, that's not memory?
Charlie Cannell in the mid-'60s did a record check study on patients in a hospital. And he found that within one month, an overnight hospitalization-- where you stayed in the hospital at least one night-- was under-reported by about 40%. Now that does not necessarily mean that they completely forgot that they were in the hospital.
But now when you say, within the last 30 days, it seems so far ago. This was, like, a few months ago. Yeah, yeah, I was but not within the last 30 days.
You make these memory errors. Things often seem misdated. Most surveys actually ask you to date them correctly because they're asking for a time frame. And so there are many, many ways in which you can get erroneous reports, which may or may not reflect lying.
And I think we're too quick to say it's lying when it's negative and to say it's memory when it's not negative. And we have no clue.
KATHLEEN FRANKOVIC: Yeah, if there's a literature, I don't really know. But there's a literature about survey techniques that involve asking people to choose from a list of things and tell you how many things they've done. Included in that is one of these bad behaviors.
And then comparing that with people asked a list excluding that behavior. And the assumption is that the number will increase, and the increased number overall will give you a measure. And frankly, I just don't know that literature.
NORBERT SCHWARZ: I mean, yes. I mean, the answer is that you get somewhat higher responses with this. So, I mean, you say, have you done any of these behaviors? And you have five or six behaviors. And so the difference would be that sixth behavior-- I mean, something like that. There's various ways of doing that.
And you got somewhat higher responses for highly memorable more or less criminal behaviors.
KATHLEEN FRANKOVIC: [INAUDIBLE]
NORBERT SCHWARZ: Yes.
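[The list technique Frankovic and Schwarz describe above can be sketched in a few lines. This is a minimal illustration, not anything presented by the panelists: a control group counts how many of several innocuous behaviors apply to them, a treatment group gets the same list plus the one sensitive behavior, and the difference in mean counts estimates the sensitive behavior's prevalence without any respondent admitting it directly. All data below are made up.]

```python
def item_count_estimate(control_counts, treatment_counts):
    """Estimate the prevalence of the sensitive item as the
    difference in mean reported counts between the group that saw
    the list with the sensitive item and the group that did not."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_counts) - mean(control_counts)

# Hypothetical responses: control respondents saw 5 innocuous items,
# treatment respondents saw those 5 plus the sensitive one.
control = [2, 3, 1, 2, 3, 2, 1, 2]
treatment = [3, 3, 2, 2, 4, 2, 2, 3]

prevalence = item_count_estimate(control, treatment)
print(f"Estimated prevalence of the sensitive behavior: {prevalence:.2f}")
```

[Because no individual's answer reveals the sensitive behavior, the technique trades away individual-level measurement for an aggregate estimate, which is why, as Schwarz notes, it works best for highly memorable behaviors.]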
STEPHEN CECI: And in self-protective situations, our lab has done a number of studies with children where you're watching them-- filming them-- through one-way glass. And they're told not to touch something, not to open a box that has a present in it.
And the researcher leaves the room. And we have it on film that many of the kids violate the instruction. And they do look in the box.
And when they're interviewed later and they're asked, a very high percentage of children lie. When I say, a high percentage, it's, I don't know off the top of my head, but it's over half. So it's a self-protective type of lying where this person's saying, did you, when I left the room, violate the instructions? And the kids-- you see them right on tape just lying.
The reason I think it's lying, Norbert, in that situation is they do a lot of other things, as well. And there is some noise in what they're reporting. But it's nothing like the 50% or 60%.
NORBERT SCHWARZ: And it's also so close--
STEPHEN CECI: It is.
NORBERT SCHWARZ: --so close in time.
STEPHEN CECI: Yeah, it's not a memory. Yeah.
SPEAKER 2: What happens in cases where categorization of a class is a little bit more ambiguous? So if you're on the phone with someone and you hear a little bit of an accent or an inflection and you're not quite sure, or race is a very dynamic thing where you're not quite sure of a race, does the mind tend to fill in and try to impute these things? Or what happens? Or is there an effect?
KATHLEEN FRANKOVIC: Well, I think you raise a very important question. The early race of interviewer effect studies were done with in-person interviewers. So there was visual confirmation.
Now, with most research being done by the telephone, you're relying on vocal patterns, accents, and sounds. And the more sensitized you are to that-- to race-- the more you may be looking for clues-- you as a respondent.
And that probably-- I never thought of this before. But that probably means that race of interviewer effects in general might have been more extreme in the 1970s and 1980s than they are today. There have been people who have, at the end of a survey, had interviewers ask respondents, what race do you think I am? I find that a very awkward question.
We did this in New York City about 20 years ago. And a lot of people basically say, oh, I don't know. And you know they have a guess. But they're not willing to say. What did happen--
NORBERT SCHWARZ: It's even better when you say, what sex do you think I am?
KATHLEEN FRANKOVIC: Oh no! [LAUGHS] But--
NORBERT SCHWARZ: I've seen that.
KATHLEEN FRANKOVIC: When they do answer that, they did give the right answer, most of the time. And obviously, it really depends on the interviewer and the sound of the interviewer. There are going to be people who you're not going to be able to classify-- nobody would be able to classify. So we just use the self-categorization of the interviewer in this analysis.
It does seem to suggest that people, at least early on, were-- many people, at least-- had some idea.
SPEAKER 2: But does the respondent try to figure this out? In other words, if they're put in a position where there's an ambiguous [INAUDIBLE], does the respondent try to impute or figure that out? In other words, does ambiguity leave the respondent in a position where they need to fill in some blanks? And does that filling in the blank affect their response?
KATHLEEN FRANKOVIC: If it's important to the respondent in terms of this interaction, it's a person-to-person interaction. And for some people, you're going to want to have a sense of who you're talking to. And you're going to start doing exactly what you're doing. For other people, it doesn't matter.
SPEAKER 3: I'm a research assistant at the Survey Research Institute. And most of what I do at work is calling people on the phone and performing interviews. What I'm wondering is, since I'm not a part of the interview design process, and a lot of the things that you talked about are things that can be applied at that stage, what things, other than the standard stressing of anonymity and the freedom to participate, could I do to increase the accuracy of response?
KATHLEEN FRANKOVIC: Not response rate-- the accuracy of response?
SPEAKER 3: Yeah.
NORBERT SCHWARZ: Speak slowly.
Leave people time to think about stuff. Don't engage explicitly in, I mean, confirmation or disconfirmation without being so funnily silent that it's also conversationally awkward. I mean, many interviewers try to not-- I mean, you can do one of two things, right?
You could say, oh, I agree. I mean, you don't want to go there. Or, really? I mean--
--all of these things happen. You don't want to go there. But you also don't want to be completely silent because in normal conversations, that's also not a thing.
You have to say something like, I see, thank you, or other such things to move this along. And most of all, on things like memory questions, it takes people time. Encourage them to take their time.
Charlie Cannell and his colleagues at Michigan have done many studies in which they show that making the question redundant helps. I mean, it's nothing you can do as the interviewer. But it illustrates this time thing.
So if I say, how many days have you been sick last month, that's much worse than if you say, our next question is about illness. We're interested in whether you have been sick. Think back over the last month. Thinking of just the last month, how many days last month have you been ill?
That gives people time to get there. And instead, most of the questions we write are very short. And then the interviewer is also very fast. And things that people know and could be reasonably accurate on get screwed up.
KATHLEEN FRANKOVIC: I think that this is a big difference between what academic survey researchers can do and what people who are out there in the commercial world can do. Our interviewers hate those long questions.
NORBERT SCHWARZ: Of course.
KATHLEEN FRANKOVIC: They just hate them.
NORBERT SCHWARZ: Of course.
KATHLEEN FRANKOVIC: And so--
NORBERT SCHWARZ: And you pay them by the interview and not by the hour.
KATHLEEN FRANKOVIC: No, we don't.
NORBERT SCHWARZ: No, you don't?
KATHLEEN FRANKOVIC: No. No, that would be wrong. But because it is time consuming, and it does sound redundant and repetitive, it means that we're not the best way of measuring some of the things that have to do with memory of anything short of major events. We're not good when it comes to, have you done something in the last 30 days, the last 60 days. People telescope in time. They do all sorts of things with time.
But one thing that I would note is that-- and again, this is commercial research firms. House to house, there are differences between firms and their expectations. How willing are you to accept a "don't know" answer, an "I'm undecided" answer? There are some people who believe you have to press people on every single question.
I think, personally, that pressing people on every single question, people learn. They know that if they say, I don't know, they're going to hear this question again, however long it is. And it's going to prolong the interview.
So people will start giving you responses. Do you want that? I mean, these are judgments that you have to think through when you're in the process of doing a survey that really affect accuracy.
STEPHEN CECI: One thing we have-- not we, our lab, but other people in the field have found over and over with interviews with kids is if you repeat a question, you're looking for trouble because it targets to them that they think you think they gave the wrong answer the first time. And you see a lot of switching when the question's repeated.
YASAMIN MILLER: Yes.
LARRY: My name is Larry [INAUDIBLE]. And I work over in the Employment and Disability Institute. And one of the projects we're working on is actually surveying others about their attitudes towards individuals with disabilities.
KATHLEEN FRANKOVIC: And you're not working with us? No question for you. I'm sorry. Next.
LARRY: But the literature has suggested one way to just help you interpret what people are saying or their attitudes is to incorporate social desirability items into your survey instrument. And what have y'all seen, or what's y'all's own attitudes about incorporating social desirability items to help you interpret whether or not these people are trying to portray themselves as being more desirable than they might actually be?
NORBERT SCHWARZ: What items?
KATHLEEN FRANKOVIC: Yeah, what items?
LARRY: Well, social desirability items, like, I think good thoughts about my colleagues-- always, sometimes, never. And so, I mean, there's a whole area of literature on social desirability that if someone is portraying themselves too good, then most likely, their responses to their other items are--
NORBERT SCHWARZ: So you're using something like the Marlowe-Crowne scale or the [INAUDIBLE] scale or--
NORBERT SCHWARZ: To the best of my knowledge, the individual differences on social desirability in the studies that provide the best evidence that-- it's a personality type study that provides the best evidence for that-- explains about 10% of the variance in highly loaded items, which is probably not-- it's probably not worth your time [LAUGHS] is my sense of that.
I mean, let me also emphasize, it's often not even clear what the socially desirable answer would be in these things.
YASAMIN MILLER: And we have time for two more. You and then Peter. [INAUDIBLE]
SPEAKER 4: I'm the librarian here on campus. And we often conduct surveys on user satisfaction, interest in services-- things like that. The surveys are generally either paper surveys or web surveys where there isn't a selected population. And self-selected individuals respond.
And it seems to have become an article of faith that you have to offer some sort of financial incentive in order to increase participation and response rate. So I'm curious, the only mention I heard of incentives was Professor Ceci's discussion about the faculty survey where the $35 debit card--
STEPHEN CECI: Yeah, but it turns out, it didn't really matter.
SPEAKER 4: Without affecting the--
STEPHEN CECI: Yeah. The unpaid people answered virtually the same as the ones we gave the debit cards to.
SPEAKER 4: Is there any evidence that offering an incentive creates any sort of desire to please the surveyor? And does it matter whether it's just a few, like, gift certificates to a small number of randomly selected participants or a situation where everyone who participates is given some?
STEPHEN CECI: I don't know.
KATHLEEN FRANKOVIC: I mean, there's a big literature on this. And Public Opinion Quarterly routinely reports articles on this. My problem with incentives is doing telephone surveys, and even doing web surveys, giving incentives gets rid of the whole notion of confidentiality because if you're going to mail something to somebody, you've got to know who they are.
And I think that's an issue that works against you. There's a lot of examples of cases where incentives don't do very much to increase response rate. There are some cases where it does.
There's a debate now when it comes to surveying people on cell phones, which many of us started doing in 2008 because you really don't want to be not representing a certain set of people. And the Pew Research Center, I understand, gives people $10-- pays people to do a cell phone interview.
We don't because of the fact that it requires the breaking down of the assumption of confidentiality. Does it matter? I don't know.
NORBERT SCHWARZ: I think this will soon be a historic issue, I mean, in the sense that you can do most incentives with electronic payments that would not require that.
KATHLEEN FRANKOVIC: That's true. And you'd send it to the phone, right.
NORBERT SCHWARZ: So after the interview, they would get minutes credited--
KATHLEEN FRANKOVIC: Right.
NORBERT SCHWARZ: --to their phone thing as a text message. We've done web surveys with incentives, where what you're getting at the end is a coupon code for an Amazon purchase. And we do not have to send it to you.
You get that code. It's in your interest to print it. If you don't print it, you can't use it.
It actually works out fine for the researcher. The rate of people actually using that coupon is only about 60%, so you're saving some bucks.
YASAMIN MILLER: We've done a couple of tests with the Empire State Poll and the Cornell National Social Survey. Those results will come out shortly. But basically, the bottom line is we're not finding any significant increase in response rates. Now, we haven't looked at all the items.
But we do see within one cohort-- the young, under 25-- that it does impact response rates. These are the difficult to reach, the cell-phone-only populations. But we've done this both at the state level and the national level. And so for the amount of work and money, and the loss of confidentiality, it's not clear that it's worthwhile.
And we also did an interviewer incentive, as well, because these are phone interviews, and no impact.
SPEAKER 4: No evidence that it affects the nature of the responses?
YASAMIN MILLER: No. Peter, final question.
PETER: I was wondering whether or not there is a potential disconnect between some of what we saw in the first presentation and the last one. And so with this pretty substantial gap between self-report and what your colleague would do, I think Professor Ceci was suggesting, if you want to know what's more accurate, lean toward the report of what the colleague would do.
And then, in obviously different studies, but with interviewer effects and a very systematic, logical explanation of, well, if you have minimal experience with a particular racial group and then you have an interaction for an extended period of time, that would affect your results. And so context and systematic context effect-- and so especially when you compare two people of the same level, the same department, reporting in the same biased way, is there a disconnect between what you're reporting? Or how would we explain both of these outcomes?
STEPHEN CECI: Well, in both ours and Kathy's, as well, where they're being asked, do you think you'd vote for a qualified black candidate for president versus how about someone you know, you're also seeing that same kind of disconnect. And I guess where I was coming down was, if you had to decide which is closer to what reality would be, go by their answer to what they think their friend would do rather than what they say they would do.
But I wasn't saying that that had to be defined as conscious lying, for example. I take Norbert's point that there are a lot of reasons people will flip-flop in their answers that probably are different from just deliberate lying.
NORBERT SCHWARZ: I would give just the same answer. I mean, when you make these predictions about what you would do, there are other considerations. I mean, you have information about yourself that others don't have, including your momentary affective reaction to it, how much it irritates you, and other such things, which feed into your momentary judgment but which, on second thought, you may not consider relevant.
So I think that the individual predictions would take context less into account, whereas when you're talking about others, many of your own internal reactions are not relevant. I do not know how much it pisses you off [INAUDIBLE] right. So I would go, in my prediction for you, with what I think the baseline would be.
But you may be swayed by your own affective reaction, except that when you think about this again and you talk it over with your colleagues about whether this would be a wise thing to do, you may, in the end, not do it. So both of these things would actually be honest answers. On the one hand, when you think about it now, you think you would do it. But whether you would really do it actually has a few other steps involved that you haven't yet considered when you're making this prediction. I mean, that would be my take on it.
"Race did play a role in the 2008 presidential election," but not the way most people think, said Kathleen Frankovic '68, director of surveys and a producer for CBS News, at the Survey Research Institute's Annual Speaker Series at the ILR Conference Center, Jan. 21.
She was joined by panelists Stephen Ceci, Cornell professor of developmental psychology, and Norbert Schwarz, a psychology professor at the University of Michigan.