MICHAEL MACY: Thank you all for coming to the A.D. White public lecture by Duncan Watts. My name is Michael Macy. I'm in the departments of sociology and information science. And I'm here today-- I have the great pleasure of introducing Duncan.
Before I do that I just want to mention a little bit about the format. At the end of Duncan's remarks today there will be some time for you to ask questions if you like. And then after the talk there will be a reception right outside, including a book signing if you'd like to get a copy of one of Duncan's books.
Duncan is principal researcher and founding member of the Microsoft Research lab in New York City. Before that he was a professor of sociology at Columbia, and later the director of the Human Social Dynamics group at Yahoo! Research. He's also a member of the external faculty at the Santa Fe Institute, and his research on social networks has been published in a number of leading journals, including Nature, Science, Physical Review Letters, American Journal of Sociology, and Harvard Business Review, among many others.
He's also the author of three books, all of which are outside: Six Degrees: The Science of a Connected Age; Small Worlds: The Dynamics of Networks between Order and Randomness; and most recently, Everything Is Obvious: Once You Know the Answer. His undergraduate degree was in physics, and he received his PhD in theoretical and applied mechanics at Cornell, working with Steve Strogatz.
As a graduate student his research addressed a question that has puzzled social scientists for three decades. And the question is, how is it possible on a planet with nearly 7 billion people, that there are just six degrees of separation between you and any randomly-chosen person in the world? And what makes this question even more puzzling is that for most of us, our social networks are a small circle of friends in which everybody is friends with everyone else. So how is it possible that you could get a message from you to some randomly-chosen person by giving it to a friend who gives it to a friend, and so on, in just six steps?
Well, Duncan discovered the answer, which is that it turns out it takes only a very small number of social ties to acquaintances-- like friends from high school or a distant relative-- to give even these highly clustered social networks the same small-world property of close connectedness that you would expect to find in a completely random network. And this discovery has come to be known as the "small-world effect," with really important implications in fields ranging from advertising to the spread of epidemics. And it's also rekindled-- led to a resurgence of interest in what is often referred to as the "new science of networks," of which Duncan is a principal architect.
So it is my great pleasure to welcome back to Cornell one of our most distinguished and accomplished alumni, AD White professor Duncan Watts.
[APPLAUSE]
DUNCAN WATTS: Thanks, Michael. Thank you all for coming today. It's wonderful to be back here. It's a very special place for me. Thanks for that very informative introduction. I'm not going to talk about small world networks today, but I want to start by telling you-- and Michael has already given you some of my background-- but just sort of going a little bit over my own history to sort of lead in to what I want to talk to you about.
So, as Michael mentioned, I started out life as a physicist back in the late 1980s in Australia when I joined the Navy and went to the Australian Defence Force Academy. And I got my major in physics, and I wrote my undergraduate thesis on a topic that was a very hot topic back in the 1980s, called "chaos theory." And so back then Cornell was sort of known as a place where you could come and study non-linear dynamics, or chaos, and so I came here in 1993 to do that. And instead I ended up working with Steve, and we got on this sort of long detour that I'm still on studying the structure of social networks. And as Michael mentioned, that's what I wrote my PhD dissertation on.
And then, because there were so many interesting applications of network theory inside the social sciences, I actually switched to become a sociologist from engineering. And I did a few postdocs at Columbia and then in Santa Fe and at MIT, and then in 2000 I started teaching in the sociology department at Columbia, which was a very sort of interesting experience for someone who had never taken a class in sociology when I taught my first one.
And about seven years later I was just getting used to doing sociology and thinking of myself as a sociologist, and so of course then I left and I went to Yahoo research and now Microsoft Research where I now work mostly with computer scientists.
So the reason why I'm telling you all of this is that moving around all of these disciplines over the years-- from physics to sociology to computer science-- has given me an unusual perspective on social science, both as an insider and also as an outsider. And for several years now I've been very interested in why it is that we treat the problems that sociologists think about very differently from problems that we encounter in other branches of science.
And if this seems like a puzzling sort of question to you, I want you to sort of think about either your experiences that you're having here right now as students, or that you had many years ago as undergraduates. And probably all of you are taking or took some social science courses in sociology or in psychology or in political science. And you hopefully found those courses interesting.
But my guess is that you don't find them hard in the same way that you find your math classes hard or your physics classes hard, or even possibly your biology classes hard. This sense-- that the problems of humans are not difficult in the same way as the problems of science-- is sort of encapsulated in this phrase that I'm sure you've all used or heard many times.
So this is an op-ed from the New York Times several years ago written by Bill Frist, the former Republican senator from Tennessee. And he's talking about fixing the health care system, which is maybe a sort of enormous and complex undertaking. But about halfway through this very short op-ed he steps back and he says, look, you know, this is not rocket science.
And I'm always struck when I hear this phrase because we're really good at rocket science.
[LAUGHTER]
You know? And so when NASA sends a probe to sort of hundreds of millions of miles away to go in orbit around a moon of Jupiter, it really goes where it's supposed to go and it does what it's supposed to do, with incredible precision.
By contrast, 100 years after John Wanamaker, the department store magnate, reputedly said, "half the money I spent on advertising is wasted. The trouble is, I just don't know which half," advertising executives still to this day have tremendous difficulty in measuring the effectiveness of the advertising campaigns in which they spend vast amounts of money.
A couple of hundred years after the founding of the republic-- if you read the newspaper you would know that we still argue viciously and without resolution about the role of government in our lives and whether it can solve the problems that we want it to solve.
A few hundred years of political and economic theory-- we still are obviously incapable of anticipating major financial or political crises. After decades and trillions of dollars spent on economic development by the West in various parts of the developing world, economists and development specialists are still sort of at loggerheads over whether this money has had any measurable positive impact on the countries that it was designed to help.
Even at the sort of more prosaic level of predicting the next best-seller or the next hit company, dedicated, motivated experts with lots of skin in the game have tremendous difficulty figuring out what to put their money on. And after thousands of years of educating our children, we still cannot agree on even the basics of how to improve school performance.
So in light of this vast gulf between our incredible success as a society in the physical, engineering, and medical sciences over the last hundred years, and our relative lack of success in the social sciences, it's particularly puzzling to me that in spite of all of this, rocket science still seems hard and social science still seems like a matter of common sense.
And what I want to propose to you today is that the resolution to this paradox has to do with the nature of common sense itself. So what is common sense? Well, common sense is one of these funny things that everybody thinks they can define or identify in others, and they usually think that they possess it themselves. But they always seem to think that their version is different from other peoples' versions.
So-- people send me stuff like this all the time.
[LAUGHTER]
So I'm going to go with Dilbert and define common sense as the kind of human intelligence that we rely on to navigate concrete, everyday situations. So I'll be a little more specific. I look around the room and I notice that none of you have shown up in your swimwear today. But I'm guessing when you looked in your wardrobes this morning to decide what to wear to school today you didn't sort of have to grapple with the decision about whether to put on the board shorts or your jeans and t-shirt. Because you just know what to wear to school. It's just a matter of common sense. Right?
Likewise, the way that we behave in different circumstances is highly contextually-dependent. You know, it seems perfectly normal for me to be standing up here and talking to you in this tone of voice, but later on at the reception if I sort of walk up to a group of you and start talking to you like this, it would seem very strange to you. It would seem like I had no sort of knowledge of how to interact socially. It would be like I lacked common sense.
I live in New York, and I'm sure many of you either have lived there or will live there, or at least have visited there many times. And you know it's a big, crowded city, and getting around it involves lots of rules about how to interact with people. And very often we're not even aware that these rules exist until someone violates them.
So if you've ever taken a subway at rush hour in New York you know it can get incredibly crowded, and you've got people jammed up in your face, in your armpits, and it's extremely uncomfortable. But you just sort of put up with it. But if you get on an empty subway car and somebody comes and stands right next to you, it's extremely weird. It's like they just violated some rule that you didn't even know existed until they broke it. You can do the same trick, actually, in elevators. If you just get in and face the crowd of people instead of turning around and facing the door, it's extremely awkward.
Even in sort of more abstract notions of economic versus social transactions, there are all these rules that we follow just as a matter of common sense. So if you want to sort of demonstrate the existence of these rules, try going-- next time your friends invite you over for dinner, try expressing your appreciation by leaving a tip and see how they react. And if that doesn't do it for you, try sex.
So common sense is actually an incredibly nuanced and sophisticated way of-- or a set of rules for navigating everyday complex situations. So what's the problem? The problem is that it's such a useful system for understanding human behavior that we use it to reason about human behavior even in situations that are not concrete, everyday situations.
So the kind of examples that I mentioned earlier, like managing the economy or designing a marketing campaign, planning corporate strategies and so on-- like, these are not concrete, everyday situations. They're all situations that involve large numbers of people who are quite diverse, who exist over large geographical regions. They're interacting with each other in very complicated ways, and their behavior is evolving over, often, long periods of time.
In fact, the social systems that sociologists and other social scientists try to understand are almost the sort of quintessence of complex systems. And everything we know about the dynamics of complex systems tells us that they're extremely difficult to predict and they're extremely difficult to manipulate. And so there's no reason to think that the kind of reasoning that works in concrete, everyday situations should work in these much more large-scale and complex situations. And, in fact, it doesn't.
And sociologists have actually worried about this problem for a long time. So as long ago as the 1940s, the great Columbia sociologist Paul Lazarsfeld wrote this extremely interesting essay in the American Journal of Sociology. And the essay was sort of disguised as a book review. And the book that he was reviewing was called The American Soldier. It was a study that was commissioned by the US War Department in World War II. And they got a bunch of sociologists into the army, and they interviewed about 600,000 soldiers-- an enormous study. Big data, sort of in the old-school way. And this was the first volume of a multi-volume report that described their findings.
And so Lazarsfeld gets into the review by summarizing some of the main findings of this report. And one of them was that men from rural backgrounds fared better than men from cities. And Lazarsfeld then steps back from his book review and imagines how a reader might react. And the reader thinks, you know, this is pretty obvious. You know? Men from rural backgrounds, well, they're used to hard physical labor. They're used to sleeping on the ground. They're used to getting up with the sun. You know, why do they need this sort of large and expensive study to tell me what I could have figured out using my own common sense?
And Lazarsfeld says, you know, that's a good point. Except that all the results that I told you were actually the opposite of what the study found. In fact, it was men from cities-- not rural men-- who fared better in the army.
Now, had Lazarsfeld told his imaginary reader the real results, those too would have seemed obvious for other reasons. Right? Yeah, well, of course men from cities do better in the army. You know, they're used to working in large, vertically-integrated organizations with strict hierarchies and chains of command. They're used to wearing a uniform. They're used to strict hours. Once again, the answer seems obvious.
And then Lazarsfeld gets to his real point, which is that when every conclusion and its opposite appear equally obvious-- of course, once you know the answer-- there is a problem with the concept of obviousness itself.
So fast-forwarding 60 years or so, I want to make the claim that Lazarsfeld's observation doesn't apply just to sort of ordinary explanations of individual outcomes like in that survey, but, in fact, to everything. That almost every explanation that you read about things going on in the world suffers from the same problem.
You know, if we think about things that are happening today, whether it's in the global financial system or political or military action in the Middle East, it seems really complicated. You know? There are lots of potentially-relevant factors interacting in many different complicated ways. Lots of people have lots of theories about what could happen. It's extremely hard to make decisions in these environments, to make policies. And you can sort of imagine all kinds of possible outcomes down the road.
But when we look back in history we see a very different picture. You know, if we look back at the last financial crisis, it seems totally obvious that there was this sort of massive housing bubble that was expanding in the US and in other countries.
It seems completely obvious that, you know, this bubble was being driven by shoddy lending practices and conflicts of interest between the banks and the ratings companies and securitization of all of these assets, that all of this was driving us over a cliff. That it was inevitable, given all of the practices that are so apparent to us now, that there was going to be a financial crisis and that it was going to lead to an economic crisis. You know, if only we'd listened to Nouriel Roubini. If only we'd listened to Bob Shiller. If only we had paid attention to the things that we now know are relevant, we could have avoided all of this.
And so I want to say a couple of things about the kinds of explanations, the things that seem obvious to us after the fact. The first is that we can always do this. Right? This is Lazarsfeld's point. No matter what happens, we can always sort of go and rummage through the dustbin of history and we can pull out the things that now seem relevant, and we can stitch them together into a narrative that leads inevitably to where we know we've ended up.
And the second point is that we can only do this once we know where we've ended up. And so, in a way, the kinds of explanations that we read about in the newspaper are sort of like reading a mystery novel. You're reading along and stuff is happening, and some of these things you know are relevant and some of them are just red herrings, and you don't know which is which.
But you know the author knows. Right? Because the author is sort of very carefully orchestrating everything so that at the end it will all be clear. Right? And the book, the story can only be written once the end is known. And history is very similar to that, that the kinds of explanations that purport to tell us the cause of things are really just stories. They tell us what happened, but they're not actually telling us why.
So just to be a little more specific about that I want to focus on something that we love to try to explain, which is success. Everybody loves success, and everybody loves to try to understand how to be successful. But I want to talk about the Mona Lisa, because this is one of my favorite success stories in history. And ask, why is it the most famous painting in the world?
So let me just stop for a second and ask, how many of you have been to the Louvre in Paris? OK. So how many of you have been to see the Mona Lisa? You can all just keep your hands up. Like, everybody goes to see the Mona Lisa. Right?
So I go to the Louvre to see all the people who go to see the Mona Lisa.
[LAUGHTER]
And let me ask you another question. So, as you fight your way-- or, when you eventually fight your way through this disgusting mob and you get to the front here, have you experienced sort of a sense of disappointment maybe?
[LAUGHTER]
I mean, you're only human if you think, why this? Why is this the most famous painting in the world? Well, the art critics can answer that question. You know, one theory is that it's the novel painting technique that Leonardo apparently invented, this very sort of filmy, almost semitransparent painting technique that gives it a sort of dreamy finish. Another is that the subject, Lisa del Giocondo, was a mystery for most of the existence of the painting. It was only quite recently that people figured out who the Mona Lisa really was.
Another is that, if you look carefully, there is this sort of weird, fantastical background that was apparently quite novel back then, that people didn't do that in portraits. And, of course, there's also the theory that it was just Leonardo himself, that he was a very famous man, and so anything that he did would be famous as well.
So these are all reasonable theories. But there's some troubling evidence, which is that in the very next room, about 50 yards away from the Mona Lisa is this painting, which is a painting of St. John the Baptist. And you might think that it looks a little bit familiar. It has a similar kind of gauzy finish. It has a similar kind of fantastical background. He's almost got a similar kind of enigmatic smile on his face. And the reason is that, that's Leonardo da Vinci as well.
And so is this one, 10 yards to the right, which is a portrait of a woman-- it's called Portrait of a Woman-- also by Leonardo da Vinci. And so any guesses how many people are looking at these paintings?
[LAUGHTER]
Well, there's that one guy.
[LAUGHTER]
Who is staring at the portrait of a woman, and his son is kind of looking at it but not really. So I find this very striking that, you know, all of the things that people point out as reasons why the Mona Lisa is so famous are sort of replicated in these paintings here, and yet nobody seems to care about them at all.
So why is the Mona Lisa so famous? Well, you could say, look, it's not any one of these attributes on its own, it's all of them together. And, in fact, this is exactly what the art critic Kenneth Clark says, that they combine in this sort of mysterious way to produce the supreme example of perfection.
Well, it's impossible to refute this argument. Right? Because the Mona Lisa is a unique object, and so we can't rerun history and see if it becomes famous again. The problem is that you don't need to refute this argument, because it's vacuous. Right? Really what it says is, the Mona Lisa is the most famous painting in the world because it's more like the Mona Lisa than anything else is. True, but not very helpful.
And the interesting thing about this kind of explanation is that, once you start looking for it you see it everywhere. I would say almost-- if not all, almost all explanations of success have exactly this form. They claim to be explaining why something is successful, but in the end all they do is describe it. You know? This is true of Harry Potter. It's true of Facebook. It's true of Donald Trump. It's true of Gangnam Style.
You know, you might think that Donald Trump is a sort of bombastic blowhard, but you can't help thinking that he's exactly the right kind of bombastic blowhard. And we know that because, look at how successful he is.
The problem with all these explanations is that all they really say is, x is successful because x is more like x than anything else. They're really just describing the thing that we know. They're not telling us why we know it.
So I don't want to bash stories too much. Stories are incredibly powerful. They help us make sense of the world. They bind us together in communities. They inspire us. They have all kinds of useful functions. Stories are, in fact, the original form of human explanation. And they're really the only one that comes completely naturally to us. So I don't want to rule out stories as a useful device.
The problem is that they're so powerful, they're so convincing to us, that once we have them we're tempted to treat them as causal explanations, that we use them to generalize beyond what we are claiming to describe to make predictions about other things. Right? The reason why we care about trying to explain why Harry Potter is a best-seller or why Facebook is such a successful company is because we want to uncover the general principles of best-sellers or successful companies, because we can apply those things elsewhere.
And if you don't believe me then believe George Santayana, who said, "those who cannot remember the past are condemned to repeat it." This is an explicit call to take the narratives that we get from history and use them as predictive causal explanations.
Now, going back to where common sense works-- this is not always a bad thing. Right? If you get to repeat similar situations over and over again you can, in fact, sort of learn relatively-- even quite-complex rules of cause and effect. Think about commuting to work. Right? You can-- just the experience of commuting to and from work many, many hundreds of times on different days of the week and different times of day, you can learn what traffic patterns to expect. You don't necessarily ever learn all the things that are going on in a big city, but you can learn enough to make reasonable predictions.
So it's certainly true that the kinds of stories that we tell can be refined through experience to be quite useful to us. But in really complex systems, in all the examples that I gave before-- in marketing, in business, in strategy, in politics and so on-- history never really repeats. And so we're always running into what the sociologist Robert Merton called the "law of unintended consequences." We're doing our best to infer cause and effect from the things that we've experienced in the past, but when we try to apply those rules, the next thing, the next battle that we fight is always sort of subtly different. And so we're always having these unintended consequences.
OK. So if you're still with me you're probably feeling a little depressed or discouraged at this stage. And it is, in fact, discouraging. You know, if it's true that our intuition for how the world works is so misguided, and that the stories that we rely upon to make sense of the world are so misleading, what are we supposed to do?
Well, the answer is that, as in all recovery programs, the first step of the solution is recognizing that you have a problem.
[LAUGHTER]
And so the first step in doing better in dealing with the complex social world that we live in is to recognize that our intuition is intrinsically limited. Whenever we think about why people do what they do or how to make them do something differently, we will spontaneously have intuitions-- lots of them-- and they will seem very persuasive to us. And so you have to force yourself to step back and say, I don't trust them. Even though it seems totally convincing to me, I don't trust my own explanation.
And so what do we do? Well, I think the model that we should look to is nothing other than the scientific method. And so really, when you think about the scientific method, it's really quite simple. It's a recipe for learning about how the world works when you don't have intuition, or when you don't trust your intuition. You know? In science, just as in everyday life, we start off by telling stories, except we call them hypotheses. The thing is, in science we don't trust the stories that we tell ourselves and so we test them.
So really the sort of meat of the scientific method is the testing part of the stories, is the gathering of the data, the designing of experiments. And then, almost invariably, we find that our initial hypotheses are incomplete or even wrong, and so then we have to go back and modify them, and we just rinse and repeat. And this very simple, iterative process has carried us a long way over the last few hundred years in the natural sciences.
And so what I'm advocating here is that we should import exactly the same model into the social sciences where we do have a lot of intuition. When we do have intuition it seems like, why should we do all this sort of expensive and troublesome and difficult work of doing experiments? But once you realize that your intuition is not as reliable as you think it is, then all of this work seems much more justifiable.
OK. So just to be a little more specific, there are a few general lessons that I think we can take away that can sort of help you think through some of the issues that come up in applying the scientific method to the social world. And the first is to draw a distinction between the things that are predictable and the things that are not predictable.
So, so far I've sort of emphasized that complex social systems are very unpredictable. But that's not quite true. There's a lot about human behavior that is either not random-- or even if it is random, it is predictable in some respects. So if we think about examples like web search-- when you start typing a query into a search engine, it's sort of almost magical how they can do the autocomplete. Right?
And the only reason they can do that is because there are hundreds of millions of other people who are also typing queries into search engines, and there are these sort of very stable, empirical regularities that enable them to guess what it is that you're actually looking for. And the same is true in a range of other applications, from flu trends to a lot of content on the web, whether it's optimizing the kind of content that is shown or recommended to you, or search advertising. There are lots of instances where we can never do a perfect job of predicting things, but where we can certainly do a lot better than random.
And there are, in fact, a lot of different methods that are available to you once you realize that you need to start making predictions systematically, ranging from statistical models and machine learning up to exploiting the wisdom of crowds in mechanisms like prediction markets. And it's not-- it's not clear which of these methods work the best, and there's sort of an active research program in trying to figure out under what conditions different sorts of predictions work better.
But really the main thing to avoid is relying on a single opinion. And so it's fashionable these days to talk about how experts don't know anything and they're not really good at making predictions. It's not really true, I think, that experts are necessarily worse at making predictions than other people, it's just that when we ask experts we tend to only ask one of them. And that's the real problem. Right?
So the trap to avoid is relying on a single prediction or a single opinion. It seems like an easy thing to do, but it can be difficult, especially when that expert is yourself. OK?
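The point about relying on many opinions rather than one can be made concrete with a small simulation. This is an illustrative sketch, not anything from the talk: the Gaussian noise model and all the numbers are assumptions chosen only to show why independent errors cancel when you average.

```python
import random
import statistics

def crowd_vs_individual(truth=100.0, n_experts=50, noise=20.0,
                        trials=2000, seed=0):
    """Compare the average error of one randomly chosen expert against
    the error of the crowd's mean estimate, over many simulated questions."""
    rng = random.Random(seed)
    single_err = 0.0
    crowd_err = 0.0
    for _ in range(trials):
        # Each expert's guess is the truth plus independent Gaussian noise.
        guesses = [truth + rng.gauss(0, noise) for _ in range(n_experts)]
        single_err += abs(rng.choice(guesses) - truth)      # ask one expert
        crowd_err += abs(statistics.mean(guesses) - truth)  # ask all of them
    return single_err / trials, crowd_err / trials
```

With these made-up parameters the crowd's average is several times more accurate than a randomly chosen individual, simply because the independent errors mostly cancel in the mean.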
[CHUCKLES]
And the second thing to do is to keep track. This is a very simple recommendation. It's surprisingly difficult to force yourself to do. People love to make predictions, but they're quite reluctant to be held to the predictions that they've made in the past.
So if you imagine every time you sort of feel inclined to make a prediction about something, you should write it down and you should express your confidence interval, and then you should keep track of all these predictions. And over time you will actually learn what you're good at predicting and what you're not good at predicting.
You could learn that some people are much better predictors than others. You could learn that there are different methods that are better or worse at making different kinds of predictions. And you could certainly learn that there are events that are more or less difficult to predict.
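The bookkeeping described above — write each prediction down with a probability attached, then score it once the outcome is known — can be sketched in a few lines. The class below is an illustrative sketch; the names and the use of the Brier score to measure accuracy are my own choices, not anything specified in the talk.

```python
from dataclasses import dataclass, field

@dataclass
class PredictionLog:
    """Keep track of probabilistic predictions and score them later."""
    records: list = field(default_factory=list)

    def predict(self, event: str, probability: float) -> None:
        # Record the forecast before the event resolves.
        self.records.append({"event": event,
                             "probability": probability,
                             "outcome": None})

    def resolve(self, event: str, happened: bool) -> None:
        # Fill in what actually happened.
        for record in self.records:
            if record["event"] == event:
                record["outcome"] = happened

    def brier_score(self) -> float:
        # Mean squared gap between stated probability and reality.
        # 0.0 is perfect; always hedging at 0.5 scores 0.25.
        resolved = [r for r in self.records if r["outcome"] is not None]
        return sum((r["probability"] - float(r["outcome"])) ** 2
                   for r in resolved) / len(resolved)
```

Kept over time, a log like this is exactly what lets you discover which kinds of events you predict well and which you don't.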
And one thing that you would almost certainly discover if you embarked on such an exercise is that some things really aren't predictable in any kind of useful or meaningful way. And unfortunately, these are often the events, the outcomes that we would most like to predict. You know, the next big trend in technology. The next political revolution. The next financial crisis. These are like the black swan events that Nassim Taleb talks about. And probably no matter how hard we try, and no matter how much data we collect, we'll never be able to predict these kinds of events in a useful way.
So what can we do? Well, there's a couple of things that you can do. First is that when you're strategizing or you're planning in a world where there are things that you know you can't predict, you shouldn't base your planning on being able to predict exactly those things. Right? This sounds like a-- why would anybody do that? But, of course, people do that all the time, where they think that they know what's going to happen and they put all their eggs in one basket. And if it turns out that that's not what happens, then they're in trouble.
Now there's a long history of people talking about ways around this problem, going back to the 1970s, called "scenario analysis," where the whole point is to challenge yourself to think about different possible futures, and, in fact, to think about as diverse a range of possible futures as you can. And once you have these sort of different trajectories that the world might take, the idea is to build a plan or a strategy that hedges across these different alternatives. So you look for core tendencies that happen in all of the worlds that you can imagine, and then you invest in a sort of portfolio approach, where you build different strategies or different versions of your strategy that should work better or worse in the different outcomes that you have imagined.
Now, this is an interesting idea. It's certainly worth pursuing. It's actually quite hard to do. So another approach that is becoming popular now is to sort of forget about predicting altogether, and instead just get very good at measuring and reacting. And so the idea is that you build into your strategy the ability to respond very quickly to whatever is happening in the world right now. So this is something that happens a lot in the world of the web, where we can do A/B testing in real time and we can look and see what's popular or what people are clicking on, and then we can sort of rapidly scale up the successful version.
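The real-time A/B testing mentioned here can be sketched in a few lines. This is a toy illustration, not any particular platform's implementation; the click and view counts are invented, and the decision rule is a standard two-proportion z-test.

```python
import math

def ab_compare(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is variant B's click rate reliably higher than A's?"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled rate under the null hypothesis that the variants are identical.
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical traffic split between two versions of a page.
p_a, p_b, z = ab_compare(clicks_a=120, views_a=5000, clicks_b=165, views_b=5000)
print(f"A: {p_a:.3f}  B: {p_b:.3f}  z = {z:.2f}")
# A z-score above roughly 1.96 suggests the gap is unlikely to be noise,
# so you would scale up variant B rather than keep guessing.
```

Note that this predicts nothing about the future; it only measures what users are doing right now, which is exactly the "measure and react" point.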
But one of my favorite examples is actually something from the early 1990s, which is very much pre-web. And it's in a very different industry, of fashion. So the Spanish clothing company Zara pioneered this technique which is almost unique among fashion companies of not pretending that they know which clothing styles and colors are going to be popular next fall. Instead what they do is they send out scouts into malls and public areas, and they look and they see what people are wearing, and they generate all sorts of ideas, and then they place lots of bets. Right?
They come back and they say, look, here's a whole range of possible designs and a whole range of colors, and we don't know which of these is going to be successful. So we'll make all of them but we'll make them in very small batches, and then we'll put them out in our stores and we'll actually run the experiment and we'll see which of these particular designs is selling and which are not. And then we can rapidly scale up the things that are selling and we can get rid of the ones that are not selling.
And their ability to do this depends on having a very, very fast supply chain which is able to manufacture and ship to anywhere in the world a new garment in a couple of weeks. So this is sort of the opposite of trying to predict the future. You just try to predict the present.
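The Zara strategy of placing many small bets and rapidly scaling the winners resembles what computer scientists call a multi-armed bandit. Here is a minimal epsilon-greedy sketch; the five "designs" and their demand rates are entirely made up, since the whole premise is that nobody knows them in advance.

```python
import random

def epsilon_greedy(true_rates, trials=10000, epsilon=0.1, seed=0):
    """Allocate trials across designs: mostly back the best seen so far,
    but keep exploring a fraction of the time."""
    rng = random.Random(seed)
    n = len(true_rates)
    sales = [0] * n
    offered = [0] * n
    for _ in range(trials):
        if rng.random() < epsilon:
            i = rng.randrange(n)  # explore: put out a random design
        else:                     # exploit: best observed sell-through rate
            i = max(range(n),
                    key=lambda j: sales[j] / offered[j] if offered[j] else 0.0)
        offered[i] += 1
        if rng.random() < true_rates[i]:  # did this small batch sell?
            sales[i] += 1
    return offered

# Hypothetical demand for five designs, unknown to the decision maker.
counts = epsilon_greedy([0.02, 0.05, 0.04, 0.12, 0.03])
print(counts)  # most batches end up going to the best-selling design
```

The point of the sketch is the shape of the strategy, not the numbers: early on the allocation is nearly uniform (the small batches in every store), and the observed sales, not a forecast, decide where production scales up.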
So the third general lesson is less about how to plan in a world like this, and more about how to interpret what you observe. And so I want to talk a little bit about success again. And if you ever read any of the business literature you will know that firms that are successful are usually evaluated as having really great strategies, strong leadership, and focused execution. Right? And so if you think about Apple and Steve Jobs, you know, we have this sort of brilliant, visionary leader who's sort of obsessive about detail and quality control. And, you know, that's why Apple is successful.
And then if you look at Yahoo where I used to work, people say Yahoo is not a successful company. That's because they don't have any vision. It's because their leadership is weak. It's because their execution is hopeless.
And so the common sense interpretation of this literature is that it's the process that is driving the outcome. That good processes drive good outcomes, bad processes drive bad outcomes. What could be more obvious than that?
But, as the management scientist Phil Rosenzweig points out in a fascinating book called The Halo Effect, he shows that actually it's really the opposite that is true. That first we observe the outcome. We see whether a company is successful or not. We see whether a person is successful or not. And then we simply impute to whatever they're doing the qualities that common sense tells us should be there.
So if they're successful they must have good leadership. If they're successful they must be smart. If they are unsuccessful they mustn't have worked hard. They mustn't have taken advantage of their opportunities. This is what Rosenzweig calls the "halo effect." And it's an effect that-- a psychological bias where the observable attribute casts a halo over the unobservable attribute. So tall people or good-looking people are routinely judged as being more intelligent than shorter or less-good-looking people. Right? Even though there's no known empirical correlation between looks and intelligence, just because we can see one thing and it's good we assume the things that we can't see are good as well. And the same thing applies to success.
Now, this is a real doozy of a psychological bias. Right? This is really a hard one to deal with. You can recognize that it exists, but it's very hard to know what to do about it. Because if you can only evaluate a process, if you can only evaluate talent, based on the outcome, what are you supposed to do?
Well, here I think we can look to sports, and in particular a sport like baseball is great for evaluating performance. It's great for evaluating talent because in any given season a batter will have hundreds of at-bats, and over the course of a career many thousands. And each one of these at-bats is an independent test of skill. And so if over the course of hundreds or thousands of independent trials somebody performs better than somebody else, you can say, quite reliably, that person is a better performer.
The problem is that outside of sports, and particularly in many of the examples that I've talked about today, like in corporate strategy or in politics or in policy, you really just get one attempt. And in just one trial it's very, very hard, maybe even impossible, to differentiate a good or a bad strategy from good or bad luck. It's entirely possible that a great strategy can fail and that a lousy strategy can succeed. And so there's really very little that you can infer from a single observation.
So what are you supposed to do? Well, the easy answer is that, if you can you should try to evaluate people or companies or procedures or whatever it is that you're looking at over many independent observations. And it's important that they be independent observations, because we also know that the early successes can lead to better opportunities, and those better opportunities make later successes more likely. And so it's difficult in life to ensure that things are independent. But, if possible, if you have many independent observations you're in good shape.
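The baseball point can be made concrete with a toy simulation. Here a "skilled" actor succeeds 55% of the time and an "average" one 50% of the time; these rates are invented for illustration. With one trial, the outcome tells you almost nothing; with hundreds of independent trials, skill shows through reliably.

```python
import random

def prob_skill_wins(p_skilled=0.55, p_average=0.50, trials=1, runs=5000, seed=1):
    """Fraction of simulated careers in which the skilled actor outscores
    the average one (ties counted as half a win)."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(runs):
        s = sum(rng.random() < p_skilled for _ in range(trials))  # skilled's successes
        a = sum(rng.random() < p_average for _ in range(trials))  # average's successes
        score += 1.0 if s > a else 0.5 if s == a else 0.0
    return score / runs

print(prob_skill_wins(trials=1))    # one shot: barely better than a coin flip
print(prob_skill_wins(trials=500))  # many independent trials: skill shows through
```

This is why a single corporate strategy, election, or policy outcome is so uninformative: one observation from a noisy process simply cannot separate a good process from good luck.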
Now, in the likely situation that you only have one such observation, I think that really the way to think about this is to simply resist the temptation to only pay attention to the outcome. And we really have a tendency to do this. We're impressed by fancy titles. We're impressed by success. We're impressed by wealth.
But I think over the course-- certainly many of us have experienced this and many of you will experience this, where you meet someone who has all of these things and it's a little bit like seeing the Mona Lisa. You think, why is this person so successful? And rather than just sort of shrugging your shoulders and assuming that somebody else has done the hard work of evaluating them and that all of this is deserved, I would encourage you to hold onto your skepticism and at least to ask for evidence. Right? You are not always in a position-- you don't always have the expertise to make that kind of evaluation yourself. But you can at least ask the question.
And likewise, you'll also meet people who are not successful, but they're extremely smart. They're extremely hardworking. They're extremely proficient at whatever they're doing. And you also wonder sometimes, why is this person not more successful? And the answer is, maybe they should be. Right? Maybe they really deserve that. And so I think that the response to the old adage, if you're so smart why aren't you rich, is not just that there's other kinds of things that people care about other than being rich. It's that success is success and talent is talent, and they're not the same thing. And they're not even necessarily highly-correlated.
So just sort of finishing up with a quick look to the future. You know, sociology has been around for about 100 years now. And for most of this time sociologists have been aware of much of what I'm talking about.
They've long been suspicious of common sense, and they have long understood that the kinds of problems that they are trying to understand are, at some level, dependent on the interactions between large numbers of people. And the problem that they've always had, and one of the reasons why I think sociology has had difficulty as a science, is that it's been, for most of this history, impossible to observe these interactions at any scale. If you think about trying to understand how our society evolves or how public opinion evolves, you're thinking about the interactions and the behavior of hundreds of millions of people, potentially, over long periods of time. Just observationally that was way too heavy a lift.
And what's so exciting about what's happening now in the last-- sort of the technological revolution that we've gone through over the last 15 years or so-- is that the internet is slowly starting to lift this veil, that it is slowly beginning to give us the ability to actually measure the interactions and the actions of hundreds of millions of people, sometimes in real-time.
Michael mentioned that Steve and I worked on the small world problem back in the late 1990s. What he didn't say is that, when we first started thinking about the problem we thought, well, you know, the obvious way to solve this problem is just to map out the social network of everybody in the world and count how many links from one person to another. Simple. And then we thought, well, that's impossible. And, in fact, not only is it impossible, it's inconceivable. Like, there's no way that will ever happen. So instead we developed this elaborate work-around of trying to estimate what the answer should be. So it's just incredible to me that less than 15 years later this impossible, you know, inconceivable thing exists. And it's called Facebook.
And a couple of years ago the data science team at Facebook did the totally obvious thing that we thought about back in 1995, where they just constructed the network of everybody in a good chunk of the world, and they counted how many links from one person to another. And sure enough, they found that everybody is connected by about six links. So, you know, that was gratifying for us in terms of our little toy theory from many years ago, but even more-so it was just revelatory in terms of what is now possible. And it's led some of us to sort of speculate that perhaps the internet will be to sociology what the telescope was to physics. This device that makes the previously-invisible visible, and in so doing drives all kinds of new scientific discoveries.
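What the Facebook team did — build the graph, then count links between people — is conceptually just breadth-first search over a friendship network. Here is a toy version on a Watts-Strogatz-style ring with random shortcuts; the sizes are tiny and illustrative, but the qualitative result is the small-world one: a handful of shortcuts collapses the average distance.

```python
import random
from collections import deque

def small_world_graph(n=1000, k=4, shortcuts=50, seed=2):
    """Ring lattice (each node tied to its k nearest neighbors) plus random shortcuts."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    for _ in range(shortcuts):
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def avg_path_length(adj, sources=50, seed=3):
    """Average shortest-path length, using BFS from a sample of source nodes."""
    rng = random.Random(seed)
    total = count = 0
    for s in rng.sample(list(adj), sources):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

g = small_world_graph()
print(avg_path_length(g))  # far shorter than the same ring with no shortcuts
```

At Facebook scale the engineering is vastly harder, of course, but the measurement is the same idea: run searches outward from sampled people and average the path lengths.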
And without going into any details, there are so many examples now in this field that's emerging called "computational social science" where we're using the existing platforms that were, in many cases, developed for other purposes to do very interesting and novel social science. Not just Facebook, but using Twitter to measure influence and attention and even mood, some work that Michael has done.
We can do very large-scale lab-style experiments using Amazon's Mechanical Turk. We can mine search logs to make predictions about movie box office revenues or flu trends. We can map out entire organizations with email logs. We can estimate people's preferences in romantic relationships, including their racial preferences, using data from dating sites. We can run very large experiments on Facebook, not without some controversy. We can even, these days, track the diffusion of information through multiple platforms by using URL tracking.
So there's just an enormous amount of activity that's going on. I'm sure many of you or some of you here in the room are part of this. It's a very exciting time to be doing social science.
I think that it's important to remember that the outcome of all of this is not going to be Newton's laws of motion. Right? We're not going to have something like social physics. As much as some people would like that to be the case, the world, the social world is just too messy and too heterogeneous, I think, for us to ever have a social science like physics.
But that's OK. You know, I mean, the point of science is to solve problems, not to look like physics. And I think that what I am optimistic about is that this revolution that we're going through right now has already revolutionized social science and will continue to do so. And with luck, over the next sort of near-future, the next generation or so, we'll see it have an impact in the world of business and maybe even policy.
So with that, thank you very much. And I hope you have some time for questions.
[APPLAUSE]
Yes.
AUDIENCE: Thanks for the great [? talk. ?] [INAUDIBLE] There are still some examples in the past that [INAUDIBLE] look at them and see how they were successful. For example, soon after the French revolution, the French Republic and Napoleon were very successful in their military campaign [INAUDIBLE] when he came up with all the policies that transformed Germany from [? feudal ?] community to a huge empire. Or Brits-- what they did over 19th century with [INAUDIBLE] prosperity. So when I look at these examples specifically, it's possible to predict or come up with [? bright ?] policies to be successful in the short-term, or [? was it just ?] [? one? ?] Or, what do you think?
DUNCAN WATTS: Well, I'm sure that there are many people who have lots of reactions to the notion that the British Empire was sort of a good thing for the world.
[LAUGHTER]
I know, personally, that the founding of Australia was one of the worst ideas that anybody ever had. I don't know-- there's a great book called The Fatal Shore, which is by the Australian art critic-- actually, he was an art critic, but he became an historian-- Robert Hughes, about the founding of Australia. And it is just completely inexplicable how badly-planned this expedition was. You know? They were basically sending 1,000 unskilled petty criminals to the moon, effectively. Only, at least you could see the moon from England. But you couldn't-- it was about a 12-month round trip to Australia back then. And it was just-- I mean, they should have all died. It should have been a total disaster. Instead, it worked out great. You know? We have this lovely country called Australia.
[CHUCKLES]
And so you think, oh, that was a great success. But it's hard to sort of see how you could sort of think positively about the planning that went into it. So I don't know the answer to your question. Of course, it's true that some things work. But I don't think that we have anything like a sort of scientific basis for saying, you know, that they were predetermined to work. So I-- Yeah. I think I would go with luck, actually.
[CHUCKLES]
Yeah?
AUDIENCE: So you talked about the difficulty of identifying causes. And one of the things I noticed about the things [? they ?] [? must have ?] thought were very hard to predict, but they all seem to involve feedback loops, which kind of have the effect of amplifying noise. [? Don't you ?] [? think? ?]
DUNCAN WATTS: I think all complex social systems involve feedback loops. Right? I mean, you think about any sort of economic, political system, you know, you have self-fulfilling prophecies. You have perception driving reality. You have people reacting to what other people are doing, but those people are, in turn, reacting to what they think you were going to do. So there's sort of strategic behavior as well.
So I think that there are sort of feedback loops rife in all of these systems. Yeah. Absolutely. And I think that's part of what makes them so unpredictable. Yeah.
AUDIENCE: It seems like your advice here is to try to damp out the feedback loop.
DUNCAN WATTS: Well, I don't know that there's so much that one can do about the feedback loops. Right? I think what I'm arguing for is maybe a little more modesty. Like, what you might call sort of epistemic modesty. Modesty about what we can know. Right? We really crave answers. Right? We want to know, like, what caused the financial crisis? Why was Harry Potter the most sort of successful fiction book of all time? Why did Christianity succeed? Right?
Like, I don't think we can know the answers to these questions. Right? I don't think that they are answerable. Right? But that doesn't stop us from wanting answers, and so we come up with them. And in so doing we convince ourselves that we know things that we don't really know. And that gives us the confidence to go and make plans that we have no reason to believe we can make. Right?
So you might say, well, what else are you going to do? Right? And, you know, it's nice to be a scientist. You don't have to sort of make these kinds of decisions. I wouldn't like to be the President of the United States right now. That would be a terrible situation to be in, where you kind of have to do something, but you probably have no idea what effect your decisions are going to have.
So I don't think that-- there are certainly situations where you really have to act, you have to make decisions, and you have to do so in a state of incomplete knowledge, and without the kind of certainty that you would like. And that is all inevitable, but I think that what I would argue for is just a clearer sense of what it is that you're doing while you're doing it.
And that's a hard thing to justify. I mean, I think if you think about Obama's "don't do stupid stuff" strategy, I would say that's very consistent with what I'm talking about. Right? That you're trying to avoid sort of overreacting because you're aware of the law of unintended consequences. But even that has unintended consequences. Right? Because then everybody gets mad at you for not acting like the kind of president that you get in the movies, where you sort of get up and you make some strong statement and you go and sort of go beat up the bad guys.
But, of course, we know that when you go to beat up the bad guys, other bad things happen too. So it's really-- I'm not sure that anything that I have really sort of worked on over the years points the way to solutions to these kinds of problems. It's more just a way of thinking about the intractability of these kinds of problems. And it might not seem like a great, satisfying solution, but I think, again, just being aware of how the world works is always a good place to start.
Yes?
AUDIENCE: Could you say a bit more about the connection between common sense and obviousness? The examples you gave for common sense might be mistaken for shared social values. I didn't wear a swimsuit because I have a shared social value about how everybody dresses and expects me to dress. And obviousness you talked about in terms of a posteriori rationalization, but the two things seem really quite independent. Yet it seems that you would like me to believe there's a connection from one to the other.
DUNCAN WATTS: The connection-- so, right. So common sense. Well, common sense is actually kind of a poorly-defined concept. It means at least two things. Right? One is, what is self-evident to any sort of sentient being. Right?
So, everybody knows that chairs don't talk. Right? That is common sense. Right? I'm pretty sure that not only do we all agree on that, but that there will never be a counterexample. Right? So there are certain things that are common sense in that sense. Right?
But the other meaning of common sense is, what is knowledge or beliefs that are commonly shared. Right? And a lot of that is highly sort of historically and culturally and socially-specific. Right? And the fallacy is that we treat these two things as the same, that we talk about things that we just happen to agree upon as self-evidently-true. Right?
So, yes, there's nothing wrong with everybody agreeing to not wear swimwear to the office. Right? That's sort of a reasonable kind of coordination exercise. But then it gets sort of reified into this-- you know, that there's sort of an external order that says that it's bad to wear swimsuits to the office, that says women aren't good at math, that says, you know, whatever crazy thing people have believed as self-evident over history. Right?
So it's this sort of commingling of things that are truly self-evident with things that just we happen to think are true that causes a lot of the misunderstanding. And it's very hard to resolve disagreements about what is common sense. Right? Because if I think that what I believe is self-evidently true to any reasonable person and you believe something different, you must be crazy. Right? That's the only explanation that I have. And you think the same thing about me. So there's no room for us-- there's no room for us to have a rational argument about who is right and who is wrong. Because self-evident truths cannot be argued about. Right? So that's sort of the common sense side of things. Right?
Now, the-- where it connects to the "everything is obvious" part, is that because we have such strong intuition that comes from our common sense reasoning, whenever we see an outcome, we can create this seemingly-self-evident causal story. Right? So once we know the answer it's very-- you know, if we see a cat walking along a wall and a nearby tree branch starts waving, we don't think the cat caused the tree branch to move around. Right? Because we have good sort of theories of the physical world that say, no. That can't happen. Right? There has to be some other explanation.
But in the social world we can connect almost anything with anything else. Right? We can always tell ourselves a story that seems very plausible to us. And because we're so convinced of our explanations, we just go ahead and believe that. Right? And then you come up with a different story and then I just say, no, no, you're wrong and I'm right. And then we just argue about it. Then whoever's the better arguer wins. Right? And, you know, that's sort of how we resolve things in the social world.
And certainly in the business world-- I've certainly been in strategy meetings at large companies, and it's amazing to me that everyone just sort of-- it's clearly impossible that anybody actually knows what's going to happen. Right? But that doesn't stop people from being extremely confident about what they think is going to happen. And so we just have this sort of argument. And then somebody wins, and that's what we do. Right?
And if they-- like the example over here, if it works out well then you claim to be a genius. And if it doesn't work out well, then you blame somebody else. Right?
[LAUGHTER]
So I do think there is a very-- it's not that we couldn't do the "everything is obvious" thing in other areas of knowledge. Right? We could say, oh, in biology or in physics of course it's like that because now I know the answer. But there we have much better methods for actually-- for rejecting false causes. We have better theories about what should happen. Whereas in the social world our intuition has such free rein that the problem is much worse, I would say. So it's not that you can't do this elsewhere, but it is much more prevalent where common sense is so dominant. Does that answer the question? Great question.
Yes?
AUDIENCE: I'm a little concerned that your idea that the internet will open [INAUDIBLE] as well as [? subject ?] of uncertainty principle, that by having all of this information available we're destroying some of the independence that you're trying to get with some of your physical science interpretations of things. I'm sure in the 18th century there were 10 most-popular paintings in the world in different communities, but as the world gets more connected it solidifies into one. And it's going to be very hard, I think, to test it [INAUDIBLE] when everybody knows about what everybody else is doing.
DUNCAN WATTS: Yeah. No, it's-- you're absolutely right that modern technology is not just providing us with better ways of measuring things or observing things that we couldn't otherwise observe. It's also changing the nature of the things themselves. Right? And so there's sort of an uneasy tension between these two things that are both happening.
And you could-- many people of course study the internet precisely because they want to understand the impact that it's having on society. You know? How is it changing our notions of privacy? How is it changing our notions of intellectual property? How is it changing how we communicate? Or even altering-- and certainly technology has changed the dynamics of cultural markets, like you suggested. I mean, I think Bob Frank likes to point out that many years ago every town had an opera house, and so every town had a tenor, so there were jobs for thousands of tenors. But now we just have three of them. Right? Because everyone can listen-- through perfect reproductions-- can listen to the three best.
So absolutely technology is changing the thing that we're also trying to study with technology. And that's one of the reasons why it's not going to be like physics. Right? Is that atoms don't care that-- Heisenberg's uncertainty principle sort of aside-- atoms don't care that you're looking at them. Right?
[CHUCKLES]
You know, ants are not bothered by the fact that they're understood by somebody else. Right? They don't get upset about it and then start behaving differently. But people do. You know, when you think about advertising, it's sort of-- in a way it's kind of an applied-- you know, it's applied psychology. Right? Applied mass psychology.
And so there's this sort of arms race between advertisers who are trying to trick people into wanting things that they might not otherwise want, and people sort of getting wise to the advertisers. So all of that is true. I don't think that there's a sort of end goal, that there's a point where we're going to have figured it out. Right?
What I do hope is true is that we will just get better. Right? That we will-- and I think that once you start looking at this data and once you start doing experiments, and once you start testing your hypotheses, at the very least you realize how lousy your intuition is. Right?
And, you know, this is a recurring experience for us, where every time we think we know what's going on we go and do something new and we discover that it's not quite what we expected. And so I think it's unlikely that we're going to sort of turn sociology into an engineering-style science where we can sort of manipulate networks and make some outcome magically happen.
And I'm not even sure that-- well, actually, I'm pretty sure that we wouldn't want that to happen. Right? That sounds like social engineering, which has all sorts of scary connotations. But I do think that we can nevertheless make a lot of progress in understanding sort of the fundamental forces that drive social outcomes. And that's useful in and of itself.
Everybody ready for some wine and cheese? Thank you.
[APPLAUSE]
Common sense is always a good thing, isn't it? Maybe not. Although common sense is extremely useful for dealing with everyday problem solving, when applied to the kind of large-scale problems that arise in government, business, policy, and marketing, it can suffer from systematic failures.
Duncan Watts, PhD '97, principal researcher at Microsoft Research and A.D. White Professor-at-Large (2013-2019), gave a public lecture at Cornell Sept. 9, 2014. Watts is considered to be among the vanguard in the area of network theory.