What’s Wrong With Common Sense

A “Techwise Conversation” with social network theorist Duncan Watts


Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”

There’s a joke told among mathematicians. I heard it from the Polish logician Andrzej Mostowski; it’s about the University of Texas topologist Robert Lee Moore, although I’m sure he’s not the only one it’s ever been told about.

At the blackboard one day, Moore gets to a certain point in a proof and says, “The rest of the proof is obvious,” and then starts to move on to the next theorem. One student, the best student in the class, raises his hand and says, “Professor Moore, I don’t see that the rest of the proof is obvious.”

Moore looks at the student, looks back at the blackboard, and says, “Well, as you can see—. Well. As you can—. Hmm. Ahem. Excuse me.” And then he goes back to his office. He’s pulling books down from the shelves...10 minutes go by, 15…. Finally, he comes back to the classroom, and says, “Yes. As I thought. The rest of the proof is obvious.” And then he goes on to the next theorem.

The concept of obviousness is a slippery yet important one. No one knows that better than my guest today. He’s the author of the provocatively titled book Everything Is Obvious: Once You Know the Answer, published last year by Crown Business.

Duncan Watts started out the dot-com era as a mild-mannered sociologist at Columbia University, here in New York, but then went on to do groundbreaking research at Yahoo Labs, because that’s where the data was. In May, Microsoft raided Yahoo Labs to staff up its new New York research center, and Watts was a prime catch.

He’s also the author of another notable book, Six Degrees: The Science of a Connected Age, published in 2003. He joins us by phone.

Steven Cherry: Duncan, welcome to the podcast.

Duncan Watts: Thanks, Steven. It’s great to be here.

Steven Cherry: You take up some interesting questions in the book. For example, why is the Mona Lisa the most famous painting in the world?

Duncan Watts: Well, it’s a great question, and one that I spend a fair bit of time talking about in the book. It clearly is the most famous painting in the world. If you’ve ever been to the Louvre, and I assume that many of your listeners have, you’ve probably stood in front of the Mona Lisa at some point and wondered to yourself why this is the most famous painting in the world, because when you get there, it seems somewhat disappointing. Now if you listen to the experts, the art critics, they will tell you that there are all sorts of attributes, not immediately obvious to a naive viewer, that explain why the Mona Lisa is so special. They’ll talk about the innovative painting technique that da Vinci invented to achieve that dreamy kind of finish; the fantastical background behind the subject, which was quite unusual back in those days; the mysterious nature of the subject herself (we now know it’s Lisa del Giocondo, but that was not known for many years); of course, the famous enigmatic smile; and the identity of the artist himself, the fact that he was also famous.

But what is interesting is that when you wrap all these things together and you say the Mona Lisa is famous because it has all of these features, really all you’re doing is saying the Mona Lisa is the most famous painting in the world because it’s more like the Mona Lisa than anything else is.

And this sort of vacuous-sounding statement actually turns out to be rather typical of the kinds of explanations that we give, particularly when we’re trying to explain success. We often see that something is successful and we ask why is it successful. And then when we give what we think is an explanation, it turns out it’s really just a description of the thing itself. And this is true of other kinds of famous entities like Harry Potter or Facebook or Donald Trump or, you know, many other examples that we try to explain.

So one of the themes of the book is that the explanations we give are not actually explanations at all, in the sense of explaining cause and effect, but really just descriptions of what we know has already happened.

Steven Cherry: And I guess a hallmark of those stories is that the explanation is not really generalizable. It doesn’t tell us how to make a successful social network or how to make a really famous painting.

Duncan Watts: Well, that’s one consequence of it. It’s absolutely true that when all you’re really doing is describing an outcome, the story you’re telling, which seems like a causal story, doesn’t generalize to anything else. And so in that sense it’s not very helpful. But I would say something even stronger: it doesn’t even help you understand the thing itself.

And what do I mean by that? Well, there’s this wonderful concept called the halo effect that originally comes from psychology, where it’s known that certain observable attributes of an individual, like your height or your looks, exert a kind of halo effect over your other attributes. A few years ago the management scientist Phil Rosenzweig wrote a great book called The Halo Effect, where he talks about how this very same effect shows up in evaluating companies as well. It’s almost always the case that successful companies are evaluated as having great leadership and visionary strategy and really focused execution, and unsuccessful companies are evaluated as having the opposite of those things. And the commonsense view is that the process leads to the outcome, that good processes lead to good outcomes and bad processes lead to bad outcomes.

And what Rosenzweig argues quite effectively, I think, is that really it’s the other way around. First we see that a company is successful, and then we decide that its strategy must have been visionary, even if we probably wouldn’t have guessed that to begin with.

So these explanations that we provide, which make us feel like we understand the world, are really just stories that make the world seem more sensible than it really is.

Steven Cherry: A couple of weeks ago I quoted on the show the famous maxim that it’s hard to make predictions, especially about the future.

Duncan Watts: I think it was Poincaré who said that.

Steven Cherry: There’s like a dozen people who are credited with it. There’s actually a web page that lists all of them.

Duncan Watts: Oh, really?

Steven Cherry: Yeah, like more than a dozen, including Yogi Berra. So Poincaré and Yogi Berra.

Duncan Watts: Right. I think Poincaré got there before Yogi Berra did.

Steven Cherry: Probably. But could he hit a curve ball, you know? So I mean we’re kind of obsessed with prediction, though, right? I mean the 2012 presidential race, which we did a podcast with a colleague of yours. Whether to buy life insurance, what the next big musical hit will be. Your view is that most predictions are easy, just not the ones we care about.

Duncan Watts: Well, I’ll say a couple of things about prediction and explanation. It’s certainly not all bad that the stories we tell are not predictive. It’s very important that we feel like we understand the world. If we didn’t, it would probably be very hard for us to get out of bed in the morning and actually do anything.

The problem is that when we convince ourselves that we understand more about the world than we really do, we go on to make other kinds of mistakes.

So in the book I try to make a distinction between the sorts of predictions that we can make and the sorts of predictions that we can’t. To oversimplify just a little bit, the kinds of predictions that we can make are predictions where we have a lot of data about similar kinds of things in the past and where historical trends generalize well into the future.

Examples of this sort of thing come up a lot in the online world, where we have an enormous amount of data about human behavior, whether it’s search behavior or browsing behavior. This is why, when you start typing a query into a search engine, it can automatically complete your query. It’s sort of uncanny how often it gets it right, and the reason is that there are millions of other people who have typed in search queries before you. Very frequently, when they typed in the first two words that you’re typing in, they were looking for the same thing. So there’s a certain predictability in human behavior.
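To make that concrete, here is a minimal sketch of frequency-based query completion. The query log, and the idea that completion simply ranks past queries by popularity, are illustrative assumptions, not a description of any real search engine’s internals.

```python
from collections import Counter

# Hypothetical log of past queries (a real system would have millions).
query_log = [
    "weather tomorrow", "weather today", "weather radar",
    "weather tomorrow", "wedding venues", "weather tomorrow",
]
counts = Counter(query_log)

def complete(prefix, k=3):
    """Suggest the k most frequent past queries that start with prefix."""
    matches = [(q, n) for q, n in counts.items() if q.startswith(prefix)]
    matches.sort(key=lambda qn: -qn[1])
    return [q for q, _ in matches[:k]]

print(complete("wea"))
# -> ['weather tomorrow', 'weather today', 'weather radar']
```

The prediction works only because the future (your query) looks like the past (everyone else’s queries), which is exactly the condition Watts says breaks down for the predictions we care about most.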

But we want to know exactly what’s going to happen. When I tell you that there’s a 60 percent chance of rain tomorrow, what that really means is that on six out of ten days like tomorrow, it will rain. And when the weather forecast says that, it’s actually pretty accurate; forecasts are pretty reliable. But you don’t really care about what happens on six out of ten days; you want to know what’s going to happen tomorrow, right?
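That reading of a forecast is a statement about calibration: among all the days on which the forecaster says 60 percent, it should rain on about 60 percent of them. A minimal sketch, with simulated weather standing in for real data:

```python
import random

random.seed(0)

# Simulate 10,000 days on which a well-calibrated forecaster says
# "60 percent chance of rain." Rain should occur on roughly 6 in 10.
forecast_prob = 0.6
days = 10_000
rainy_days = sum(random.random() < forecast_prob for _ in range(days))

print(f"forecast {forecast_prob:.0%}, observed {rainy_days / days:.1%}")
# The forecast is reliable in aggregate, yet it still cannot tell you
# whether it will rain on any particular tomorrow.
```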

Just like when I say there’s a 60 percent or a 52 percent chance of Obama winning the presidential election, it quickly gets into the realm of philosophy what that actually means, because there are not going to be a hundred such elections of which Obama will win 52, right? There’s only going to be one, and really all you care about is who’s going to win it.

The other problem is that many of the things we really care about, like the next blockbuster company or the next hip product or the next financial crisis or the next political revolution, are events that do not conform to historical patterns. So even if we were trying to make probabilistic predictions about these sorts of events, we wouldn’t do a very good job, precisely because they’re not consistent with historical patterns. When we look back at the failure of a number of mathematical models in the last financial crisis, a big part of the problem was that the models had been trained over a period of time, a couple of decades, during which housing prices had only ever gone up. So they simply weren’t trained to predict a massive drop in housing prices, because all the historical data they had had never shown that sort of event, and so they didn’t place any weight on that kind of event.
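A toy illustration of that training problem, with made-up numbers rather than any actual model from the crisis: fit a trend to prices that have only ever risen, then watch it miss a drop entirely.

```python
import statistics

# Hypothetical housing-price index that only ever drifted upward.
history = [100 + 2 * t + (t % 3) for t in range(24)]  # 24 months of gains

# Naive model: extrapolate the average month-over-month change.
avg_change = statistics.mean(b - a for a, b in zip(history, history[1:]))
predicted_next = history[-1] + avg_change

actual_next = history[-1] * 0.7  # an unprecedented 30 percent crash

print(f"model predicts {predicted_next:.1f}, actual turns out {actual_next:.1f}")
# Trained only on rising prices, the model assigns no weight to a crash,
# which is the failure Watts describes.
```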

Steven Cherry: So, yeah. You looked, for example, into the question of big musical hits. What did you find?

Duncan Watts: So one of the experiments that we have done over the years looks at how social influence affects outcomes. When we think about why people like a certain song or a certain band, for example, in the music realm, we tend to focus, again, on the attributes of the band. We think, well, I personally like AC/DC because I like that style of Australian hard rock, and so on.

All of this social consumption and social influence has an interesting consequence for cultural markets, which is that they experience a phenomenon we call cumulative advantage, otherwise known as the Matthew effect. That phrase comes from the Gospel of Matthew, where, in effect, it says that the rich shall get richer and the poor shall get poorer.
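Here is a minimal simulation of cumulative advantage, in the spirit of Watts’s music-market experiments but with invented parameters: start the songs off identical, and let each new listener choose in proportion to current popularity.

```python
import random

random.seed(1)

# Hypothetical market: 10 identical songs, each seeded with one download.
downloads = [1] * 10

# Each new listener picks a song with probability proportional to its
# current download count ("rich get richer").
for _ in range(5_000):
    i = random.choices(range(len(downloads)), weights=downloads)[0]
    downloads[i] += 1

print(sorted(downloads, reverse=True))
# A few songs capture most of the downloads despite identical "quality,"
# and which songs win changes from run to run (try a different seed).
```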

The original paper on self-fulfilling prophecies was written by the sociologist Robert Merton, who also coined the phrase “the Matthew effect.” And the example he used to motivate the whole phenomenon of self-fulfilling prophecies was a bank run. You can have a bank that is actually sound but is subject to some sort of uncertainty; maybe people think it’s close to being insolvent. Rumors start to swirl, people get worried, and they go and take their money out of the bank. And because banks always have less in their reserves than the sum of their deposits (the whole point of a bank is to invest that money in long-term, illiquid assets), they don’t actually have enough money to reimburse all their depositors at once. So the bank can go out of business just because people think it’s going out of business.

And you can flip from one equilibrium to the other on a dime. So it really is very unsettling how much of this seemingly objective world of finance turns out to be driven by social influence and the dynamics of belief that we see in experiments like this.
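To see the two equilibria, here is a minimal threshold-model sketch of a bank run. The model and its parameters are illustrative assumptions, not taken from Merton or from Watts’s book: each depositor withdraws once the fraction already withdrawing exceeds a personal panic threshold.

```python
import random

random.seed(2)

# 1,000 hypothetical depositors with normally distributed panic thresholds.
n = 1_000
thresholds = [random.gauss(0.35, 0.1) for _ in range(n)]

def equilibrium(initial_panic):
    """Iterate reactions until the withdrawing fraction stabilizes."""
    withdrawing = initial_panic
    while True:
        new = sum(t < withdrawing for t in thresholds) / n
        if abs(new - withdrawing) < 1e-9:
            return new
        withdrawing = new

print(equilibrium(0.01))  # a small rumor dies out: almost nobody withdraws
print(equilibrium(0.30))  # a larger scare cascades: nearly everyone does
```

The same bank, with the same fundamentals, ends up in either equilibrium depending only on what depositors initially believe, which is the self-fulfilling prophecy Merton described.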

Steven Cherry: So I guess this is getting close to the heart of your critique of common sense. In the book you say, “Often common sense turns out to suffer from a number of errors that systematically mislead us.” I don’t know about other people, but while I don’t mind being wrong, systematically wrong sounds bad. What’s wrong with common sense?

Duncan Watts: Well, common sense is a wonderful form of human intelligence that is ideally adapted to solving concrete, everyday situations. When we use common sense appropriately, it’s to think about, say, what I should wear when I go to work in the morning versus when I go to the beach, right? This is typically not something you really have to think about, because it’s just common sense. But it would be very easy to screw up. If somebody showed up in the elevator in their bikini one day, you would think that was ridiculous, but it seems perfectly normal when you’re on the beach. It doesn’t really occur to you that that’s even a decision you have to make, because it’s just common sense.

The problem is that common sense is so good at helping us solve everyday problems and everyday interactions with our fellow human beings that we treat it as a universal system of reasoning. We’re tempted to use it to reason about human behavior in situations that are not everyday, concrete situations.

So if you think about, you know, corporate strategy or economic planning or development planning or even designing a marketing campaign or making a prediction about the next election, these are not concrete, everyday situations. They’re situations that involve large numbers of people, sometimes millions or hundreds of millions of people who are all very different from each other, who are all spread out over time and space and who are all interacting with each other in often very complicated ways.

These kinds of systems have all the hallmarks of what we call complex adaptive systems. And everything we know about complex systems tells us that they are very hard to predict, very hard to manage, and very hard to manipulate in any kind of deterministic way. And yet we are tempted to use our common sense to think about how they behave anyway. From a scientific point of view, that’s crazy: there’s no reason to think that your intuition about how people behave should help you predict how millions of people are going to behave in these complex situations. But our brain, our intuition, our commonsense reasoning plays these tricks on us, where it effectively reduces a very complex situation to a simple one, and then we make a decision about that.

So recently there have been all these arguments about the debt crisis and how the U.S. government should respond to this long-term debt problem. And it’s amazing how frequently politicians and other people will invoke analogies about a family or a single person.

But this is a ridiculous analogy, because there are so many differences between the debt of an economy and the debt of an individual; in particular, much of the debt in a huge economy like the U.S. economy is money that’s owed between parties in the same economy. So it’s a completely broken analogy, and yet it’s really tempting for us to use it, and to reason about government policy based on our own experience of everyday situations. And this is the sort of mistake that I’m warning against.

This is the first of a two-part interview with Microsoft researcher Duncan Watts, author of the book Everything Is Obvious: Once You Know the Answer.

For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.

This interview was recorded 31 July 2012.
Audio engineer: Francesco Ferorelli
Read more “Techwise Conversations” or follow us on Twitter.

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.
