Episode 266: Prof. Cass Sunstein: Practical Reason in Ordinary Life
Cass R. Sunstein is currently the Robert Walmsley University Professor at Harvard. He is the founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law School. In 2018, he received the Holberg Prize from the government of Norway, sometimes described as the equivalent of the Nobel Prize for law and the humanities. In 2020, the World Health Organization appointed him as Chair of its technical advisory group on Behavioural Insights and Sciences for Health.
From 2009 to 2012, he was Administrator of the White House Office of Information and Regulatory Affairs, and after that, he served on the President’s Review Board on Intelligence and Communications Technologies and on the Pentagon’s Defense Innovation Board. Mr. Sunstein has testified before congressional committees on many subjects, and he has advised officials at the United Nations, the European Commission, the World Bank, and many nations on issues of law and public policy. He serves as an adviser to the Behavioural Insights Team in the United Kingdom.
Mr. Sunstein is author of hundreds of articles and dozens of books, including Nudge: Improving Decisions about Health, Wealth, and Happiness (with Richard H. Thaler, 2008), Simpler: The Future of Government (2013), The Ethics of Influence (2016), #Republic (2017), Impeachment: A Citizen’s Guide (2017), The Cost-Benefit Revolution (2018), On Freedom (2019), Conformity (2019), How Change Happens (2019), and Too Much Information (2020). He is now working on a variety of projects involving the regulatory state, “sludge” (defined to include paperwork and similar burdens), fake news, and freedom of speech.
We make countless decisions throughout our lives that range from the mundane to the monumental. But how do you decide how you decide? That is the fundamental question in our esteemed guest, Cass R. Sunstein’s new book Decisions about Decisions: Practical Reason in Ordinary Life. Cass currently serves as the Robert Walmsley University Professor at Harvard University and is the founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law School. He is also a prolific author, with one of his most notable works being the hugely popular and impactful book, Nudge: Improving Decisions about Health, Wealth, and Happiness, which he co-wrote with Richard Thaler in 2008. In today's conversation, we sit down with Prof. Sunstein to discuss the difficulties inherent to understanding why people make the decisions they make and what the latest research teaches us about how we should approach decision-making to maximize our well-being. Cass provides insight into second-order thinking strategies, the difference between picking and choosing, and why delegating a particular decision is sometimes the right call. We also unpack what to consider when making major life choices, the strengths and weaknesses of algorithms when it comes to decision-making, and much more. To hear Cass’s many insights on the topic of behaviour, knowledge, and decision-making, be sure to tune in!
Key Points From This Episode:
The challenges of understanding why people make the decisions that they make. (0:03:38)
Second-order decisions and why they are sometimes preferable to on-the-spot decisions. (0:04:50)
An overview of various second-order decision strategies. (0:06:45)
Guidelines to help you choose which decisions to delegate and how to determine whether you have a trustworthy delegate. (0:11:28)
What to consider when making a transformative and irrevocable life decision. (0:16:07)
Why people avoid seeking out information that might make them feel bad, even if it could help them make better decisions. (0:21:29)
How people decide what information to believe and when to update their beliefs. (0:28:01)
Asymmetries in how we update our beliefs and factors that can deter people from updating their beliefs when faced with new evidence. (0:32:28)
How joint evaluation and separate evaluation influence your decision-making and which one you should use depending on the context. (0:43:12)
Insights on well-being and what to value when you’re making everyday decisions. (0:48:14)
The strengths and weaknesses of algorithms when it comes to making decisions and what we gain when we make decisions ourselves. (0:52:38)
Examples of when using algorithms can be harmful or dangerous. (0:59:25)
How our decisions can be manipulated and the importance of doing due diligence. (1:01:30)
Cass’s well-known work on nudges and how nudges differ from manipulation. (1:02:59)
Happiness, meaning, variety, and how Cass defines success in life. (1:05:50)
Read The Transcript:
Ben Felix: This is the Rational Reminder Podcast, a weekly reality check on sensible investing and financial decision-making from two Canadians. We're hosted by me, Benjamin Felix and Cameron Passmore, portfolio managers at PWL Capital.
Cameron Passmore: Welcome to Episode 266. We are recording this on the eve of our five-year anniversary, Ben. Tomorrow marks five years since our first episode dropped, and I can't believe it. I cannot believe the guest and the conversation we just had, given our decision to launch this podcast five years ago, and it's about decision-making. To talk about decision-making, we had an unbelievable conversation with Professor Cass Sunstein. I'm sure that's a name that many listeners will recognize. A little over a month ago, his most recent book, Decisions About Decisions: Practical Reason in Ordinary Life, was released. It dropped automatically; I had pre-purchased it on my Kindle. Sunday morning, I dove into this book, and I couldn't stop reading. It was fantastic. So I sent an email to Professor Cass Sunstein, and within 24 hours, he responded saying, "Yeah, I'd love to join you guys on your podcast." Five years, Ben, can you believe it? We've gone from an idea to a conversation about decision-making with Cass.
Ben Felix: With one of the best people in the world that you can have a conversation about decision-making with. Pretty cool.
Cameron Passmore: It's beyond cool. I mean, it was an unbelievable conversation.
Ben Felix: Like you, I read his book and it was excellent. It's heavy; it's decision theory. It's not a light read. But I was going through the book, thinking about what questions we wanted to ask Professor Sunstein, and I was just thinking, this book is great. But if Cass is able to come on our podcast and explain the book in a way that is relatable and easy to understand, relative to the density of the book, this is going to be an absolutely incredible episode. Whatever expectations I had about how good he might be at doing that, he completely blew them out of the water. His ability to answer our questions on pretty complex decision theory topics by citing evidence, but with a combination of stories and analogies, was just absolutely incredible.
Cameron Passmore: Cass is currently the Robert Walmsley University Professor at Harvard University. He is also the founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law School. In 2018, he received the Holberg Prize from the government of Norway, which is sometimes described as the equivalent of the Nobel Prize for Law and the Humanities.
Ben Felix: Yes. In 2020, the World Health Organization appointed him as Chair of its technical advisory group on Behavioural Insights and Sciences for Health. This is pretty cool, from 2009 to 2012, he was Administrator of the White House Office of Information and Regulatory Affairs, and then after that, he served on the President’s Review Board on Intelligence and Communications Technologies, and on the Pentagon’s Defense Innovation Board.
Then, he's authored many, many books. One of them we talked about today, along with a few of his related papers. He also wrote the very popular and impactful book, Nudge: Improving Decisions about Health, Wealth, and Happiness, which he co-authored with Richard Thaler in 2008.
Cameron Passmore: With that, Ben, I suggest we go to our conversation with the author of the most recently released book, Decisions About Decisions, Professor Cass Sunstein.
***
Professor Cass Sunstein, it's a great pleasure to welcome you to the Rational Reminder Podcast.
It's a great pleasure to be here. I'm going to be trying to be rational, and I see the reminder in the background, which is a good reminder of the importance of rationality.
Well, let's jump into a topic that's a favourite of all of us here, decision-making. What makes it difficult to understand why people make the decisions that they make?
Well, the first thing we try to do is use introspection, so we might think we made the decision because we figured out that this is what's in our interest or in the social interest. But it might be the fact that it was a sunny day, or that our best friend's a little mad at us, that affected how we think. Introspection might not be very clarifying about what actually motivated us to do something. We might make an economic choice, we might buy or sell something, and it might be just because the sports team that we really like, which shouldn't be the New England Patriots, at least when they had Tom Brady, won or lost, and so we're in a particular mood. To figure out that that's why we did it might be pretty hard.
What is a second-order decision?
Okay. This is near and dear to my heart. We make decisions, like to eat, let's say, chocolate cookies for dessert at lunch. And then we make decisions about our decisions, which might be: on Tuesday, I'm going to eat chocolate cookies for dessert at lunch. Or which might be: I'm never going to eat chocolate cookies for dessert at lunch. We make decisions about our decisions, and sometimes these are very conscious. We might think, in the case of a medical problem, we're going to go to the doctor we trust and basically follow his or her advice. Or they might be unconscious, and we're operating consistently with second-order decisions all the time that we never realized we actually put into place.
Why would someone want to make a second-order decision as opposed to an on-the-spot decision?
Life is hard, is the basic reason. So you might think that if you're dealing with some economic problem, let's say the stock market's going haywire, or let's say your salary just got cut, or you lost your job, you might think that on the spot, you'll be reckless or impulsive. Or you might think that on the spot, you'll be overloaded. And you might think having some sort of strategy in place will simplify life. At restaurants, I don't particularly enjoy a really long menu, so I have a second-order decision. Which is, if there's something that looks good that's quickly visible on the menu, one of the first things, I pick that. That's because the process of making a choice from 30 to 60 options in some restaurants is daunting. If there's something pretty good that just catches my eye, I'll pick that. I won't stress about it.
That's a great example. Can you give some maybe broader examples of second-order decision strategies?
I have a friend who has been divorced for about 10 years, and he's fallen in love. He's not sure whether he wants to marry the woman he's fallen in love with, but he really likes her. What he's decided to do is to take a small step, he's going to live with her, and see how that goes. She's good with that, she likes that second-order decision. I have another friend who thinks anything involving medicine, he wants to make all the choices himself. He listens to the doctor, but he thinks it's his body, his life, he wants to figure it all out. He's made a second-order decision, which is that he's going to become an expert in anything involving his health. That's not an option that most people I think would choose, but it is an option that suits him.
I have another friend who in the face of many decisions, just doesn't like decision burdens, does the equivalent of flipping a coin, just thinks whatever, and picks kind of randomly. That seems to be for this person, a simplifier of life, and hasn't created terrible trouble yet.
Because you mentioned the coin flipping, in the book, you reference some pretty fascinating research on that. Can you talk a little bit more about that as a decision strategy?
It's useful to distinguish between choosing and picking. In the English language, sometimes the words mean the same thing. But let's suggest that choosing means deciding for reasons, and picking means just, I'm going to go with that one, without thinking that reasons are the determinant. It might be that when you're at a grocery store, deciding, for example, what pain reliever to get, you'll stress over it, and think aspirin, Tylenol. What kind of aspirin? What kind of what? Then you might be there at the store, which could be a pharmacy, for a really long time, and that's choosing.
Or you might think, across a certain range, they're all fine. I'm just going to pick; that's a little like mental coin flipping. You can actually literally flip a coin, but that takes a little longer than just thinking. If that's the kind of soup I like, say chicken, I like chicken soup, I'm just going to get that one. As for the differences among the various brands, I'm not going to try to choose, I'll just pick.
How do you suggest choosing a second-order strategy?
Well, I think that's a really fundamental question. The basic thing, I think, when we're stuck, and this is a little University of Chicago, so indulge me if you would, is we think about two things: the costs of decisions and the costs of errors. It's like any port in a storm. In the storm of hard decisions about decisions, think about the cost of decisions and the cost of errors. So if you just pick without choosing for reasons, the costs of decisions are zero. You're not thinking about it much at all, and that's positive. But it might be that you'll produce lots of mistakes. Like if you're deciding what to get at the grocery store, you may end up with a lot of things you don't like, and maybe you'll end up with things you're allergic to if you just pick. So that's errors.
Thinking about errors, we want to think about how many there are and how damaging they are. Say you're deciding where to live. If you just pick, you'll probably make some terrible mistakes, because some of the options are ones that you would not enjoy very much. In some places, some circumstances, you can reasonably quickly decide, I'm going to rely on an expert; call that the strategy of delegation. Delegation will have relatively low burdens of decision. You just say, for investments, there's someone I really trust. I have a friend, by the way, whom I really trust. And you rely on that person. If the person deserves the trust, then the likelihood and magnitude of errors is pretty low.
Think how burdensome it is for you to figure it out, and think how many mistakes you're going to make if you go one route rather than another. Some decisions about decisions are really burdensome to make, like you have to become an expert. But once you do, you're not going to make any mistakes. Others are really simple decision-making strategies that can get you in a lot of trouble. The context really matters, which is a less boring answer, I hope, than it seems. Because intuitively, if we're deciding what city to live in, it's worth thinking pretty hard about it, and indulging the burdens of decision. If we're deciding what kind of fish to get for dinner tonight, probably the stakes aren't that high, and it's better to just flip a coin.
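The cost-of-decisions versus cost-of-errors framework Cass describes can be sketched as a simple expected-cost comparison. This is an illustrative sketch only, not anything from the book or the conversation: the function, the strategies, and every number below are made-up assumptions chosen to show why picking beats deliberating when stakes are low, and loses when stakes are high.

```python
# Hypothetical sketch of the "cost of decisions vs. cost of errors" idea.
# All numbers are invented for illustration.

def total_cost(decision_cost, error_rate, error_magnitude):
    """Expected total cost of a decision strategy:
    the burden of deciding, plus how often you err times how much errors hurt."""
    return decision_cost + error_rate * error_magnitude

# Low-stakes choice (what fish to get for dinner): errors are cheap,
# so the zero-burden strategy of just picking wins.
picking = total_cost(decision_cost=0, error_rate=0.5, error_magnitude=5)
choosing = total_cost(decision_cost=10, error_rate=0.1, error_magnitude=5)
print(picking < choosing)  # picking is cheaper when mistakes barely matter

# High-stakes choice (what city to live in): errors are very costly,
# so it pays to take on the burden of deliberation or delegation.
picking_big = total_cost(decision_cost=0, error_rate=0.5, error_magnitude=1000)
choosing_big = total_cost(decision_cost=50, error_rate=0.05, error_magnitude=1000)
print(choosing_big < picking_big)  # deliberating is cheaper when stakes are high
```

Delegation fits the same template: a modest decision cost (finding a trustworthy delegate) paired with a low error rate, provided the delegate deserves the trust.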
Can you talk more about when it makes sense to delegate?
If you're thinking about something that is technical, and you lack the technical expertise, then delegation is a really good idea. If you're thinking about something for which you don't want to have responsibility, then probably delegating is a really good idea. If you're thinking about something that you dislike or despise trying to figure out, then delegating is a really good idea. It follows that if you really want to take responsibility, or kind of should, maybe because it involves something involving your kids, then maybe you shouldn't delegate. If it's something where the task of figuring it out is either fun, or at least really instructive, and maybe gives you some capital in your head that you can use for the rest of your life, then delegating isn't such a good idea.
If it's something where you know you can figure it out pretty well, because you know what you like, for example, and because figuring it out isn't that hard, it's not like spending six years in graduate school, then don't delegate. What I think is very cool about this is that it suggests some guidelines that all of us can use, and that will lead to different decisions among different people. Some people really like becoming financial experts of a sort; my dad did. He really liked figuring out what to invest in. I basically want my coauthor, Dick Thaler, to tell me what to do. I don't particularly enjoy it very much. And I have an investment advisor, whom I trust greatly, and I want him to tell me what to do. But one size doesn't fit all.
How does someone determine whether they have a trustworthy delegate?
Let's deal with a couple of different settings. In medicine, health issues, people often devote at least some time to figuring out whether their doctor is good at being a doctor, and actually knows their tastes and preferences. After a relatively short time, you can probably figure out whether your doctor is someone you want to give a lot of decision-making authority to. I have some data on this, just in, postdating the book, by the way, on when people are figuring out whether to rely, in many contexts, on a human delegate or an algorithm. This isn't about making their own decision. It's about the delegate or an algorithm.
People basically are pretty good. They try to figure out, what kind of human being is that? Is it someone who knows what they're doing, with a lot of experience? What kind of algorithm is it? Is it something that has some bias? Is it something that has some expertise, meaning it has a lot of data in it? So people figure that out. You also want to know if the delegate is someone who really has your interests at heart, rather than his or her own, or someone else's. My colleague and collaborator, Danny Kahneman, has a great phrase. It may be in the book; it's part of the spirit of the book, even if it didn't appear there. He said, "If you want to get advice on a decision, find someone who likes you, but doesn't care about your feelings." I think that's brilliant.
Meaning, if you go to a friend who cares about your feelings, the friend will tell you what you want to hear. If you go to a friend who doesn't like you, the friend tells you something that might be injurious to you. But if the friend doesn't care about your feelings, and is willing to tell you, "This idea you have about getting divorced, that's a really terrible idea," that's good to know, because you can trust that the friend cares about your well-being, even though in the moment, the person is telling you you're being pretty impulsive and stupid here. That's a bit of a clue. If you can find a delegate who cares about your well-being, but doesn't care about your feelings, that's great. And it had better be a delegate who knows what he or she is doing.
There are people in the world of real estate who are not that expert. And there are people in the world of real estate who aren't that focused on you. With my house in Massachusetts, my agent was my sister, and she's really good at her job, and also, I believe she cares about my well-being.
So you mentioned a big decision, like getting divorced. What should people aim to maximize when they're making big transformative and irrevocable decisions like divorce, or maybe like having a child or changing careers?
This is one of the issues, I think, of great importance and difficulty, both for individuals, once a decade, maybe a little more, and for people who try to think about this from the standpoint of decision theory. Let's suppose you're deciding whether to have a kid. You might think that I will become a different person once I have a kid, in the sense that what I care about will be different from what I care about today. I have a good friend who was thinking, I don't know if I want to become the sort of person I'm likely to become if I become a dad. He said, I think I'll become kind of boring and obsessed with my kid, and who could stand that person? That's the person I think I'm going to become. He did become a dad, by the way. He didn't become boring at all, but he did become very obsessed with his daughter, as it turned out, in a way that he could not possibly have anticipated.
Let's get to the views, shall we? These are a little academic, but I think, despite that, they're probably cool. One possibility is, it's just really hard. Because from the standpoint of you, now, to figure out whether you want to do something that's going to make you care about different things is basically impossible. If you're thinking: are you going to become a monk? Are you going to completely change your job? Are you going to become something where that something means you'll care about things very differently from what you care about now? Are you going to become a hermit? Are you going to get divorced? Of course. Are you going to have a child, or are you going to move to someplace that's fundamentally different?
Then you won't really be you anymore, in the sense that what you care about will be very different, so you can't figure it out. That's one view. I don't agree with that view. I think you should think about two things. One is your own well-being and the other is the well-being of others. If you're deciding whether to have a kid, you might think, "Well, I will have a richer life, not with respect to money, but with respect to meaning, a life that will have more in it that's amazing." That will be a better life for me. Or you might think, if I have a kid, I will be struggling economically, and I won't be able to focus on the things I really focus on, say your friends and work. Then, I don't want to have a kid. I think those are both completely rational.
That's the thing to do when you're thinking about what's sometimes called opting in, when you do something that makes your life really different. I'm hoping everyone listening can think of an example. I've been divorced and remarried, and those are pretty familiar examples of transformative things. If you change professions, if you do something that you couldn't have possibly thought you would do, but then you try it, those are situations where your values and your inner self will change. The only way to think is, what makes your life better, given what you care about today. And it might be that some decisions that are transformative will affect others, and you'd better take those on board. Because if you're throwing, let's say, friendships and relationships out the window, your own well-being isn't all that matters. The people whom you're hurting, that also really matters, and that's part of the assessment.
Let's keep going on that. How important is maximizing, as opposed to, say, just going for it with big decisions like these?
There's a disagreement, and I'm on the maximizing side. Some people think when you're deciding whether to have a love affair, or whether to move to a remote island, or whether to take a year off, you just go for it. There's something, I think resonant about that. Because all of us who have – and I hope all of us have at least once done something major and different. The feeling is, you're going for it, and the feeling isn't maximizing. I'm kind of persuading myself to agree with the people with whom I disagree. Hold on, Cass, don't go there.
If you decide, let's say, to spend a year in some place unlike any place you've been before, or if you decide to switch jobs. I have a friend who was in banking, who decided to become a professional athlete. He was good. He wasn't that good. But he decided to become a professional athlete. He, I think, thought, that's maximizing for me, in the sense that given what I care about, becoming a professional athlete for a period is a thrilling adventure. It's not just going for it. I'm thinking, "going for it" is a very compressed way of saying, "This is what I want to do, all things considered." If you just think, I'll go for it, without trying to maximize, you might end up in a ditch. I don't mean literally, but maybe literally.
I think this next question continues off of that topic. How do people decide whether or not to obtain information that could help them make better decisions?
Maybe the most fundamental question of all. I have had for a number of years, and I'm still in, a kind of reckless research project. It's reckless in the sense that in any research project, you should have a hypothesis that you're testing. I've asked people in many nations now, in nationally representative surveys, whether they want to know, for example, the number of calories in their food, or what the stock market's going to be at the end of the calendar year. A lot of people don't want to know that, by the way, which is a startling thing. Whether people want to know when they're going to die, whether people want to know whether they are going to get Alzheimer's. Whether people want to know what their friends and family really think of them, whether they want to know if their partner is having an affair.
I've asked a zillion questions. Here's what seems to come from the sometimes startling results: people care about three things. First, they care about whether the information is useful. Can they do anything with it? A number of people don't want to know the number of calories in their food. There are likely to be some calories in there, and they don't want to know the number. I think they think, I'm going to eat what I'm going to eat, and I don't want to know that. A lot of people don't want to know the side effects of medicines. I think they think the medicine, if my doctor wants me to take it, or if I want to take it, is probably fine. Side effects aren't going to tell me anything I need to know. Of course, they want to know, is it valuable?
People don't want to know the year they're going to die, because they think, "Oh, what can I do with that?" Nothing, most people think. The second thing people think about is, does it make them feel sad or happy? Do they think it's going to make them have a better day or a worse day? People don't want to know things that are upsetting to hear. That's why a significant percentage of people don't want to know what their friends and family really think of them. That's pretty useful. But if you learn that your best friend thinks you're kind of annoying, they love you, but they think you're kind of annoying, that's not good. People don't want to know things that make them sad or scared.
Third, people have an interest in just learning things, because it satisfies something like curiosity. People want to know whether Shakespeare really wrote Shakespeare's plays, even though that's not very useful. The answer might not make them happy or sad. I really want to know whether dogs are descended from wolves, and in what sense. I'm really interested in that question, but it's not particularly useful for me, and the answer won't make me jump for joy or break out in tears. I think the most interesting thing about information seeking and information avoidance is the power of people's rapid assessment of whether knowing X, or Y, or Z is going to make their day better or worse. There's an ostrich effect, where with respect to health and economic things, people don't want news that will be bad. That can create all sorts of problems: in trying to avoid the bad, you don't know about the risk, and you might run into it.
That was absolutely fascinating. When do you think it makes sense for people to seek out information that might make them feel bad or sad?
Well, there are two things. One is, if it's going to make their life better to know it. So if you learn, for example, as I did not long ago, that you're allergic to shrimp, that's not a very cheering thing to learn, because I like shrimp, eating them, not as a species particularly. On the other hand, if I eat shrimp, there's a risk that I'm going to have a really bad breathing problem. It's good to know what you're allergic to. Because even if you don't like being allergic to peanut butter or whatever, avoiding it can avoid some real distress.
If you learn, for example, that you're susceptible to cancer risk, that you have a heightened risk of that, then maybe you can take some steps that will reduce the risk; that's a really good thing. The first thing is, it might be that it makes you sad or scared, but it also makes your life better, partly because it makes your life longer. There's another kind of subtler thing, which is that people are amazingly good at adapting to bad news. So if you get news that is not good with respect to your job performance, or your economic prospects, or what people think of you, or about your health, the day that you got the bad news is a very tough day. But the next day is better, and the day after that is just fine. So we know, across the range, that information that makes people upset turns out to have a much more short-term effect on their well-being than they anticipate. People make a decision not to obtain information in circumstances in which they probably would be better off if they did.
That's crazy. That's like the adaptation principle from positive psychology. People make errors in information-seeking based on adaptation.
Right. If you lose, you learn now. If you're asked, do you want to know something? I'll give you a funny example. I'm a law professor; I teach. The other day, the teaching evaluations came out. I was in the hall, kind of excited that the teaching evaluations were out. We can find out how we did. One of my law professor friends said, "Oh, I never read those." I said, "What?" He said, "I haven't read them for 20 years." I said, "Why don't you read them?" He said, "They're just going to upset me and make me mad at my students." But that's not so good. Some years, my teaching evaluations aren't as upbeat as I'd like. But when I read them, it helps me teach better the next year. They say, he's not clear enough, or he goes into too much detail, or he's not organized. That's not a great day when I hear that. But the next day, it doesn't bother me even a little bit, and I'll try to avoid the problems the following year.
Very, very interesting. Okay. We talked about how people decide whether to seek information. How do people decide what information to believe, and when to update their beliefs?
To get at this, this is a frontiers issue, where we know so much more than we did even 10 years ago. We need three concepts, one of which will be familiar. One is confirmation bias. If people hear something that confirms their prior beliefs, like, I believe my dog is healthy and barks occasionally, I'll find that very credible. The noise [inaudible 0:28:26] in the background. Confirmation bias means we tend to believe things that fit with our pre-existing views, and people just do that. There are two ways to think about confirmation bias. I think the popular way is to think it's kind of crazy: why wouldn't we find disconfirming information as credible as confirming information? It feels a little bit self-interested, and that's part of it. But it has a little more of a foundation than that.
If people told me that dropped objects don't fall, which I don't believe to be true, I won't believe that. If they told me [inaudible 0:29:06], I'd feel that confirms what I believe, and confirmation bias would kick in. It's rational, given your pre-existing beliefs, to update, or not, depending on how it fits with what you think. There's the motivated part, and then there's the rational updating part. Take that as confirmation bias. Then there's a kind of subset called motivated reasoning, which is the emotional part, where people believe things that they want to believe. That means that if you tell me something about politics, or something about a politician, that I really am saddened by, I might think you're biased, and you don't study very hard, so you don't know that the politician I love is actually God's gift and couldn't possibly have done or said that bad thing. So motivated reasoning is a second thing.
The third, the newer one, is desirability bias, which suggests that we believe things that we find it desirable to believe. So if you tell me that the hair loss I thought I had, I actually don't have, that it's an artifact of Zoom and some unflattering photographs, and I have a completely full head of hair, that's desirable to hear. Thank you for that. I will find that particularly credible. If you tell me something like my male Labrador Retriever is not so beautiful, he's okay looking, but he's not so beautiful, I find that highly undesirable, and I will not want to believe it. That's desirability bias.
Now, what's really fun, I think, is that confirmation bias and desirability bias [inaudible 0:30:51] will often go in exactly the same direction. What confirms my beliefs, I'll find agreeable and credible, and that will be desirable. But you can pull them apart. If you think, for example, that you aren't very good at sports, but you're given information suggesting you actually are good at sports, that is disconfirming information, but it's highly desirable information. We have some data suggesting that in a horse race between desirability bias and confirmation bias, desirability bias is Secretariat. It's the better horse. That is, people will believe information that they want to believe, even if it is disconfirming of what they started out believing.
That's a very long-winded way of saying that what people believe fits with what they find it pleasing to believe, motivated reasoning, and with what they start out believing. That can mean, one last point, that good news will be more credible than bad news with respect to almost everything, which can lead to unrealistic optimism with respect to, let's say, investments, and it can also lead to terrible mistakes in updating.
I think I remember in the book, you refer to those as asymmetries in how we update our beliefs. How can people improve the symmetry in how they update their beliefs?
This is maybe my favourite part of the book, in the sense that it was the most fun to deal with, so thank you for this. If people go into a laboratory, let's say, and are asked to rate themselves in terms of intelligence and looks, and then are given objective information suggesting they're smarter and better looking than they thought, they update. They say, "Oh, great. I'm smarter and better looking than I thought." If they're given information suggesting that they're less smart and less good-looking than they thought, they think the person who made that assessment is mean and ignorant, and I am every bit as good-looking and every bit as smart as I thought. That's the asymmetry to which you point. People either make a decision to update in a way that's asymmetrical, or it just happens because the motivation is so strong.
What we learned, my colleagues and I, is that this works for climate change too. If people believe that climate change is really bad, information that suggests it's really, really bad will be more credible than information suggesting that it's not so bad. If people believe that climate change is only a little bit bad, they will find that information suggesting it's very bad isn't credible, but information suggesting it's not even a little bit bad, that is credible. So you have asymmetrical updating of very different kinds, depending on whether you think that climate change is a really bad problem or only a little problem. That, I think, helps explain polarization on political issues, and also economic issues.
Also things within the family, even, where spouses might update in very different ways. The best way to respond to asymmetrical updating is just to be cold-blooded in your predictive judgments. Not inhumanly cold-blooded, but very, very calm. If you hear, for example, something suggesting that your performance isn't as good as you hoped it was, to think, okay, deep breath, I'm going to believe that even though it's not very good news. Or if you hear something suggesting that the economic risks you face, given your current portfolio, are serious and maybe you ought to change it, not to say, I don't like that, I don't believe it. But instead to take that maybe even a little more seriously than information that made you smile and jump for joy, and to try to be a maximizer with respect to information.
I'm curious, what might deter someone from updating their beliefs when they are presented with new evidence?
Okay. There are two things. Our data on climate change doesn't enable us to see which one is at work, but there are two that are explanatory. Suppose you think that climate change is a really serious problem, and you're given information suggesting it's an even more serious problem than you thought. That's very credible, so you're likely to update. Now, suppose you believe climate change is a serious problem, and you're given information suggesting it's not that bad. You might think, who paid for that information? Given your previous belief, that information just is suspicious. I don't believe it, given my initial belief. Let's call that just rational evaluation, given your prior judgments.
If you read something suggesting the Holocaust didn't happen, that might not motivate you to doubt the historic fact of the Holocaust. You might just think, who wrote that? What's their agenda? That's the rational account, and that's consistent with the data we have. If you think you're really good-looking, and your life is consistent with that, and then you get information suggesting that someone thinks you're not good-looking, you might think, I'm not going to believe that. That person's just in a bad mood or trying to get me down. Okay.
The other is that if you get information of certain kinds, it's just going to make you mad or sad. Your natural reaction is to think, I'm not going to believe it, because I don't want to be mad or sad. An example of this probably comes from the writing of the Star Wars movies, that is, the George Lucas movies, where George Lucas was in a pitched battle with one of the great screenwriters of all time, Lawrence Kasdan, about what would happen in the third movie of the original series. If you don't know Star Wars, hang with me for a moment, I think you'll get it. Lawrence Kasdan said, "Luke should die. Luke Skywalker, our hero, should die." George Lucas said, "Luke Skywalker isn't going to die." Then Lawrence Kasdan said, "Well, Princess Leia should die." And then George Lucas said, "Princess Leia isn't going to die."
Then Lawrence Kasdan gave a kind of poetic speech about how, in art, if you lose someone you love, the depth of your connection with the work grows. In great art, someone dying is actually a pivotal moment that cements the audience in the work of art. George Lucas responded really quickly, and this dialogue is all captured in real time. He said, "I don't like that and I don't believe that." That's amazing for our purposes. I don't like that and I don't believe that. Not liking preceded not believing. Lucas heard, I think, the idea that someone dying makes the audience like the movie better, and thought, these are my characters, I don't like that. Then he says, I don't believe it. That's the basic non-rational account, though I think Lucas called it exactly right on Star Wars: if you don't like something, you won't believe it. That's not about rational updating. That's about denial of unpleasant truths.
The items that we just talked about, the asymmetries in updating information and beliefs. Is it safe to call those biases?
I think so, if we emphasize the motivated nature of updating. I think not, if we emphasize the rationality of updating. And you see, you're putting your finger on something that the data doesn't sort out. In the climate change work, what we don't know is whether strong climate change believers discounted evidence that it's not so bad because they just thought it wasn't credible, and probably someone bought that material, or because it just upset them, because they're invested in thinking climate change is a big problem. That's part of the way they live in the world. We don't know which is more powerful. I have a hunch, which is that I think the motivations are more powerful, but both operate in different contexts.
Suppose you have really good health, and then you go to a new doctor, and the doctor says, I think you have a pretty bad problem. You might think, "I don't know who this doctor is. I'm not sure why I went to see that doctor." That need not be motivated. It might be based on 15 years of great health. I want a second opinion, not because I don't want to believe what this new doctor said, but because I think it's not a real doctor. A witch doctor, so to speak.
What causes people to make inconsistent decisions?
That's a fantastic question, and there are a million reasonable answers. Let's give two. One is mood. I was in a group of friends soon after I married my wife, a group of maybe six male friends, and I said, "I knew within two weeks of meeting my wife that I wanted to marry her." Just true. One of my male friends said, "If I had married every woman whom I knew I wanted to marry within two weeks of meeting her, I would have 17 ex-wives." I'm happily married, I hasten to add, 15 years in. But my friend was saying that you can be inconsistent because of mood. So if you are all excited and charged up for one or another reason, you might make a decision on Monday that would be radically different from what you'd do on Tuesday or Wednesday.
The factors that influence mood are innumerable, and sometimes they're not even visible to us in real-time. If your city has a very sunny day, you might take a lot of risks that you wouldn't take if it was raining and bleak. And you might have no idea that the weather caused your risk-taking.
One thing that I'm particularly interested in is whether you're choosing something in isolation, or whether you're choosing something in the context of other options. So let's say you're deciding which laptop to buy. If you see a laptop, and I'm seeing a laptop right now, it's a good laptop, you might think, that's a great laptop, I want that laptop. If you see the laptop I'm seeing right now in the context of some other laptop, you might think it doesn't look so good, the screen is kind of small, and the bezels are kind of big. And on the other one, the screen's bigger and there's hardly any bezel at all, and I want that one. Now, it might be that in isolation, you would be thrilled with your laptop, your house, your investments, your doctor, any product you have at home, but that if you were comparing it to something else, you wouldn't be so thrilled with them.
Here's the upshot. In separate evaluation of things, we may prefer A over B. But if you see A and B together, you might choose B over A. Smart marketers are very well aware of that, and they might tell you when you enter the store, "Here are your two options." They know they can get you to buy, let's say, the more expensive option because they showed you the other one, even though if you saw the two in isolation, you would choose the less expensive.
Three of us are in relationships. I think we're all in happy relationships. Can you talk more about how evaluating a big purchase like a home jointly versus individually affects the decision?
Let's say that you have a house which has a big yard and beautiful views, but that's very far from your work. You might think in isolation, that's not a good choice, because even though it's an amazing house, the commute will be terrible. If that's the only house you see, let's say in August, you might think, I'm not going to get that house. Now, let's suppose in September you're shown another house, which doesn't have amazing views and isn't that big and beautiful, but is really convenient for getting to your work. You're now just seeing that house in September; the other house isn't on your view screen anymore. You think, that house is the one for me, because it's a good house, and because my life is going to be completely fine given the ease of commute. Then you get, let's say, the less good house with an easy commute.
But if in August you had been shown the two houses, there's a very good chance, the data suggests, that you'd go for the better house with the bad commute, because you might well think, look, if they're priced the same, a beautiful big house is just what I want. A house is to live in; it's not to commute to and from. Which means that a characteristic of the house, that it's not that big and not that beautiful, which might seem like a mild negative in separate evaluation, will look pretty disappointing in joint evaluation, where you're comparing it with a huge and gorgeous house. This can be done for home purchases, for vacations.
Do you want a vacation that's far away and in the most beautiful place imaginable? Or do you want a vacation in a place that's pretty close and very good? In joint evaluation, there's a good chance people go for the faraway vacation in the magnificent place, because a vacation is for magnificent places. In separate evaluation, they might be willing to pay more for the place that is close, because on the ground, the proximity of a vacation really matters to many people; they don't like a terrible commute. These are slightly exotic examples, but it works every day in stores. If you're shown a product, let's say a cell phone, you might think, that's great, I'm going to get that one. But if you're shown that cell phone in the context of another cell phone, some characteristic that you wouldn't give much weight to in separate evaluation starts to loom large, because that other cell phone does better along that dimension, which you wouldn't even notice or care about unless you saw the two together.
It's mind-blowing stuff to think about, because we make decisions like this all the time. Is one of separate evaluation or joint evaluation objectively better than the other for most decisions?
The cool kids say that separate evaluation is better. If there are any academic cool kids, this is what they think. The reason is they think that once you get, let's say, a laptop, or a house, or something that plays music, you're not going to be comparing it to the other thing all the time. Unless you're a weird person, you're going to be enjoying it in separate evaluation. They say, get the thing that in separate evaluation is fine. So if it's, let's say, something that has good sound quality, but not astounding sound quality, and that's really beautiful and fits, don't reject that in favour of the thing with even more amazing sound quality, because you just won't notice. The cool kids say separate evaluation is better, and they're usually right.
The reason they might be wrong is if, in joint evaluation, your attention is concentrated on something in the product that really does matter, that you wouldn't evaluate or pay attention to unless you saw the two. That's too abstract, so let's say you have two cell phones, and one of them has an incredible camera, and one has a very good camera. Now, for most of us, separate evaluation is what matters, and a very good camera is plenty good enough. But let's say you're a photographer, or you live for amazing photographs; it really matters to you. In separate evaluation, you won't realize that the camera is merely very good. In joint evaluation, you'll see there's another one that's incredible. For that person in that choice, joint evaluation is better, because it puts a spotlight on something that really should matter to you.
This next question is the one I had in my mind as I read your book. How should people decide what to value when making decisions? For example, is maximizing well-being the right objective for all decisions?
There is some philosophy here. Maybe the greatest political philosopher of the 20th century had a footnote in a manuscript which he never published, which says, we post a signpost: no deep thinking here, things are bad enough already. But we're going to do a little deepish thinking, because the question needs it. I'd say that across a very wide terrain, our decisions should be based on our well-being. I'll qualify that in two ways in a moment. But if you're deciding what to do with respect to the summer, or what to do with respect to the fall, an essential question, and possibly the only question, qualifications to come, is what makes your summer or fall better.
Better can include various things. Let's note three that people seem to care about. What makes you happy? What gives you a good week? Are you going to be smiling or scowling? Second is, what makes your life meaningful? Are you going to be thinking, I binged on seven TV shows and I really enjoyed it, but what the heck was I doing with my time? It's kind of meaningless. The third thing is variety. Is it psychologically rich? People will sacrifice happiness for the sake of the variety they want. I went to a tennis match last night, by the way, a professional tennis match. I didn't really want to. My wife said, it'll give you psychological richness; you haven't done this for a long time. Was it meaningful? Did I enjoy it? Some. Am I glad I did it? Absolutely, because it was something different, something I don't normally do.
People care about happiness, meaning, and psychological richness, and they balance them in different ways. That's about well-being. That's, I think, very fundamental to our decisions. We can get very intuitive about it, or we can get very elaborate about it, but that's a lot of the terrain. It's also good to do the right thing. So if, let's say, a successful act of theft would make you happy because what you stole was really great, you still shouldn't do it; it's immoral. That is a qualification to promoting welfare. We can all discuss what's right and what's wrong, but there are a certain number of things that are clearly wrong. Then there's the welfare of others. So it might be that you'd spend some hours, or maybe some weeks, working to help people who are struggling, or trying to help maybe your kid. It might be that that's not a very joyful thing to do. It might be, but it might not be. If it is, that's not why you're doing it. It might be meaningful, but that's probably not why you're doing it either.
If you're helping others, it's not because you're thinking, 'I want to have a meaningful life', it's because you're thinking 'I want to help others'. And it might be that something you're doing that you think is good for people you care about, or people generally, adds richness to your life, and that's great, but that's not why you do it. So well-being I would put very much in the forefront, but it's not the only thing. Maybe let's talk about John Stuart Mill, with your indulgence, for a second. John Stuart Mill was a great philosopher, a utilitarian. He cared about well-being, but he wasn't a very emotional writer most of the time.
At one point, on the question you asked, he got full of fire, and he said, if you think about utility or well-being, what's maximized is not the agent's own, not the decision-maker's; it's everybody's. Then he says, in the only time I think Mill ever referred to Christianity, and this is chills-down-the-spine stuff, "In the golden rule of Jesus of Nazareth, do unto others, we find the complete spirit of the ethics of utility."
Wow. You did say that we're going to get into some philosophical stuff, and we did.
A warning, a warning label: philosophy.
Yes, warning.
Proceed with caution.
I don't know if this next question takes us in the completely opposite direction. But when do you think that algorithms are useful for making good decisions?
We have data suggesting that if doctors are deciding whether to test people for a cardiac incident, including a heart attack, they do well, but they don't do as well as an algorithm. The reason the algorithm appears to do better is that doctors overweight the current symptoms. If it's a young woman, the algorithm will give that, let's say, roughly its appropriate weight, and the doctors will give it a little more weight. In deciding whether to test someone for having had a heart attack, rely on the algorithm, it looks like, or at least pay very careful attention to the algorithm.
If you're trying to decide whether, let's say, a song is something you'd like, it may well be that a good algorithm will nail your tastes, and even someone who knows you very well, like your best friend or your spouse, won't do that, so rely on the algorithm. If there's a ton of data that the algorithm has, and it isn't biased in any relevant respect, chances are it's going to do really well. But you might forgo the algorithm even so, if you like deciding for yourself, or if you want to take responsibility for the choice that affects you. The algorithm might be a little bit of a science-fiction world, in which you get the right outcome, but you didn't get it in the right way. Some people will think, with respect to, let's say, vacation planning, I can find an algorithm that will tell me what vacation I'd like. But the point of a vacation includes preparing for it, figuring it out, seeing the options. Even if the algorithm says, go to Key West, young man, I'd rather figure out for myself whether Key West is where I should go.
You said something earlier that stuck with me as well, about kids: that you probably want to decide for yourself when it's related to your kids.
Completely. With your kids, the feeling of fobbing off the decision to an algorithm feels like you're not really being a parent. There's also learning. This is something I have some data, unpublished data, on: people often want to make choices, and they make a second-order decision to decide, because they think that the process of doing so will give them an asset they can use. They might like it, but they also will learn, let's say, about vacation spots, or about cars, or about something that matters to them. The algorithm could make the right decision, but then they wouldn't know anything about vacation spots and cars.
Are there other cases where algorithms are not useful?
You have exactly the right timing for that question. Meaning, this is maybe the question for our era. Algorithms seem pretty bad at figuring out who you're going to fall in love with. They really struggle to know that. They don't do well in figuring out whether a song is going to be a hit, and they don't do well in predicting whether there's going to be a revolution. Now, what is it about those three things? I give three examples because they have something in common. We have really good data suggesting they don't do well in predicting who's going to fall in love.
If there are social interactions that can go in any number of directions, that are hard to anticipate in advance, then the algorithms aren't going to be helpful. So whether there's going to be a revolution or a rebellion depends on lots of moving parts. Maybe you can say something about the probability that the parts will move in a certain way, but maybe you can't say a lot even about the probability. It depends on who says what, and who is noticed by whom; to have a rebellion, a ton of things have to happen kind of on the spot. A romantic spark is a little bit like that. It might be that two people meet, and one person happens to say something, and the woman, let's say, says, "God, that is really a creepy thing to say, or pretty clueless." Game over. That guy has no chance.
But if the guy had said something else, or just silenced himself at that moment, she would have given him the chance, and maybe sparks would have flown. Or maybe she was feeling really great because something good happened at work that day, and so she's up for fun, and she's hilarious rather than grim, and he's smitten in the first 10 minutes. It's just because things were good at work that day. How can the algorithm predict that she's going to have a good day Thursday at work? So if there are multiple factors that feed into social interactions, it may be that algorithms will really struggle.
A more mundane thing, I'll tell you about judges, if I may. Algorithms outperform judges in deciding who should get bail. The reason they do is that algorithms do a bit better than judges in predicting who's going to flee. They outperform the judges. It's kind of spectacular data, because it suggests that, in a big city, you could jail the same number of people and have a lot fewer crimes, because the algorithm would figure it out better. Or you could have the same number of crimes and jail many fewer people. Just make your choice; the algorithm will get you there. But here's the but: 10% of judges, the best 10%, outperform the algorithm. They do better. This isn't about falling in love, or about whether some song by a new group is going to be a big hit. It's that people sometimes will know things that are either impossible or really hard to program an algorithm to notice.
Now, we don't know why the best 10% of judges crush the algorithm. It might be that they know something from experience that the algorithm hasn't been programmed to take account of, and that might be really hard to do. Or it might be that something happens on the spot, where the judge sees that defendant and thinks, that defendant is a good person, even though their last 10 years have been pretty rough. The algorithm can't see that.
Okay. So we covered when algorithms are maybe not so useful. Can you talk about cases where they're harmful?
So you could have an algorithm that is going to predict, let's say, whether a plane is in trouble or not, and then tells a pilot what to do, but that doesn't capture something on the spot that the pilot will be alert to. It might be that by training or experience, the pilot knows this plane is actually not in trouble, or that that solution to the problem is going to make things worse, and the algorithm is too mechanical and rigid to capture that. There is actually data in some domains quite like that, which suggests algorithms will not take account of the full set of things that we should consider, and that's a big problem.
I'll give you an example from personal experience, which is grading, where an experienced teacher will pay attention to maybe 30 things, and to program an algorithm to do that is very challenging. To program an algorithm to take account of 20 things might not be so challenging. If you're thinking of doctors, or pilots, or engineers, or real estate agents, it might be similar. It's also true that the garbage-in, garbage-out principle might ruin algorithms or impair their performance.
So if you ask an algorithm to say when employees are going to leave the workplace, it might end up discriminating against women, because we see that women, let's say, in the relevant place are more likely to leave because they're going to be full-time mothers for a period, and then you hire mostly men, and that's very bad. Whereas a human being might think, I know the women are more likely to leave than the men are, but I'm going to give everybody a chance.
We've talked about shrouded attributes in complex financial products on this podcast before, and there are also things like dark patterns in software. How should people deal with the possibility that their decisions are being manipulated?
The word manipulation is easier to use than to define. Let's say, roughly, it's where a situation overrides your capacity to make a rational choice. That might be because some characteristic of the product, let's say, isn't visible to you. It might be because your emotions are triggered such that you think, I've got to get that, like, now, and you're being manipulated through hope or fear to get the relevant thing. I think, from the standpoint of private and public conduct, we want a right not to be manipulated. Just the very idea of a right not to be manipulated is probably a step forward. We want to specify it by forbidding things that clearly violate that right, as when people are automatically enrolled in terms that they can't easily see, and that aren't in their interests, like you're going to make recurring payments that aren't very visible.
I think for all of us, before we buy something that's costly, it's a very good idea to do maybe 20% more due diligence than we planned to do, especially if it's online.
Your work on nudges is widely known, widely applied, and certainly impactful. I have to ask you, how are nudges different from manipulation?
Manipulation we're kind of defining as bypassing or not enlisting people's rational capacities. Some nudges are educative, like a reminder. So, speaking of rational reminders, if you get a reminder on your phone that says you have a doctor's appointment tomorrow, and you actually do, that's a salient example. Thank you for helping me to remember. That is a non-manipulative act; it is educating me about a thing. If you go to the grocery store, and there's a warning that if you're allergic to peanuts, don't buy this, that's not manipulative, that's educative. If you're given disclosure about the side effects of medicines that says something like, if you have X, or Y, or Z, you probably shouldn't take this medicine, that's not manipulative. So educative nudges just wouldn't be manipulative.
There are others that are architectural, and then we have to identify them to figure out whether they might be manipulative. One kind of architectural nudge asks you whether you want to do this, or whether you're sure you don't want to do that. So if you're about to delete all your files, your computer might say, "Are you sure you want to?" Or if you're about to make a very big purchase, the same question might be asked, and that's not manipulative. That's an effort at enlisting your deliberative capacities to make sure you're not being impulsive or reckless. So those nudges wouldn't be manipulative.
Graphic health warnings may or may not be manipulative, depending on what they are and what they do. So if you're given a graphic health warning about smoking, we can have a discussion of whether that's manipulative. I think the data suggests it isn't, that a graphic health warning for cigarettes, which is a nudge, actually informs people about the risks of smoking. So it doesn't bypass or exploit their intuitions; it actually improves their judgments. But that's one to discuss. If you're automatically enrolled in something without knowing it, that's a pretty bad nudge. It wouldn't be a nudge that Thaler and I would approve of, and it would be manipulative. If you are automatically enrolled in something, and you're told, you're automatically enrolled in this, do you want to opt out? Then it wouldn't be manipulative. So I think we'd want to go through the universe of nudges, the ones that are used by governments, at least the governments I'm familiar with, and it's hard to find any of them that are manipulative, but graphic warnings and automatic enrollment at least would be worth discussing.
Our final question for you, Cass. How do you define success in your life?
I'd be a little focused on the three variables we've discussed. Happiness is a big part of it, whether your days are great or not, and whether you feel there's meaning in your life. I had a friend in the White House, whom I saw actually in the White House one day. I asked him how things were going, and he gave me a surprising answer. He said, "My day-to-day happiness is terrible, but my sense of meaning is fantastic." I thought that was a little more information than I had asked for. I just said, "How are you doing?" But it was instructive.
Happiness, and a sense of meaning or purpose, is really important to that, as is a sense of joy or suffering, and they're different. So is the sense of whether you have variety in your life. You could have a thousand happy days in a row, but if they're all the same, that's not successful, or not as successful as if you had the same days with variety. So variety, meaning, and, let's say, happiness in the sense of ordinary language.
Great answer and you made today a great day. Thank you so much, Cass.
Great pleasure. Thanks to you.
Is there an error in the transcript? Let us know! Email us at info@rationalreminder.ca.
Be sure to add the episode number for reference.
Participate in our Community Discussion about this Episode:
Books From Today’s Episode:
Decisions About Decisions: Practical Reason in Ordinary Life — https://amzn.to/3OTi2Un
Nudge: Improving Decisions about Health, Wealth, and Happiness — https://amzn.to/3YwcEtz
Links From Today’s Episode:
Rational Reminder on iTunes — https://itunes.apple.com/ca/podcast/the-rational-reminder-podcast/id1426530582
Rational Reminder Website — https://rationalreminder.ca/
Shop Merch — https://shop.rationalreminder.ca/
Join the Community — https://community.rationalreminder.ca/
Follow us on X — https://twitter.com/RationalRemind
Follow us on Instagram — @rationalreminder
Benjamin on X — https://twitter.com/benjaminwfelix
Cameron on X — https://twitter.com/CameronPassmore
Cass R. Sunstein on X — https://twitter.com/CassSunstein/
Cass R. Sunstein — https://hls.harvard.edu/faculty/cass-r-sunstein/