Economics

Episode 204: Prof. John A. List: Improving the World with Economics

John A. List is the Kenneth C. Griffin Distinguished Service Professor in Economics at the University of Chicago. List joined the UChicago faculty in 2005, and served as Chairman of the Department of Economics from 2012-2018. Prior to joining the University of Chicago, he was a professor at the University of Central Florida, University of Arizona, and University of Maryland.

His research focuses on questions in microeconomics, with a particular emphasis on using field experiments to address both positive and normative issues. His body of work includes over 200 peer-reviewed journal articles and several published books, including the newly released "The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale".


John List is the recently appointed Chief Economist at Walmart and a Professor of Economics at the University of Chicago, having previously served as Chief Economist at both Uber and Lyft. He has published a huge array of important papers in the field of economics and is also the author of the recent book The Voltage Effect, which deals with the question of how to scale ideas successfully. We are very excited to bring you this particularly illuminating episode, in which we draw on John's treasure trove of insight and experience to answer a long list of questions related to personal finance decision-making. A large portion of our chat focuses on the central ideas of critical thinking and fieldwork, practices that our guest views as indispensable in making the world a better place. Along the way, we get John's thoughts on retirement planning, public policy, charitable donations, and much more, so make sure to press play on this fantastic episode of the Rational Reminder Podcast.


Key Points From This Episode:

  • John explains the importance of fieldwork in the study of economics. [0:03:51]

  • Examples of field experiments that overturned a supposed economic truth. [0:05:15]

  • Finding ways to test theories that previously proved difficult. [0:08:30]

  • The question of generalizing findings from an experiment to a wider rule. [0:13:30]

  • Replication in academic studies; John unpacks its central importance. [0:20:46]

  • Why positive results tend to garner a publication bias. [0:23:38]

  • John's perspective on checking in on investment portfolios. [0:24:40]

  • What the data shows us about investment behaviours of men and women. [0:28:38]

  • Accounting for the drive to give to charity. [0:35:20]

  • Advice for how to make the most of your donations. [0:39:42]

  • John unpacks his findings on scaling, its importance, and what he calls 'the voltage effect'. [0:44:41]

  • The impact of technological advancement on our ability to scale certain solutions. [0:48:27]

  • How field experiments can influence the process of scaling big ideas. [0:54:47]

  • Hindrances to healthy scaling; confirmation biases, and herding. [0:56:17]

  • Impacts of loss aversion and marginal thinking when scaling ideas. [1:05:28]

  • Reasons for the difficulty of tackling globally important issues; multidimensionality and politics. [1:15:10]

  • Weighing the utility of incentives when trying to encourage retirement savings. [1:19:16]

  • Thoughts on bringing more reliable science into the policy-making process. [1:21:26]

  • How parents can approach the promotion of critical thinking in their children. [1:25:45]

  • John's approach to the questions he pursues; how he evaluates potential ideas and questions. [1:31:10]

  • A little bit about John's new post as Chief Economist at Walmart and what the job entails. [1:33:53]

  • How John defines success at this point in his life and his focus on inputs. [1:33:53]


Read the Transcript:

So off the top, I want to congratulate you on your most recent book, The Voltage Effect. I thought it was terrific, and we'll get more into the book later. But first, I wanted to kick it over to a question about your specialty, which is conducting fieldwork in behavioral economics. So why are field experiments important to the study of economics?

Absolutely. So first of all, thank you very much for your kind wishes on the book, and thanks for having me. I want you to think about field experiments as using the real world as your lab. When we generate data in this way from the real world, it allows us to make a causal statement about something: for example, smaller-sized classrooms lead to better student outcomes, or when we lower prices for a good, more people buy the good. These are causal statements that we have to be able to make, in the social sciences and in the hard sciences, to make the world a better place.

Field experiments allow us to make much stronger statements than if we just use data that's generated from the world and we go and analyze it. I think that's a key reason why field experiments are a very important approach to not only economics, but to the world.

Do you have any examples of field experiments that overturned our prior assumptions?

Oh, sure, gosh. Where do you want to start? Let's start in charitable giving. So in charitable giving, there was a mantra about 25 years ago that would go as follows. If a fundraiser wants to raise money, it is important for them to use what's called a matching grant. What that means basically is if it's a one-to-one match, if you give us a hundred dollars, we're going to match it one to one and that way the charity gets 200. So the general idea was if you use a higher match, you can bring in more money, which sort of makes sense, right? If the money's doubled or tripled or quadrupled, that becomes more and more attractive so more people should give and they should give more. In economics, that's called the law of demand. As price goes up, people consume less of it.

Now, when I started doing field research in the late '90s on this issue, I would talk to people and I would say, "Where's the evidence for it? It makes sense intuitively, but where's the data?" They said, "Well, we don't really have data. It's a gut feeling." So I gave it a try. What we did essentially was we had a control group, and that control group just receives a letter asking for money, and then we have three other groups. One of the groups is told it's a one-to-one match, another group gets two-to-one, and another group gets three-to-one. These are just households that are randomized into these groups.

Guess what we found? First of all, if you have a match, whether one-to-one, two-to-one, or three-to-one, just having a match matters: you raise a lot more money when you have a match. Okay, great. How about the second finding? Does the level of the match matter? We find it does not. Three-to-one does not bring in any more money than two-to-one. Two-to-one does not bring in any more money than one-to-one. So you are completely wasting your extra match dollars by using this rule of thumb. What people always knew to be the truth turns out not to be. Having a match matters, but the match amount does not.
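The design John describes is simple enough to sketch in code. Below is a toy simulation of the randomized design, with response rates and gift sizes that are my own illustrative assumptions (not List's actual data), built to mirror his finding: having any match lifts revenue, but the level of the match adds nothing.

```python
import random

random.seed(0)

def simulate_household(match_ratio):
    # Toy response model (illustrative assumption, not List's data):
    # having any match lifts the response rate, but the match level adds nothing.
    base_rate = 0.02                                  # response to a plain ask letter
    rate = base_rate * (2.2 if match_ratio > 0 else 1.0)
    return 25.0 if random.random() < rate else 0.0    # gift size if they give

groups = {"control": 0, "1:1": 1, "2:1": 2, "3:1": 3}
n = 50_000                                            # households randomized per group

revenue_by_group = {
    name: sum(simulate_household(ratio) for _ in range(n))
    for name, ratio in groups.items()
}

for name, revenue in revenue_by_group.items():
    print(f"{name:>7}: ${revenue:,.0f} raised from {n:,} letters")
```

Because households are randomized into groups, any systematic revenue difference between arms can be read causally, which is the whole point of the field-experiment approach described above.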

Now you can ask, "How far can we push this?" We observed the same thing in retirement. So you have these retirement programs such as: we will match dollar for dollar up to 8%. Okay, what if you matched a dollar for every $2 up to 16%? Would people save more money? Yes, they do. Even though the maximum match is exactly the same, a lot of times people want to save all the way up to the cap so they don't waste match dollars.
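The arithmetic behind that framing trick is worth making explicit. A minimal sketch (plan parameters are illustrative): both designs cap the employer's outlay at 8% of salary, but an employee who chases every match dollar saves twice as much under the second design.

```python
def employer_match_at_cap(match_per_dollar, cap):
    """Maximum employer match (as a share of salary) when the employee
    contributes all the way up to the matching cap."""
    return match_per_dollar * cap

# Two hypothetical plan designs with identical maximum match dollars:
plans = [("$1 per $1 up to 8%", 1.0, 0.08),
         ("$1 per $2 up to 16%", 0.5, 0.16)]

for label, rate, cap in plans:
    print(f"{label}: employer pays at most {employer_match_at_cap(rate, cap):.0%}; "
          f"an employee capturing every match dollar saves {cap:.0%}")
```

The employer's cost is identical in both plans; only the contribution needed to exhaust the match changes, which is the framing lever John describes.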

So that's a simple way in behavioral economics that we can change the framing or change the way that we present information that can affect how people retire, how people give to charitable foundations or charitable organizations, et cetera. We would not know that if it was not for field experiments.

Now, do you have an example of an economic theory that was previously viewed as untestable, but was successfully tested experimentally?

Yeah, that's good. Let me roll out two. The first one is going to be simple supply and demand. So we all know that the bread and butter of economics is supply and demand. We think prices are going to tend toward where supply and demand intersects. That's called equilibrium. That's a major prediction from economics. Well, go out to the real world. Where do you find supply and demand curves? They're not just floating around for you to grab and test.

So what you have to do, and this is exactly what Ed Chamberlin did in the '40s at Harvard and what Vernon Smith did in the '50s and '60s as one of the pioneers of laboratory experimentation, is bring students into the lab and give each of them a demand and a supply. That way they knew the demand curve, they knew the supply curve, and then they had predictions about where the prices should end up. They found that neoclassical economics did a pretty good job. That's one type of theory that's very difficult to test.

Now, something that I've done recently is on discrimination. So in economics, we have two major models or theories about why people discriminate. One theory is due to my former colleague Gary Becker, who unfortunately passed away. Gary Becker wrote that people discriminate against others because they simply get satisfaction out of hurting somebody else; that's called bigotry or animus. Okay, that's an interesting theory. Another theory is due to Arthur Pigou, who years ago talked about what's called third-degree price discrimination: people treat others differently because they want to make more money, and they're simply after making more cash.

So now you have these two major economic theories. Becker says, "People will forego profits to cater to their prejudice." Pigou says, "No, people are going to discriminate against some just to make more money."

Discrimination in data is ubiquitous, but trying to figure out which of those theories is at work is impossible unless you run an experiment. That's what I did back in 2004, in a paper titled "The Nature and Extent of Discrimination in the Marketplace," and you can set up the field experiment to explore: first of all, is there discrimination? And secondly, what is the underlying motivation of discriminators in that market? That ends up being very important because our public policies in some cases are written to take on Becker-type discrimination, and in other cases they're written to take on Pigou-type discrimination. So if we don't know which one's going on, we don't know which policy is the correct one. A field experiment can really help you.

Unreal. What do you think about the criticism that economics is just too complex for field experiments to be useful?

Yeah, gosh, it's the exact opposite. In fact, the world is complex, the world's super complex, but that's why we need field experiments. We need field experiments for two reasons. One, they allow us to chop up the problem into more digestible and smaller pieces that we can begin to understand. That's point one.

Point two is, when you go into a really, let's call it, dirty environment like the world, you have simultaneously moving parts. Prices are always changing. Quantities are changing. The players are changing. Everything's moving at the same time. It's very difficult in that kind of setting to make a causal statement if you don't have experimental variation. Because different kinds of approaches require different kinds of assumptions, and in many cases it's very difficult to go beyond correlations in these really, really messy environments. But if you have experimental randomization, or you know the treatment assignment, it's a lot easier, or at least you have to make different assumptions to say something causal than you do using the traditional empirical approach.

So I think it's quite the opposite. I think the messier the environment, the more complex the problem. That's when we need field experiments even more because they allow us to chop up problems in finer ways, and they allow us to identify causal relationships using different assumptions.

Wow. That's pretty compelling. But how well do these experiments or the results of these experiments tend to generalize to a wider population?

Yeah, that's a good question. Let's be clear though. Any empirical result faces the generalizability question. Whether it's data from an experiment or data that I download from the internet that comes from the census, all empirical analysis is subject to the question: does it generalize to other settings? We should ask that question, and we should do empirical work to figure it out.

Now, where I am on this issue is that every empirical result is valid for some setting, no empirical result will be valid for all settings, and experimentation and generating new data help us learn. What are the reasons why a result might generalize? What are the reasons why results might not generalize?

Now, my first instinct is always to look first of all at the people themselves. Okay, so I have a result: can I transfer that result across different people? There, I start by thinking about the preferences, beliefs, and constraints of the people. If the preferences, beliefs, and constraints of the people in the experimental setting are similar to those in the target setting, then I'm going to have some confidence. But take a constraint like wealth: maybe I do an experiment in a wealthy community and I'm trying to generalize to a community that's not so wealthy. Well, now I had better have a theory about how the results will change when I go to a community that faces greater wealth constraints. And then I have a prediction from my theory about whether the results will generalize.

Okay, then you talk about whether results will generalize across situations. This is important because in my own data, even though many times people talk about whether these results will hold with different types of people, what I've been finding is that the results more easily change across different situations. The results across people are much more stable than the results across situations. So I think it's always important to consider whether your results will generalize across individuals, but also across the situations that individuals are in.

As an example, I used to be the chief economist at Uber and Lyft. I'm now the chief economist at Walmart. So at Uber and Lyft, we would always look at and explore what's called the price elasticity of demand. The price elasticity of demand tells us how responsive people are to price changes on Uber or Lyft. Now, one thing that would importantly influence the size or the magnitude of that price elasticity of demand is whether there are available substitutes, like rail or bus, that are cheap and that the person can use. That's the situation. Now, the situation changes, and if it changes to having more substitutes, people are much more responsive to price changes. That means we have to be careful when we generalize to those kinds of situations.

That is fascinating. Does that mean in the Uber and Lyft context, is that like surge pricing would be different in a city that has a good subway system or something like that?

Oh, 100%, 100%. So we have a paper that just came out a few months ago that is titled The Value of Time Across the United States. What we do is measure how valuable people's time is by changing both the price and the ETAs that they receive when they open up their app. So when you open up your app, we do experiments on giving you a higher or lower price, and on giving you anywhere from three minutes until we pick you up to nine minutes until we pick you up. Then we can see how people are trading off that time and price valuation, and they will trade it off in very predictable ways based on whether they can just jump on the rail. That's right.

That is fascinating. I missed that paper when we were preparing to speak with you, but that sounds very interesting.

Cool, thanks.

What was the result there? Did you find a number for how people value their time?

Oh, gosh. Yeah, yeah, that's great that you bring that up. So what we find is that people value their time at about $19 per hour. It varies predictably with things like whether it's a morning or afternoon commute, whether it's a business traveler, whether you're going to the airport, whether you have a close substitute near you. Now, what's interesting is that $19 is much larger than the value the federal government uses to value public projects. Because remember, every legit government is doing benefit-cost analysis. One thing that they do benefit-cost analysis on, of course, is infrastructure and spending on bridges and internet, et cetera. A lot of these projects and services have time savings, so they have to attach a dollar value in terms of benefits.

What they typically do is take roughly half to three-quarters of the local wage, and that's the value of time that they put on it. We find that it's much higher than that. What that means is that the federal government is undervaluing infrastructure and new projects that save people's time. That's the kind of thing we couldn't have found out if it wasn't for the field experiment. So I could have easily used that as the example of what we can learn from field experiments. I needed the randomization in time and price to learn about the value of time.
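The identification idea here can be sketched with back-of-the-envelope arithmetic. The prices and ETAs below are hypothetical, chosen only so the indifference point lands on the $19/hour headline figure from the discussion above:

```python
def implied_value_of_time(price_premium, minutes_saved):
    """Dollars per hour implied when a rider is indifferent between a
    pricier-but-faster option and a cheaper-but-slower one."""
    return price_premium / (minutes_saved / 60.0)

# Hypothetical indifference point: the rider will pay $1.90 more
# to be picked up in 3 minutes instead of 9 (saving 6 minutes).
v = implied_value_of_time(price_premium=1.90, minutes_saved=6)
print(f"implied value of time: ${v:.2f}/hour")
```

In the actual experiments, randomizing price and ETA independently is what lets researchers trace out this tradeoff causally rather than inferring it from whoever happens to pick the faster ride.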

I've got one more question, because this is something we think a lot about, time and money. Were you able to control at all for income levels in that experiment?

Oh, absolutely. So we have census block income data, and you find that people who have more wealth or higher income levels have, of course, a much higher value of time. What's great about these data is that exactly what economic theory would predict is exactly what we find, and that doesn't happen a lot. You've read enough of my studies to know this. Sometimes economic theory does well. Sometimes it doesn't do that well, and we need behavioral economics to kind of trim things up a little bit and learn whether it's myopic loss aversion or another type of amendment. Sometimes we need to do that. In the value of time data, you don't need to do much trimming up. Neoclassical theory, in the way we think about the value of time, does a pretty good job.

Unreal. That's a unique data set and a very, very interesting experiment that lines up with the theory, which is always nice. How important is replication in experimental studies?

Yup. The one part about being an academic that has been a little bit disappointing is that we have not placed as much weight on replication as we should. When I go back, I'm sort of a student of the history of both economics and the experimental method.

One of the very early pioneers of experimentation was a guy named Ronald Fisher, and Fisher did some work in the '20s and the '30s where he would randomize manure types across plots of land in the UK. Fisher deserves a lot of credit because he was one of the original pioneers in thinking about how you can use randomization to learn about a causal treatment effect. But he also talked about block design and replication.

So Fisher's tripod was randomization, block design and replication, and we have sort of taken the randomization and the block design a little bit. We've run with it as scientists, but we've done much less on the replication front. I think that has really been important in terms of how much we've been able to learn from the science and also how much trust public policy makers have in us. Because you and your audience have probably learned that a lot of results, especially within psychology, have been taken to task and a lot of results have been shown to be non-replicable. I've written a few papers myself on studies that are difficult to replicate.

Now, what I've learned in all of this is, one, we need more of it; two, we need to change the way the profession views replications, and we need to value them much, much more highly so people will do them. And three, I would say the guidance for all of you out there is that a result will be more replicable and more likely to be replicated if it has, one, a large sample size and, two, a pretty large treatment effect.

So among the ones that are reasonably well-replicated, in the sense of "is there a result here rather than a null result," the ones that seem to stick the most are the ones with the larger sample sizes, and especially the ones that have both large sample sizes and large treatment effects. Now, it's not that the large treatment effect itself replicates, but the non-zero result, the rejection of the null hypothesis, is more likely to be replicated.

Is there a publication bias for positive results?

Oh, 100%. I think that's one of the big reasons why, when you look at the literature, you have a bunch of results that are significant. They're significant at the 5% level because that's what we claim as a conventional level of significance: we control the false positive rate at 5%. For all of you out there, it's alpha = 0.05. We tend to have people who put studies without significant results in the file drawer. They don't even write those data up for public consumption.

So that's a huge issue because now when you read the literature, you get a very different glimpse of reality than the truth because of what's called the file drawer problem. People don't even waste their time to write it up because, well, why write it up when I can't get it in a good journal? I think we're moving in the right direction, but we're really not moving as fast as we should.
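A quick simulation makes the file-drawer distortion concrete. The true effect size and per-arm sample size below are my own illustrative assumptions; the point is only that conditioning publication on p < 0.05 inflates the average published effect well above the truth.

```python
import math
import random
import statistics

random.seed(1)

true_effect = 0.2              # the real effect, in standard-deviation units
n_per_arm = 50                 # subjects in each arm of every study
se = math.sqrt(2 / n_per_arm)  # standard error of the difference in means
crit = 1.96                    # two-sided threshold for alpha = 0.05

# Simulate 10,000 identical studies; "publish" only the significant ones.
estimates = [random.gauss(true_effect, se) for _ in range(10_000)]
published = [e for e in estimates if abs(e / se) > crit]

print(f"mean effect, all studies:    {statistics.mean(estimates):.2f}")
print(f"mean effect, published only: {statistics.mean(published):.2f}")
print(f"share clearing p < 0.05:     {len(published) / len(estimates):.0%}")
```

The studies are all estimating the same true effect; the only thing the "file drawer" filter changes is which estimates you ever get to read, and that alone roughly doubles the apparent effect size under these assumptions.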

So let's shift to portfolios quickly, specifically your work, Can Myopic Loss Aversion Explain the Equity Premium Puzzle? So I'm wondering: how often should people look at their investment portfolios?

As rarely as possible. So what we find in our research, which really took some of the earlier work from Dick Thaler and colleagues, as well as Uri Gneezy and Jan Potters, and brought it to the field: first of all, do our students and our professional traders have what's called myopic loss aversion? Do they have both myopia and loss aversion? When I say loss averse, I mean you treat losses as much more important than comparable gains. So we took that to the traders, and what we find is, yes, indeed, those traders as well as the students have it. Because a lot of times people say, "Well, that's a lab result with students. The real traders won't have it." I took that to task. What happens is the real traders have it and then some. They're myopically loss averse.

So what does that mean now for everyday investors like me and others? It means don't look at your portfolio. If you need to, I totally get it, but I would say once every three to six months is fine. The reason why I don't want you to look at your portfolio is because when you do and you see losses, even though they're paper losses, you say, "My gosh, that hurts." And you're more likely to move your portfolio out of risky assets and into less risky assets. And as we all know, just look at the data: over long periods of time, and this is the equity premium puzzle, you get much higher returns if you're willing to bear some of that risk. Now, if you look at the account a lot and you have myopic loss aversion, you'll be much less likely to bear that risk. So you'll move out and you'll be in inferior investments. The implication is: look as rarely as you can. It's always good to take a look now and then, but don't punish yourself.
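The mechanism is easy to see with a toy simulation. Assume annual returns are normal with a positive mean (the 7% mean and 16% volatility below are my own stand-in numbers, not the paper's): the shorter the observation window, the closer the chance of seeing a paper loss gets to a coin flip, which is exactly what punishes a myopically loss-averse investor who checks often.

```python
import random

random.seed(7)

MU, SIGMA = 0.07, 0.16    # assumed annual equity return and volatility

def share_of_windows_with_loss(checks_per_year, n_windows=200_000):
    """Fraction of observation windows showing a paper loss when the
    portfolio is checked `checks_per_year` times per year."""
    mu = MU / checks_per_year                  # mean return per window
    sigma = SIGMA / checks_per_year ** 0.5     # volatility per window
    losses = sum(random.gauss(mu, sigma) < 0 for _ in range(n_windows))
    return losses / n_windows

for label, freq in [("daily", 252), ("monthly", 12), ("yearly", 1)]:
    print(f"{label:>8}: a loss greets you about "
          f"{share_of_windows_with_loss(freq):.0%} of the time")
```

The expected return is identical at every horizon; only how often you are confronted with a red number changes, which is why the advice is to check rarely.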

In your data, what effect does frequent checking have on performance?

Oh gosh. I would say a third to a half of the cost comes when people are moving. That's right. And now we have extended that to professional traders as well, where some professionals see constant updating of prices and others only see prices twice a day. And the twice-a-day people outperform the continuous people by a pretty big margin. That bias ends up being real. And it's real for everyone.

Any advice for people, in terms of what they can do to reduce how often they look at their investments?

Yeah, gosh. I would say in that case, you invest money that you can afford to lose. You should never be investing money that you might need for a car payment or a down payment on a home or for rent. It's easier to ignore money when you don't need it tomorrow. So I would say this is the kind of money that you can put away in a nest egg. That's the kind of stuff you shouldn't be worried about and you shouldn't look at. In the end, that's probably the only money you should be investing in risky assets anyway. You shouldn't be investing the stuff that you might need tomorrow. Just look at the meltdown right now in stock markets. It's been truly amazing. When you look at Uber and Lyft and Amazon and Walmart, I have exposure on all of these shares. Yeah, I think that's money that you're putting away in a nest egg.

Yeah. Makes sense. I've got a couple questions about another one of your experimental findings. Do men and women exhibit the same level of risk aversion?

Yeah, that's a good question. Now, there are a lot of differences between men and women, and one of the big ones has been that men are willing to take on riskier propositions than women. That has more or less been a tried-and-true result in the economics literature and in the psychology literature. Now, what I found early on when I looked at mutual fund investors, and this is a paper that was published in 1999: what we find in those data is that it's true that men are willing to invest in riskier mutual funds than women. However, the big reason why is that men had more education around investing, mutual funds, and risky assets than women. And after we controlled for that difference... So, if you want to think about running a regression and controlling for the education differences between men and women, or the knowledge differences between men and women, once you take out that difference, men and women are equally risk loving or risk averse. But it was that variable that was running around that was causing those differences in investment portfolios.

So you're saying there really isn't a difference, once you control for those variables?

In that particular data set, that's true, once you controlled for knowledge or education in investment. Now, the literature presents it as a more general result. And what I mean by more general result is, like, when people buy lottery tickets, or when people make decisions on whether to smoke, or whether to drive around in a vehicle without a seatbelt. In all of those places, men take riskier actions than women. But I'm telling you, in our data on mutual fund investments, once I controlled for that, that difference went away. So the open question is: are there variables in those other settings that, once you control for them... What is it about the difference between men and women? Is it exposure to previous risk? Is it that the man was raised to be a frontiersman and the woman was raised in a different way, and it's the socialization that causes it?

We know it's something that causes it. Now, is that something a variable like education that we can help solve in the short run, or is it something like socialization that needs to change?

Let me give you an example. We went over to a matrilineal society in India, a society where women ran the household. So, we flew into Shillong, India. We got in a cab, and the cab took us out to the villages, and we kept seeing the same sign, the same billboard. And we asked our cab driver, "What is that billboard saying?" The cab driver said, "Ah, it's the men again. They want equal rights. But they're never going to get the equal rights." What he was talking about is that all of the inheritance in these villages runs through the youngest female. And the men were sick of sitting around and being babysitters and breeding bulls. That's literally what they would tell us. So, we ran them through experiments. And what we found was their women took on risk like our men in the Western world, and their men took on risk like our women in the Western world. So, that socialization was extremely important when it came to risk and when it came to bargaining. And that was a very important feature of what people would do in these simple experimental settings.

You got more than you could bargain for there, didn't you?

No, that was awesome. That was awesome. We talk a lot about stuff like the value premium and the debate over whether it's risk based or behavioral. And it sounds like...

I've done a fair amount of work on modeling of asset prices, trying to explore what features are tradable. So on the one hand, you have the efficient market folks, who say that there's going to be very little that's tradable out there. And on the other hand, you have the behavioral types, who basically say everything's tradable. I've done a fair amount of work on exploring, just thinking about using Fama-French-type factors, whether you can innovate and develop tradable strategies. That's a yes, yes and yes.

So, I've explored a fair amount of techniques in statistical arbitrage and whether there are some behavioral principles one can follow to develop tradable strategies. And I would say that's a yes. Have I published any of that? No, because that was for a friend who runs a, let's say, securities trading firm in downtown Chicago. But I think it is fair to say that you can make money off other people's errors.

And there's a fair amount of evidence from the literature on how you could unlock those types of trading strategies. So what happens in the lab in many cases manifests itself in the market. But not always. There are some markets where the marginal trader who's setting prices and quantities is a rational one, and there it's very difficult. That looks like a Fama-type model. But then there are other cases where the inframarginal types might weasel their way in to become the marginal traders. And they might have some behavioral principles. Those are the Thaler types. I do believe there are some of those types in the short run that give you profitable trading strategies.

Interesting. And would that be momentum and stuff like that, or is it something completely different?

That might be part of it. Part of it might be based on loss aversion as well, and on linkages between firms. I think linkages between firms, and how one firm's outcome is intricately linked to others' outcomes, if you think about input-output tables, matter a lot. I think that correlation is much higher than what anyone would have themselves believe.

Fascinating. What drives people to give to charity?

Oh gosh. Yeah, it's the million-dollar question. I think it's two parts selfishness and one part doing the right thing and being an altruist. Here's what I mean by that. There's something called warm glow in the economics literature. That was coined a long time ago by Gary Becker and also James Andreoni. What they were talking about essentially is that people give to a charitable cause because it makes them feel good. I've done charitable research now for 25 years. When I give talks to fundraisers, they always get angry at me when I say that two parts of this problem is that people are just selfish. They're giving to make themselves feel good. I don't see anything wrong with that. As long as you know that as a charitable organization, you can leverage it to help raise more money, to do more good.

It's exactly what we did for the state of Alaska. Alaska has what's called the Pick, Click and Give campaign. Every year, the state gives money to its residents, basically oil revenues, and then the residents can do whatever they want with that money. But that's a point where the state asks: can you give back to your state to improve the natural resources, et cetera? So we did an experiment on exactly what I'm talking about. On the one hand, we asked some people to give because it will make them feel good. On the other hand, we asked people to give because it's going to make the state a better place. Guess what? The first one does a lot better. The first one does a lot better because a lot of people give because they have warm glow. That's great.

Now the other third, when I say two parts warm glow or selfishness, one part altruism. There are those altruists out there who don't really care one way or another about feeling good themselves. They see somebody else in need and they really want to help that person. Sure, they're getting satisfaction from helping another person. But in the end, they really want to help that person. And they want to do well by, let's say, being a good citizen. And when they get to the pearly gates, they want to be judged appropriately.

And then, of course you have on the fringes things like tax breaks and prestige giving. I'm going to give to get my name on a building. All of that, those things are important. But for the modal giver, that's not really an important concern.

For the modal dollar, now, in many cases, that can be a concern. And the reason why is because 40% of the charitable gifts in the US come from 1% of the people. So, there's a very skewed type of giving. And you can go all the way up when you think about, "Okay, John, how far can you push what you just said?" Here's how far I can push it. The top 1,000 wage earners in the US give 13% of the charitable dollar gifts. That's 1,000 people giving 13%. So there's a lot of money going to naming hospitals or naming operas or what have you, and that's important. But the modal giver is primarily giving for warm glow purposes.

So is the motivation to give the same for men and women?

Yeah, that's a great question. Now, what we've observed in the wealth of data that we have is that men are driven more by prices and prestige. So things like tax breaks: if you give me a better tax break, men respond more to the tax break than women will, whereas women are driven a lot more by social pressure, and by altruism as well. So social pressure in the situation plays a really big role in driving female gifts, whereas males march to a different drummer and will tend to be much more warm glow givers than females.

Wow. That's counterintuitive. I wouldn't have... At least in my mind, I wouldn't have guessed that would've been a result.

It's good to have field experiments, isn't it?

Yeah.

We don't have to guess about this stuff anymore. We actually have science about it.

Yeah. Science is good. Are there lessons in the work that you've done on charitable giving, on how individuals can optimize their own charitable giving?

Yeah. I think the one key that people should be very clear about is that they should expect an ROI from their gift. And what I mean by ROI is, I don't mean warm glow ROI. That's part of it. People give because they want that warm, fuzzy feeling inside of themselves. But I'm talking about: I want to give to a cause that can show me that my dollars matter. What I mean by that is, you have a program that is tested and shown to have efficacy. Because there are way too many programs and charitable organizations that are trying hard and their heart is in the right spot, but they have no clue whether their program is doing any good at all. It's a hope and a wish for many of these programs. But just like we can use science to figure out why people give and what are the best ways to bring in more money, we can also use science to figure out: is our program working, is our program scalable? If I bring in more money, can I reach more people at a lower cost?

So, your extra dollars, not only leading to the good stuff, but even more of the good stuff. Because, I'm able to scale something. We can use science to explore which ideas scale, which don't. And, am I making an impact with your charitable dollars? We should demand that, as people who are giving to good causes. Because, in the end we want good to come from our money. So, let's make sure we scientifically test to make sure good is coming from all of these dollars that are going to charitable organizations.

We're going to get more into scaling ideas in a minute, but I want to follow up on that.

Sure.

How do you measure that? Like, if I want to go and make a charitable donation, how do I approach a charity and know whether they can do what you just said?

Yeah. Well, let's just take an example. Let's just randomly take an early childhood center. Let's say that you are interested in giving money to a center that gives free pre-K schooling to three, four and five year olds. Seems like a pretty good cause. So, don't you want to know if the student who goes into that program is going to do better, whatever the measure is? Kindergarten readiness, third-grade test scores, graduating high school more often, going to college, whatever you want for your outcome. Don't you want to know that the dollars that you're giving to provide that service or that pre-K program actually lead to something good? Now, first of all, you need to define: what is the outcome you care about? You might not know that right away. So, you should talk to the organization, the people who are doing the pre-K, and say, "What outcomes are you looking at to make sure that your program is doing a good job for the kids?"

Most organizations will not know how to answer your question. They'll say, "Well, we think we're doing a good job because look, all the students are really happy and the families are happy." That's not good enough for your charitable dollars. You want outcome metrics that matter. Now, let me be clear, before I get a bunch of emails and phone calls from the Make-A-Wish Foundation: there are some organizations where in some cases it's difficult to measure the good that you're doing. In the States here, I've helped the Make-A-Wish Foundation. What they do is, there's a child who's gravely ill and might not have more than two or three weeks left. So, they bring in maybe a sports star, maybe from Canada, maybe it's somebody like Wayne Gretzky, who comes in and talks to the boy for 20 minutes.

Now the critic will say, how can you measure that? Well, I want to measure that in smiles. I want to say how many extra smiles did Wayne Gretzky bring, versus the child next door, who didn't have a Make-A-Wish and didn't have those smiles. And then, we can start to talk. Because, let's say that the child has 50 more smiles. That might be worth it. And that might be worth my money when I give a $100 or $200 to the Make-A-Wish foundation, because I really value the end-of-life care and the end-of-life happiness for kids who are dying way too prematurely. So, I think that if you're creative and you're serious about measuring outcomes, anything can be measured. You just have to decide, is it important for you?

Very interesting. Let's shift to your book, The Voltage Effect.

All right.

What does it mean to scale?

Magic.

There you go.

So think about... Now we're talking Voltage Effect. Now we're putting up the advertisement. Okay, Cameron?

That's all right.

Scale to me is... It worked in the Petri dish; it worked for a certain set of people. But will it work for other types of people and in other situations? It worked great in the initial experiment. If we try to scale that up, do we have a shot at having the same impact as what we observed in the small? So when I think about the voltage effect itself, I think about how the impact changes when we go from the small to the large.

Now, what I would call the first law of scaling is that, typically, the impact goes way down when you scale something. And that's a problem. Because if we have an idea with great promise and we end up trying to scale it up, and it leads to just a small fraction of the great stuff we thought was going to happen, we're wasting a lot of money. And there is example after example after example of cases where we try to scale something and it has a very dismal effect when we do scale it. When I say dismal, I mean only a fraction of what we thought would happen ends up happening.

How important do you think it is... Our ability to scale as a society?

I think it's question number one. When I go around and give academic talks, I stand in front of a group of people and I talk about field experiments exactly as we have. And invariably, there'll be a smart person in the audience who raises their hand. And they say, "John, these field experiments look and sound great, but how come we haven't solved poverty? How come we haven't solved climate change? Why haven't we solved why inner-city schools fail?" Those are all complex topics. But I think one part of the reason why is because we are asking and answering the wrong question when it comes to science. What we do is we tend to do something in the Petri dish. That is an efficacy test. We give our idea its best shot. We do something with all the best inputs and we say, "Look, it worked." And we write it up for an academic journal.

But what we don't do is bring in the constraints and the warts that you're going to face when you scale it, and see if the idea works with those warts in the Petri dish. That's the question we should be asking and answering. Does my idea work when it faces the rigors of the real world? But that's what we never answer. We do an A/B test, and B tends to be efficacy, giving my idea its best shot. We write up the academic paper and we forget to tell everyone else that it was an efficacy test. And we move on to our next idea. And then somebody tries to scale that idea and it doesn't work. And then we wonder, "Well, why didn't it work?" It didn't work because the researcher was not answering the question that you need. You can only make big change at scale. You can't make big and profound change if it's not at scale. That's the bottom line.

I have a follow-up to that. So, you were an economist in the White House back in 2002, I believe.

That's right.

Has the dramatic change in technology had an impact on society, or government's ability to scale to solve some of these big problems?

Yes. It's a great question and a great point. I think when you look at the changes that we've observed in the area of technology, it's made some ideas more scalable. So when you think about it, in the past there might have been a great idea that just wasn't scalable, because maybe you didn't have the human power, the man or woman power, to pull it off. Say you didn't have enough computer programmers to pull it off. Guess what? Now AI or machine learning can substitute for some of that human power and make ideas that once were not scalable much more scalable now. I also think the ability to generate new data and analyze mounds and mounds of data gives us a lot more information about ideas and programs, and that makes it easier to choose which ideas we should scale. So, technology is always working in our direction.

In a few years, there aren't going to be any excuses for scaling ideas that don't work. Because in the past, let's face it, scaling was art. It was simply art. Move fast and break things. Fake it till you make it. Throw spaghetti against the wall; whatever sticks, you cook it. That's art. And that's exactly what governments would do as well, when they were trying to determine which ideas or policies we should scale. They were flying blind. There was no science. Books like mine are now saying, "Look, there's a science to scaling." You no longer have an excuse for choosing ideas out of gut feelings or things like business mantras that make no sense. Fake it until you make it. What the hell is that? That's complete art. That's nonsense. Art is great when it's behind you or hanging on the wall. But art's not so great when it's dictating where billions of public dollars and private dollars are invested to scale. That's not a place for art. That's a place for science.

Yeah. Two great examples you talked about in your book had to do with, I forget if it was Lyft or Uber... But, the tipping example and the apologizing, that story of you going to your speech and ending up back at your house. Those are two great examples where the best solution was not the obvious solution, in both those cases. My question is this from a public policy/government standpoint, we're all walking around with these cell phones, capturing unbelievable amounts of data. Is there an opportunity for government within privacy confines and rules, to use some of that data to make great public policy decisions?

Oh, I think so. So, let's first of all note that I agree with you on privacy. It's very important to get the privacy figured out. In my experiments I never can link a person's choice with their identity. In fact, I never want to even know the identities of people. What I want to know is, for example, if you're taking an Uber from downtown Chicago out to O'Hare airport, what happens when I change prices by 10%? And that's something I can then look at and see how people respond. But now let's backtrack and go to the government. So, I oftentimes hear people say that data is the world's most valuable resource and it's the new oil. That's what you oftentimes hear. I think that's wrong. I think that data only has value if you have a good data refiner, and the data refiner is the person or the machine that can take mounds and mounds of data, make heads or tails of it, and lead to better decisions. Then, when you can't see something causal in the data, generate new data to make better decisions. It's the data refiner that is valuable. Data's pretty cheap these days, data's very accessible. You're right, government can get data, but I can get data as well by buying it from people.

And data is ubiquitous, let's face it. And you're right, some firms and organizations have more access to data. You're right about that. Facebook and Google and Uber and Amazon and the government, they have more access for sure. But in the end, even they need a good data refiner, and without that, you're done. So to me, the government should be investing in better data refineries, and I'm talking about people here, who can use the data that they have for good. And that's really not what they're doing. Why not have a scaling unit, for example? Every idea that the government wants to scale should go through what I would call the scaling unit and go through the five vital signs that I talk about in the book and make sure that it's not a false positive, for example, and that the idea has a chance.

It's not that hard. So in that way, data... They're important, don't get me wrong. But the perfect complement is the data refiner. You can have all the data you want, but if you don't know how to examine it and analyze it and make inferences from it, you're dead. And that's the scarce resource here, let's be honest: a person who does a really good job with data. That's why you're talking to me, because I can do a reasonable job at not only generating data, but also examining existing data. Because the examples that you just used, Uber tipping and Uber apologies, took both field experiments and looking at the mounds and mounds of data that Uber had on hand, and analyzing it in an informative way.

You implicitly answered this question, but I'm going to ask it still. What role do economic field experiments play in scaling big ideas?

I think multiple roles. I think, first of all, field experiments can help us explore whether the idea works to begin with. Does your idea have voltage, does your program work? I can't think of a better way to figure out whether an idea works than a field experiment. Then step two would be to say, "Who does it work for?" It worked for that group, but does it work for... If it worked for people in Ottawa, does it work for people over in British Columbia or in Toronto? For whom does it work? Field experiments can give us that answer.

In what situations does it work best, and in what situations does it have problems? Again, this is exploring the features of the situation to see if the idea has more voltage in some than others. You can talk about spillovers. A big part of the book talks about how every idea has some kind of spillover. How can you measure and test for spillovers? Sounds like a field experiment to me. So in the end, you can generate data. You can figure out the whys behind why is this working? And that really comes from field experiments and the experimental approach.

How do confirmation bias and herding get in the way of scale?

That's very good. In chapter one, I talk about false positives. False positives are simply: you ran a study and the data told you that there was some voltage there, there was a result. Really, it's the data that are lying. So the data are simply lying to you. And if you took another draw of the data, the data would begin to tell the truth. So now things like confirmation bias, and let me define it for the listeners. It's: if you have a feeling that a program will work or an investment strategy will work, you think, "I'm a momentum trader," and then you end up going to do research on momentum trading. And every time you read a study that says momentum trading is good and here's why, you say, "Look, this is really good." And every study that's critical of momentum trading, you say, "That's not true. The data aren't right. The person's irrational."

Confirmation bias is: every time you get a new bit of information, you believe the stuff that confirms what you believe, but you dismiss the information or the new data that is at odds with what you believe. So the minute you allow... We're all human, we all have confirmation bias. But the moment you allow that to creep in and play too big of a role, you're much more likely to scale an idea, a trading strategy, an investment strategy, whatever, a course in life that's wrong. That is simply a wrong path to take. And it was simply because your critical thinking skills were compromised and you fell prey to confirmation bias. Now, you also mentioned herding. I've done a fair amount of research on herding. And we brought it up earlier, about work on investment.

What I found in the herding work... First of all, let's define what that means. Let's say there's a stock that you could buy, and you're not sure whether you want to buy it or not, but you see four or five other people claim that they bought it. And you think, wow, I wasn't sure if I should buy it, but they're smart. I'm going to jump on. And that's herding, because in some cases you're even ignoring your personal information to jump on the bandwagon or to join the herd, to get in on it. Partly you have FOMO, of course, but the other part is, I'm going to jump in with the herd because I want to make some money. So, herding ends up being a very important financial concept as well, because in many cases it's also called an information cascade. So when we have incomplete information about something like an investment, if we observe others doing something, that plays a lot bigger role in what we will do than it should.

Because we end up ignoring our private information and following the herd. It then ends up being a very detrimental strategy. So we should always take care not to have confirmation bias when we think about investments, and we should take great care in thinking about: am I just following the herd here? Do I have private information that I'm not taking advantage of? Do I really want to ignore my own information and follow the herd here? Sometimes, yes. Don't get me wrong, it's not always wrong. But we show in our research that a false information cascade can form very easily once people start observing what others are doing, and that ends up leading us down the wrong path of false positives.
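The cascade dynamic John describes can be sketched as a toy simulation. All parameters here (70% signal accuracy, 40 agents, the buy/pass framing) are invented for illustration; the structure loosely follows the classic sequential-choice cascade model, where each agent follows the visible majority and ignores their own signal once the herd's lead is big enough.

```python
import random

def run_market(n_agents=40, p_signal=0.7, seed=None):
    """Toy information cascade: the true state is 1 ("good asset").

    Each agent gets a private binary signal that is correct with
    probability p_signal, observes every previous buy (1) / pass (0)
    decision, and follows the observed majority; ties or a one-action
    lead are broken by the agent's own signal.
    """
    rng = random.Random(seed)
    true_state = 1
    actions = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < p_signal else 1 - true_state
        buys, passes = actions.count(1), actions.count(0)
        if buys - passes > 1:        # herd up: private signal ignored
            action = 1
        elif passes - buys > 1:      # herd down: private signal ignored
            action = 0
        else:                        # no cascade yet: follow own signal
            action = signal
        actions.append(action)
    return actions

# Across many simulated markets, most lock into a cascade within a few
# agents, and a noticeable fraction lock in on the WRONG action even
# though every private signal is individually 70% accurate.
wrong = sum(run_market(seed=s)[-1] != 1 for s in range(2000)) / 2000
print(f"fraction of markets herding onto the wrong action: {wrong:.2f}")
```

The point of the sketch is the "false cascade" John mentions: once two early agents happen to draw bad signals, everyone after them rationally piles on, and the herd never recovers.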

Unreal. What do investors need to understand about the winner's curse?

The winner's curse is fun. The winner's curse is along the lines of... Look, let's say we're all in an auction and we're all bidding on something. Let's say it's a bid to drill on a government tract of land for oil. You probably have these in Canada. In fact, I know that you have people bidding in auctions for softwood lumber. That's one thing I worked on in the White House, by the way: the softwood lumber trade dispute. And that was... You have bidders on various tracts of land, bidding to have the right to cut down the softwood on various plots of land. So you can think about that, or think about oil drilling, whatever. What you have now is you have an expert tell you how much oil they think is in the ground.

And based on that reading, whatever the geologist told you, that there's a set amount of oil, you take that information and you bid in the auction using that information. Now, the winner's curse is, of course: we all put our bids in, and the person who wins is the person who bids the most. But unfortunately, what that also means is that there's information in the fact that you won. The information is, you probably had the most optimistic bid of everyone. And that probably means that you were overly optimistic. So the winner's curse is: you won the auction, but at the end of the day, you're going to lose money, because you just got the most optimistic geologist, who overestimated the amount of oil that was on that tract of land. So the winner's curse is something we should always take account of when you buy an asset: the fact that you are the highest bidder means you valued it the most.

If everyone else has their own information, you should always recognize that there's information in the fact that you won. And in that case, you can say, "Well, John, that sounds intuitive. How do I get around that?" You should always take your value and shade it down. I've done a fair amount of experimental work on this, and you should shade it down based on the number of other bidders in the auction. So if you know there are only a few other bidders, you should shade it down a lot. And with a zillion bidders, shade it down even more. Because the more bidders there are, the more information there is in the fact that you won the auction. It means you got really optimistic. So the more bidders there are, the more shading you do-

Wow.

... in your bid.
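The winner's curse logic above lends itself to a few lines of simulation. All the numbers here (true value, noise level, bidder counts) are hypothetical; the point is just that when every bidder naively bids their own noisy estimate, the winning estimate is biased above the true value, and the bias grows with the number of bidders, which is why more bidders should mean more shading.

```python
import random
import statistics

def winners_overestimate(n_bidders, true_value=100.0, noise_sd=10.0,
                         n_auctions=20000, seed=0):
    """Average amount by which the WINNING estimate exceeds the true
    value when every bidder bids their own noisy, unbiased estimate."""
    rng = random.Random(seed)
    overshoots = []
    for _ in range(n_auctions):
        estimates = [true_value + rng.gauss(0, noise_sd)
                     for _ in range(n_bidders)]
        overshoots.append(max(estimates) - true_value)  # winner = max bid
    return statistics.mean(overshoots)

few = winners_overestimate(n_bidders=3)
many = winners_overestimate(n_bidders=30)
print(f" 3 bidders: winner overpays by ~${few:.1f} on average")
print(f"30 bidders: winner overpays by ~${many:.1f} on average")
```

Even though every individual estimate is unbiased, the maximum of many estimates is not; with 30 bidders the winner's overshoot is roughly double what it is with 3, matching the "zillion bidders, shade down even more" advice.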

So fascinating. What causes good ideas to fail at scale?

For the time being, let's put execution on the sidelines, because the second half of the book talks about execution. The first half of the book is more about: are there signatures or traits of an idea that will invariably cause it to fail? And that's where I talk about the five vital signs of an idea. So anytime you have an idea, or you think about investing in a company and they have an idea or a service or a good, one way to think about it is: well, how big can this thing grow? The front half of the book talks about the five vital signs. Is it a false positive? What's the extent of the market? What are the situational features that this idea will work in? Does it have good spillovers? And then lastly, does it have a good cost profile? So what's the supply side of scaling?

And on the supply side, the government doesn't concern itself that much with the supply side, but in the business world you have to. Because that's how you get bigger and bigger market share: getting what's called economies of scale. Once you have an idea that checks all five of those boxes, you have a chance. Now, if you have an idea that only checks maybe two or three of the boxes, you should step back and say, "Do I still want to scale this?" Maybe you do. Or do you want to say, "Those features where I'm deficient, how can I change my idea to make those deficient features actually good characteristics?"

So the book points out where you might have a failing idea, but that doesn't mean you don't scale it. Look at my dad and brother, one man, one truck, one good life. They're truckers and they drive their truck, and their secret sauce is they can talk to farmers or people at the green. But their comparative advantage isn't to have a fleet of trucks. They're not good at that. But they know one man, one truck. I'm not going to try to scale this, but I'm still going to have a good life. That's okay. The book is also for those types who want to scale to a one-man operation or a one-woman operation, or they want to think about ideas to invest in. The book speaks to both of those types of people.

How does loss aversion affect decisions on the way to scaling an idea?

Loss aversion is something that we should always recognize in ourselves. And we should always recognize that if you're a manager, loss aversion is a very important feature of people's preferences that you can leverage, to not only make the person who has loss aversion better off, but also to make your firm better off. So again, loss aversion is the tendency for people to weight losses more heavily than comparable gains. And our data suggest that $1 of gain is about equivalent to $1.56 of loss, so to speak. Losses are felt about 56% more.

So we can recognize that when setting up incentives for people, and when thinking about ourselves, about how loss aversion affects our investment profiles, let's say. That's one reason why I say, "Don't look at your portfolio very often." Because if you have loss aversion, that's going to cause you to invest in less risky assets. And that's a bad thing, because risky assets go up and down a lot more than something stable. So when you look, you're much more likely to see a loss when you have a risky asset, and that's going to cause you to disinvest in that or reallocate from that asset. And I don't want you to do that, because in the long run, you're better off in equities.
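As a rough sketch of why a loss-averse investor benefits from checking less often, here is a toy simulation using the $1.56 loss weight mentioned above. The return parameters are invented (loosely equity-like monthly numbers, not from any real data); the mechanism is that the same positive-drift return stream shows losses far more often when sliced into short horizons.

```python
import random

LAMBDA = 1.56  # losses weighted ~56% more heavily than equal gains

def felt_value(r):
    """Prospect-theory-style felt value of a return: losses hurt more."""
    return r if r >= 0 else LAMBDA * r

def loss_frequency_and_feel(horizon_months, mu=0.007, sigma=0.045,
                            n_periods=60000, seed=1):
    """Simulate i.i.d. monthly returns, aggregate them into horizons,
    and report how often a look shows a loss, plus the average
    loss-averse 'felt' value per look."""
    rng = random.Random(seed)
    monthly = [rng.gauss(mu, sigma) for _ in range(n_periods)]
    chunks = [sum(monthly[i:i + horizon_months])
              for i in range(0, n_periods, horizon_months)]
    p_loss = sum(1 for r in chunks if r < 0) / len(chunks)
    feel = sum(felt_value(r) for r in chunks) / len(chunks)
    return p_loss, feel

for months, label in [(1, "monthly"), (12, "yearly"), (60, "5-year")]:
    p_loss, feel = loss_frequency_and_feel(months)
    print(f"{label:>7} checker: sees a loss {p_loss:.0%} of looks, "
          f"avg felt value per look {feel:+.3f}")
```

With these assumed parameters, a monthly checker sees red roughly four looks out of ten, while a five-year checker rarely does, so the frequent checker's loss-weighted experience of the identical portfolio is far worse.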

How does marginal thinking facilitate scale?

I would say that the most common mistake I see in organizations, whether it's government or a firm, is that they don't use marginal thinking. So let me tell you what I mean by marginal thinking. Because a lot of us hear economists say, "Think on the margin." If you have taken an Econ 101 class, you hear people say, "Economists think on the margin and you should too." So here's what I mean by thinking on the margin. Let's think about an example from Lyft. When I was the chief economist at Lyft, there was what was called a driver acquisition team. That driver acquisition team was in charge of bringing new drivers onto the platform. At Uber, we also had a similar driver acquisition team. Here's how they would do it. They would place ads on either Facebook or Google to attract new drivers.

And what they were considering is: where should we spend our new dollars to attract more drivers? They looked at the data on Facebook ads, and here's what it said. It said, "For the last 1,000 drivers, it cost us $300 each to attract those 1,000 drivers using Facebook ads." Then they looked at similar data when we used Google ads, so ads that we placed on Google for new drivers. And there, they said, "The last 1,000 drivers that we've hired actually cost $400 each on Google." So they said, "We're going to spend the new money on Facebook ads, because 300 is lower than 400." Makes sense. So I said, "Before we do that, tell me a little bit about the last 25 drivers that we've gotten from the platforms." They said, "Well, we don't have that." I said, "Go ahead and get it and send me an email tonight." So they did. They sent me an email and here's what it said: "The last 25 drivers we've acquired from Facebook ads cost $1,000 each, but the last 25 drivers we've acquired with Google ads only cost $500 each."

Come on.

Now what happened was, the marginal thinking led to a very different decision than the averages were leading to. So they said, "Professor List, we're going to spend the new money on Google ads, and we wish we could go back in time and take some of that Facebook money and move it to Google, because we would've hired twice as many drivers, just looking at the last 25." That's marginal thinking: it's taking thinner slices of data to make a decision based on where the data are going, rather than big cuts like you do with averages. A lot of times people say, "Big data, the more data, the better." It's not true. It's only true if those new data that you're adding are indicative of where the data are going. And if you're adding really old data onto something, that's not very helpful, because the world changes. And I want to know what just happened yesterday, not what happened in 2016. So, that's what I mean by marginal thinking. And I talk a lot about the sunk-cost fallacy.

A lot of times people... If you invest in a company and you've lost a bunch of money and you say, "Well, I've invested a lot so far, I'm not quitting now," whether it's an investment or a job, that's sunk-cost reasoning. If you're thinking on the margin, you ignore those sunk costs and you think about going forward. You think: if I didn't have that stock in my portfolio now, would I buy it? If you wouldn't, then why do you have that stock in your portfolio now? Transaction costs are close to zero these days. It used to be the case when I was younger that we'd have to pay like $150 to make a trade for a hundred shares. And then it was like, well, the churn's going to kill me. That's not the case anymore. You can trade for free. And now, if you can't look yourself in the face and say, "If I had new money, would I buy that stock?" you should ask yourself, why is it in there? And what am I doing? People don't think on the margin as often as they should.
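The Lyft ad-spend story lends itself to a small numerical sketch of average versus marginal cost. The cumulative spend curves below are fabricated so that the last-1,000 averages come out near the $300 (Facebook) and $400 (Google) figures from the anecdote while the last-25 marginal costs come out at $1,000 and $500; every other number is invented for illustration.

```python
def cost_per_driver(cumulative_spend, window):
    """Given cumulative spend indexed by driver count (spend[i] = total
    dollars spent to acquire the first i drivers), return the cost per
    driver over the last `window` drivers."""
    total = cumulative_spend[-1] - cumulative_spend[-1 - window]
    return total / window

def build_spend(n, cost_of_ith):
    """Build a cumulative spend curve; cost_of_ith(i) is the cost of
    acquiring the i-th driver."""
    spend, total = [0.0], 0.0
    for i in range(1, n + 1):
        total += cost_of_ith(i)
        spend.append(total)
    return spend

# Hypothetical curves: Facebook was cheap early but got expensive at
# the margin; Google the reverse.
facebook = build_spend(1000, lambda i: 282.05 if i <= 975 else 1000.0)
google = build_spend(1000, lambda i: 397.44 if i <= 975 else 500.0)

for name, spend in [("Facebook", facebook), ("Google", google)]:
    avg = cost_per_driver(spend, 1000)       # the "big cut" average
    marginal = cost_per_driver(spend, 25)    # the thin marginal slice
    print(f"{name}: last-1,000 average ${avg:.0f}, "
          f"last-25 marginal ${marginal:.0f}")
```

The averages say spend on Facebook ($300 < $400), but the thin marginal slice says the opposite ($1,000 > $500), which is exactly the reversal the acquisition team saw.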

We've asked that exact question of people many times. Are there other ways that individuals making personal finance decisions can apply marginal thinking?

I think a lot about it along the lines of quitting too, because marginal thinking also bleeds into: when should you pivot? When should you pivot from a certain investment, or when should you quit? So in chapter eight, I titled that chapter "Winners Quit." And what I mean by that is... So, I was raised in a blue collar family in Wisconsin. And I was taught that winners never quit and quitters never win. That's the Canadian way too, right?

Yeah.

You don't quit. You simply don't quit. If you typed in "quitting" and "inspirational posters," you would have enough posters to cut down all of the wood that you have up in Canada, and you still wouldn't be able to produce every one of those posters. Society tells us that quitting is repugnant, full stop. So one reason why we don't quit or pivot enough is because society tells us it's repugnant. The other reason is because we ignore our opportunity cost of time. Let me unpack that, because that's a lot of economics. Here's what I mean by that. I ran a survey asking questions of people who had recently quit their jobs. And I said, "Why'd you quit your job?"

Reason number one: I lost my appreciation of the meaning of work. Makes sense. We all have a meaning of work and that's important to us. Number two: my boss didn't give me the promotion. Number three: my boss didn't give me the pay raise. Number four: I got cross with a coworker. Et cetera, down to number 10: I didn't like my cubicle anymore. Every reason was: my current lot in life got soiled. Something bad happened in my job and I wanted to move. That is just indicative that people ignore their opportunity set. We should be just as likely to move jobs when our opportunities get better. Look around. Remember, before I told you, don't look at your investment portfolio very often; here I'm telling you, periodically look around and see what new jobs have come up. What new apartments have come up? What about moving to a different city?

This chapter unpacks the science behind why we don't quit enough, and a big reason is, we neglect our opportunity cost of time. We neglect. When I was a chief economist at Lyft, that meant I couldn't be the chief economist at Walmart. And when that opportunity presented itself, I said, "Look, my opportunity set has gotten so much better, I'm going to leave Lyft." And it was hard to leave Lyft. I left Lyft not because it got soiled, not because I got mad at the CEO. It was great working at Lyft. It was great working at Uber, but my opportunity set got better. So I took it. We don't think about opportunities in the right way. We ignore our opportunity cost of time, and we ignore that if we're in stock A that means we can't be in stock B. And if you think about it as an opportunity cost way or marginal way of thinking, that will change the way that you view life and you view investments and you view your retirement portfolio in very important ways.

This next question comes from my experience reading your book. You've had an incredible career, and the book is bookended by your time at the White House and your work with, arguably, some of the biggest datasets in technology at Lyft and Uber. So my question is this: why are big long-term issues like climate change... This is what came to mind reading the book. Why are they so hard to tackle?

I think two reasons. One, because they're so multidimensional that it's been hard for people to slice off pieces. I think the second reason is, it's become very political. And when you look at things like COVID... Look, we did a great job with the polio vaccinations and they scaled brilliantly. COVID vaccinations didn't because they became political very quickly. And once you become political and you become polarized, you have groups that will say, "I don't care at what price, I'm not going to do it." And the other reason is, I think with science, we failed. As scientists we failed to answer the question, for example, about poverty alleviation. We failed to give solutions that can work at scale, because we give solutions that work in the Petri dish. Those aren't going to change anything, because the features, populations of people and situations change so much when you scale that we've asked and answered the wrong question.

So those big issues: discrimination, public schools failing, climate change, poverty alleviation, why don't we have more giving to charity? We need to break these up into pieces. We then need to solve the pieces for what we're going to face at scale, and those are the problems that we need to solve. Not the ones where I just want to create an efficacy test because it's a good test of theory. That's great. It's great to test theory, but if you want to produce a result that will change the world at scale, you have to take a different mindset and you have to ask the question in a different way. And I think that's part of the reason why we don't have viable solutions. But of course, as I mentioned, it's multidimensional and we've failed on a lot of those dimensions.

Would you lump saving for retirement as one of those big issues to solve?

Yes and no. And here's why. Retirement's similar to climate change in this particular way: you have to absorb a cost now and you receive benefits in the future. These types of questions, like why does a 16-year-old drop out of school? The 16-year-old drops out because the cost of going to school is right now and the benefits aren't for 10, 15, 25 years. That's the climate change problem, that's the retirement problem. That's a human error in logic and thinking that we all have: we don't weight the future enough, in part because we say, "I'll worry about the future when I get there." And in part, with the climate change problem, we say, "It's somebody else's problem. It's not mine." Now, the reason why I say that climate is an altogether different animal is because you, yourself, can solve your retirement problem by overcoming this discounting problem that I mentioned. If you overcome the discounting problem in climate change and no one else does, it's not solving anything, because your marginal impact as an individual is very different on the climate issue. Now, where they are the same, I think, is beyond the mindset. It's also that science can be used to alleviate a lot of these problems. And look, we've done a fair amount of research on ways to nudge people to save more and ways to nudge firms to help people save more. And that savings problem is a lot easier to solve than the climate change problem, because of free riding and other economic issues that are associated with public goods problems.
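To make the discounting point concrete, here is a minimal sketch (an editorial illustration, not from the episode; all dollar amounts and parameter values are hypothetical) using the standard beta-delta model of present bias, which shows how a save-now, benefit-later decision can flip:

```python
# Illustrative sketch: why "cost now, benefit later" decisions like saving
# for retirement lose out under present-biased discounting.
# Uses the standard beta-delta (quasi-hyperbolic) discounting model.

def present_value(benefit, years, delta=0.97, beta=1.0):
    """Value today of a benefit received `years` from now.

    delta is the per-year exponential discount factor; beta < 1 adds
    present bias, an extra one-time haircut applied to all future periods.
    """
    if years == 0:
        return benefit  # money today is not discounted at all
    return beta * (delta ** years) * benefit

cost_today = 1_000      # hypothetical: contribute $1,000 to retirement now
benefit_future = 3_000  # hypothetical: its value at retirement, 30 years out

# A patient exponential discounter (beta = 1) sees a net gain and saves...
pv_patient = present_value(benefit_future, 30)           # roughly $1,203
# ...while a present-biased saver (beta = 0.6) sees a net loss and skips it.
pv_biased = present_value(benefit_future, 30, beta=0.6)  # roughly $722

print(f"patient saver:       net {pv_patient - cost_today:+.0f}")
print(f"present-biased saver: net {pv_biased - cost_today:+.0f}")
```

With beta = 1 this collapses to ordinary exponential discounting; beta < 1 is one common way economists model the "I'll worry about the future when I get there" error described above, since every future payoff takes an extra haircut relative to today.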

Interesting. Can incentives help, or are nudges just a repackaging of incentives?

I think nudges are one kind of incentive. So, let's stick with the climate change issue. And let's say that we're worried about households' contributions to climate change. It's a big issue. So, here's what we found in terms of nudges and prices to try to tackle that question. We went door to door trying to get households to adopt CFLs. CFLs are good and we're trying to get households to adopt them. What we found is that to get the household to adopt the very first CFL, you can use social nudges and they're very effective. What do I mean by that? You say, "Look, you live in this neighborhood. 70% of households in this neighborhood have CFLs in their light sockets. You should too." That's very effective. That's a kind of social nudge. Now, if I want that household to adopt the second and third and fourth CFL bulb, that's where prices are helpful.

So, in this way, nudges and non-financial incentives work to get people to adopt the very first time, but to get deeper adoption, I need prices or subsidies. And in that way, nudges, which you can call behavioral economics, are really good complements to neoclassical economics, or how prices affect adoption. So, I think that's a problem that I'm trying to tackle with climate change there. And for any problem, I think you'll need both. To have the solution in the most cost-effective way, you're going to end up needing both financial and non-financial incentives.

Cameron, you look skeptical.

No, not at all. I'm soaking this in.

Does it make sense?

Yeah. It makes complete sense to me too.

Good. Yeah. That's a paper we have that's titled, "How Many Economists Does It Take to Change a Light Bulb?", if any of your listeners want to hear about it or read it.

That's a good title. What do you think has to happen for science to play a bigger role in policy?

It's a really good question. I think in the US, we took a hit in the last presidency, and we took a hit because we were criticized for being out of touch, and maybe because of the misinformation campaign. If scientists said something different than what the president wanted, he would say it's fake news and it's fake science. And when you start out in that direction, you have many people in the electorate becoming more skeptical about science. Now, I think where we came back a little bit was that we relied on science to generate great vaccines. And the scientific approach was what everyone was leaning on, because we said, "We're in a predicament here. We need science to help us." Now, I think what came out of that is that the hard sciences, in the area of medicine, are in a really good place, and the social science side, such as getting people to wear masks and nudging, et cetera, is not in such a great place, because we did not have society accept or agree on, "What are the general approaches that we should be taking as social scientists to pull our weight and help move society to a better place?" So, I think in a way it showed me the fractures that we have, not only in replication, but in scaling: you had an idea that worked in Louisville. What's the science to scale that thing? We really don't have a good response to that. So, I think the way to develop more trust, and to make sure that we're an important player on the scientific bench, is to handle the issues that were clearly shortcomings in terms of what we know and what we need to know. And in particular, I think there's a big role to be played in the science of scaling, so that policy makers will know: this idea works really well for these kinds of people and in these kinds of situations, and it doesn't work very well in other kinds of situations. So, we need a new solution for those.

And where exactly can we scale this to? Anytime you're dealing with individuals, there is always a case of heterogeneity and there's always a case that the one size fits all will never work across individuals in different situations. In many ways, that's how we're different than when you do experiments with non-humans or let's say some medicines that naturally work with all human bodies. Think about the polio vaccination. Salk tried it out on his kids. It worked. He then tried to make sure it wasn't a false positive. So, he tried it out on other people's kids. It worked. And then he found out that it worked for all kids and in all situations. That was great. So, then what they had to solve is, "How are we going to get into people's arms?" And they solved it by making it part of the healthcare system.

When you have a child and the child's born, they get vaccinations. You bring your child back in six months, they get vaccinations. At a year, vaccinations. And it's fluid. And it's costless in a way, because you're going to bring your child back for the checkup anyway. So, you're getting something kind of for free. We need to do a better job figuring out those features with COVID and other governmental programs. And anytime you deal with humans, it's harder. There aren't laws like we have in physics. Our best laws are things like the law of demand, which is obvious, or the law of comparative advantage, or the scarcity laws, et cetera. But these are like, "Well, sometimes they don't work." It's not like a physical constant or a quantitative law. And anytime you deal with humans, that's what we have. And that means we need more science to figure out: what are the boundary conditions to that law? And within those boundary conditions, what do we need to do to help people?

Fascinating. So, here's a question on your work around critical thinking skill formation. We are all parents. What can parents do to promote critical thinking for our children?

Yes, I'm glad you brought up that paper. So, that was a recent paper that I wrote on critical thinking, and critical thinking really runs through The Voltage Effect as well. Critical thinking has been what we've all done for the last hour and a half. That's what the three of us have been doing: talking through problems, using our critical thinking skills. Critical thinking, to me, is the most valuable resource we have as humans. And it's the resource that transfers across all boundaries. I just got back from an international book tour, and my critical thinking skills worked in Germany, they worked in Copenhagen, they worked in Oslo and they worked in Helsinki. They work everywhere. Critical thinking is the world's ultimate renewable resource. We should take great care to make sure that when we teach our own kids, or our own kids go to school, they're getting critical thinking skills and we're developing those critical thinking skills. Now, your question is, "That's great, John. How in the world do we do it?" First of all, I say we need to define what it is. What is critical thinking? To me, it has two parts. One is being able to think in an abstract way, to think through problems and put cognitive biases like confirmation bias and egocentric bias on the sidelines. The other is to use data, what I call the concrete or empirical part of critical thinking skills: how do we use data to update our beliefs? How do we use data to make informed decisions? Now, as humans, we like to think fast and take shortcuts. That's just our way of doing things. To teach this to your child and at school, we need to get people to slow down. People don't like to slow down, because thinking critically is costly.

It's not easy to be a good critical thinker. You need to slow down and think through, "Is this relationship causal? What are the data that underlie it? Is it indeed a causal relationship? Can I put myself in the shoes of somebody else?" A very important exercise amongst kids is teaching them theory of mind: how able are kids to put themselves in the shoes of others and see the problem from a different viewpoint? Many adults have a problem doing that. So, in that paper, I talk about ways that we can teach our children and our students to develop slow thinking skills, because once you do that, it becomes a habit. So, I say, step back and ask the question: where's the evidence coming from? Does whoever created the evidence have an incentive to create evidence in their favor, or is the evidence truly objective?

So, you walk through these sets of questions. And if we get those in the back of our minds as a habit, we've developed pretty good critical thinking skills. So, I developed a critical thinking hierarchy that goes all the way from what I call the modal thinkers. Roughly 70% of thinkers are rule-of-thumb thinkers. They argue by example, they take a bunch of shortcuts and they have heuristics. They hear President Trump saying something and they think it's the truth because it sounds right. And they don't engage in what I call being an adept thinker or a great thinker, because it's hard and society doesn't reward critical thinking skills at the lower levels as much as it should. Critical thinking skills become more and more important in white collar jobs and in jobs that demand causal reasoning, but many blue collar jobs don't demand those kinds of skills. Still, to be a good voter and a good decision maker, those are important skills to have.

So, I think where we shortchange our students, especially in public schools, is that we don't teach critical thinking skills. We teach them to memorize and regurgitate facts that are old news and that you could get from Google anyway. The way that we teach our kids is 19th-century public education, because we haven't taken the time to transform our educational system to reflect that you don't need to memorize and regurgitate facts anymore. That's not the nature of the game. It helps, but it certainly isn't what brings in huge human capital rewards. That's just not how it is.

Yeah. I thought that was a really neat paper.

Thank you.

We've talked about a lot of stuff. Critical thinking for kids to asset pricing. Yeah. Lots of stuff.

What more do you want? We've gone from the sublime to all other places.

Right. You've published an overwhelming amount of research. And I can say that as someone who recently tried to consume as much of it as possible to try and figure out what questions to ask you. We've also talked about the opportunity cost of time. How do you decide which ideas are worth pursuing in your research?

That's a really good question. You can really answer this in stages, because as a young academic, the main currency is to create academic papers that are published in important academic journals, and you tend to be driven by the market for academic work. After you get tenure... So, there are a lot of criticisms and critiques that are fair to make against tenure, but the one really, really nice feature of having tenure is that you can start developing a different research strategy. And I really have done that. My strategy really is about which idea has the best shot to make the world a better place. And if I have five ideas that all have a good shot, which one will make the biggest impact? That's where I'm going to spend most of my time.

So, when you think about The Voltage Effect, you might ask, "Why did you write this book?" It's not to make money because you don't make any money writing books like this. There are a lot easier ways to make money. The reason why I wrote the book is because I had been doing academic papers on scaling for about five years. And I had written dozens of academic papers on the science of scaling but I was getting a feeling that only four or five people were reading each paper. And I was like, "I'm not changing the world with this. What's going on?" But I think it's important.

So, I ended up taking stock of everything we've learned in the academic literature, taking out all of the economics, taking out all the math, taking out all the jargon, and writing a book that everyone can understand on the science of scaling ideas. Because I thought that has a good shot of changing the way we think about, not only public policies, but private ideas and our own life course. And that's why I did this project. It's a time consuming project, but do you know what's even more time consuming, Benjamin? It's getting people to read it. That was the hardest part about this whole process. It wasn't writing it. It wasn't writing it at all. It was convincing people to read it.

Well, for anyone listening, I can say that it is worth reading.

Thank you.

I enjoyed reading it. I want to ask a question about Walmart and I don't know if you can answer or not.

Sure. We'll give it a go.

What are you working on? You're the Chief Economist of Walmart. That's crazy. What are you doing?

I like that. So, I am 24 days in. I started 24 days ago, and it looks like I'm going to be working a little bit on Walmart Plus and the idea of how to use loyalty and rewards programs. The role of economics and field experiments in customer retention is important; the retention question is a landmark project, I think, for any firm. I'll be working on pricing. Pricing obviously is a very important component, and I think I'm going to be working on a fair number of HR issues. Anytime you have a firm with 2.3 million workers, it basically touches every feature of our lives. When you think about America alone, 90% of people live within 10 miles of a Walmart. That's a comparative advantage that Walmart has. And then think about last mile delivery, and the ways that Walmart can move into, not only delivery, but also the internet business, and how that can combine with the in-store experience.

I think these end up being really interesting scientific issues. And I'm talking about working all the way from the very top strategic types of approaches all the way down to what happens when we promote the wrong person. Promotion is important, and so are productivity and mental health. These are important issues. We should take care in measuring them and exploring what the best ways are to set up the workplace so people have good work outcomes themselves: they're mentally healthy, they're satisfied in the work. It's partly pay, of course, but it's also partly non-pay incentives that are important, and the meaning of work, and making sure that, mental health wise, everyone is in a good way. I want to work on all of these projects, and what Walmart gives me the shot at is working on them at scale.

I was going to say, talk about scale.

Think about when I did the work with Virgin Atlantic Airways. We showed them how to save millions of gallons of fuel, and we showed them how to save a ton of carbon emissions. And it was basically about helping pilots in three different areas. One, how much fuel to put in your plane. Pilots notoriously put way too much fuel on their planes. Now, a lot of people might be saying, "John, don't talk to my pilot. I want to make sure there's enough fuel." I'm telling you, there's more than enough, and then they're topping it off. That burns more fuel because of the weight. Another thing that they're not very good at is the flight plan itself. And if you have a better flight plan, you can save a ton of fuel.

And the third is, after you land, you should always cut an engine. You only need one engine when you're taxiing. Very rarely do they do that. So, we set up behavioral nudges for them and it worked. And you can read the paper: we saved not only millions of dollars but tons of carbon emissions. All of Virgin Atlantic now does that. But think about if I can start to do that with Walmart and their drivers in their supply chain, and their delivery folks. Now we're in bigger business. So, that's what I was talking about before. It gives you an opportunity for a power play, or whatever you want to call it in your language: if you do have something, and I think my team will have something, then once you implement that change, you can blow it up. And to me, that's high voltage, because the idea is reaching many more people in many more situations.

Fascinating. Is it safe to assume that Walmart is going to let you publish your findings?

Yes.

Awesome.

Like every firm that I've worked with, there will be a portion that is in-house consulting and there will be a portion that is external, where we want to help the rest of the world. Look, I do science because I think we have good solutions through science, and I want to, not only help the firm, but help the world. So, I wouldn't have gone there had there not been an agreement that there will be a healthy dosage of the external kind. What we've talked about, the value of time: that's a great public good now that we know it. The same with Virgin Atlantic, or apologies, or tipping, or the gender pay gap. These are all important issues that we need to grapple with as a society.

Our final question, and you may have already partially answered this, but how do you define success in your life, John?

If I reach my production frontier every day, that's a success. So, what do I mean by that? A lot of times people measure their success on outputs and people raise their kids to be output driven kids. "Did you get the best test score? Did you get the 95?" To me, it's about inputs and the inputs are, "Did you try as hard as you can every day when you were working?" And that's an input because that's what you can control. If you're entering a sporting event, are you as prepared as possible? When you write that academic paper, did you put all of your inputs into it? Did you produce the paper in the best and most reasonable and understandable way that you can? If I can look at myself at night in the mirror and say that, that's a win.

A lot of people say, "You have 250 academic publications," and, "You've done this." Those are outputs. That's great and I get that we have to measure outputs but for me, if I am not at my frontier of helping to change the world every day, that's a failure. And I failed myself and I failed my colleagues and friends and society because I'm not at my frontier. So, for me, it's about inputs. And sometimes I win and sometimes I don't. So, success to me is, did I get the most out of myself in that day?

Wow. That was a fantastic answer and this has been a great time together. And certainly you've made the world a better place with this conversation. So, John, thanks so much for joining us.

Look, guys, thanks so much for having me and I really appreciate all the time you spent on my work and thanks for giving me this platform.

Thanks, John.


Participate in our Community Discussion about this Episode:

https://community.rationalreminder.ca/t/episode-204-john-a-list-improving-the-world-with-economics-discussion-thread/17349

Book From Today’s Episode:

The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale — https://amzn.to/3xgBhOE

Links From Today’s Episode:

Rational Reminder on iTunes — https://itunes.apple.com/ca/podcast/the-rational-reminder-podcast/id1426530582
Rational Reminder Website — https://rationalreminder.ca/ 

Shop Merch — https://shop.rationalreminder.ca/

Join the Community — https://community.rationalreminder.ca/

Follow us on Twitter — https://twitter.com/RationalRemind

Follow us on Instagram — @rationalreminder

Benjamin on Twitter — https://twitter.com/benjaminwfelix

Cameron on Twitter — https://twitter.com/CameronPassmore

John List on Twitter — https://twitter.com/Econ_4_Everyone

'The Nature and Extent of Discrimination in the Market' — https://www.jstor.org/stable/25098677

'How Many Economists Does It Take to Change a Light Bulb' —  https://www.researchgate.net/publication/228431296_How_Many_Economists_does_it_take_to_Change_a_Light_Bulb_A_Natural_Field_Experiment_on_Technology_Adoption

22 in 22 Reading Challenge — Join the Rational Reminder’s 22 in 22 reading challenge!

Ben’s Reading Code (22 in 22 Challenge): 7XWESMK

Cameron’s Reading Code (22 in 22 Challenge): N62IPTX