Episode 361: Alex Edmans - Finding "The Truth" in Economics, Finance, and Life

Alex Edmans is Professor of Finance at London Business School. Alex has a PhD from MIT as a Fulbright Scholar, and was previously a tenured professor at Wharton and an investment banker at Morgan Stanley. Alex has spoken at the World Economic Forum in Davos, testified in the UK Parliament, and given the TED talk “What to Trust in a Post-Truth World” and the TEDx talk “The Social Responsibility of Business” with a combined 2.4 million views.

Alex’s book, “Grow the Pie: How Great Companies Deliver Both Purpose and Profit”, was featured in the Financial Times list of Business Books of the Year for 2020, and he is a co-author of the 14th edition of “Principles of Corporate Finance” (with Brealey, Myers, and Allen), published in April 2022. He was named Professor of the Year by Poets & Quants in 2021.


Today, Ben plays lone host for the first time as we welcome Alex Edmans to the show. Alex is a Professor of Finance at London Business School as well as an accomplished speaker, author, and former investment banker. To start, Alex describes his involvement in the formation of a new law in the UK before defining ‘misinformation’ and where confirmation bias fits in. Then, we assess the severity of confirmation bias, biased search versus biased interpretation, the role of generative AI in confirmation bias, and how knowledge affects susceptibility to confirmation bias. We also explore the role of black-and-white thinking in concealing the truth, Alex’s Ladder of Misinference as seen in May Contain Lies, the 10,000-hour rule and other famous statements of misinformation, and how narratives influence the way people interpret and misinterpret facts. We end with how to guard against the plague of data mining in research, when data counts as evidence and what this implies for evidence in financial economics, and Alex shares helpful advice for determining truth in any circumstance.

Key Points From This Episode:

(0:03:27) Alex Edmans walks us through the erroneous evidence that influenced a new UK law.

(0:07:13) Misinformation; living in a post-truth world; and where confirmation bias fits in.

(0:12:06) The severity of confirmation bias, and biased search versus biased interpretation.

(0:18:19) Unpacking generative AI and the susceptibility thresholds of confirmation bias. 

(0:21:25) How black-and-white thinking makes the truth more elusive.

(0:25:40) Understanding Alex’s Ladder of Misinference as seen in May Contain Lies.

(0:28:17) Debunking the 10,000-hour rule and other enduring statements of misinformation.

(0:38:10) The second step on the Ladder of Misinference: Why facts are not data.

(0:42:42) How the idea of a narrative influences how people interpret or misinterpret facts.

(0:44:25) Why data is not evidence, and examining the plague of data mining in research.

(0:48:36) Guarding against data mining and the consequences of investing with misinformation. 

(0:53:01) When data is evidence, and what this says about evidence in financial economics.

(0:55:49) Why evidence may not be proof.

(0:59:14) Practical advice for seeking the truth for important decisions and in everyday life.


Read The Transcript:

Ben Felix: This is the Rational Reminder Podcast, a weekly reality check on sensible investing and financial decision-making, from one Canadian, hosted by me, Benjamin Felix, Chief Investment Officer at PWL Capital. All right, this is my first time doing an intro and hosting a guest by myself. Cameron's done a couple like that, but this is my first time.

I had a great conversation with Alex Edmans; this is his second time on this podcast. He is a Professor of Finance at London Business School. He's got a PhD from MIT as a Fulbright Scholar. He was previously a tenured professor at Wharton and an investment banker at Morgan Stanley. He's done a lot of great work on what he calls Growing the Pie, related to social responsibility. He's done a couple of extremely popular TED talks. His previous book, Grow the Pie, is what we talked to him about last time he was on the podcast, and his latest book, May Contain Lies: How stories, statistics and studies exploit our biases — and what we can do about it, which was published in 2024, is what we discussed in this episode. It's a great book, and I think we had a great conversation covering the topics.

The book is about how finding the truth today, with so much information available to people, is increasingly difficult. Alex talks through a concept that he's come up with called the Ladder of Misinference, where he goes through different types of information and how they can be misrepresented or misinterpreted in a way that makes the truth very difficult to determine. Alex talks through the different rungs on that ladder, and what people can do to look for the truth. That's it. That's what we talked about. There are a couple of really interesting examples of very well-known claims, like the 10,000-hour rule and the importance of sleep, that turn out to be, when you dig into the actual sources, maybe less true than a lot of people think.

In some cases, the data have been misrepresented. In other cases, the data to support the claim just weren't there to begin with. We talked through different versions of that concept, going through increasing levels of strength of evidence: from a statement to a fact, from a fact to data, from data to evidence, and from evidence to proof, and why each rung is not the rung above it. For example, a statement is not a fact, and Alex explains why. A fact is not data, and so on and so forth. That's the premise of the book, and that's the premise of the conversation.

I think it's really important. It's basically how to think like a scientist and how to combat confirmation bias, which is very problematic. People are very eager to find evidence, and there's some research that Alex talks about, about what happens in our brain when we find confirming evidence. People really want that, which is problematic. He also talks about black and white thinking. It can make things seem very compelling when you say this is true, and this is false, or things are this way and not that way, when in reality, the world is quite grey. I think that's enough for an introduction, so we'll go ahead to our conversation with Alex Edmans.

Ben Felix: Alex Edmans, welcome back to the Rational Reminder Podcast.

Alex Edmans: Thanks, Ben. It’s great to be back.

Ben Felix: Super excited to be talking to you. Your new book is excellent. I really enjoyed reading it.

Alex Edmans: Thanks. Well, after you've spent two years writing something, you always appreciate it when people like what came out.

Ben Felix: Really important stuff, as listeners are about to find out. To set the tone, can you tell us the story about the erroneous evidence that you watched influence the creation of a law in the UK?

Alex Edmans: Certainly. In 2016, there was a UK inquiry into corporate governance, which is the way that companies are run. Being a professor who studies corporate governance, I submitted some written evidence in response. They invited me to give oral evidence in front of a UK House of Commons select committee. I was grilled with some questions. But before my session, there was a session beforehand, which I sat in on just to make sure I knew what was going on. The witness before me mentioned some research which sounded quite noteworthy. She said that there was evidence which suggested, the lower the gap between the CEO's pay and the worker pay, the better the company's performance.

This was really interesting to me, because some of my own research is about the merits of treating employees fairly, so I thought this would be another string to my bow, and I might just tweet it from the rooftops and share it with other people. I wanted to find this research, so I could share it. I looked up the study, because it was in the witness's written submission, and I saw completely the opposite. It said, the lower the pay gap, the worse the performance, rather than the better. I thought, well, yes, I am pretty nervous before my own session, but clearly, I know how to read, and it was there as clear as day.

I realized what had happened: the witness had quoted a half-finished version of the study, and I was looking at the finished version. The finished version showed, the lower the pay gap, the worse the performance. But the witness quoted the half-finished version, because it gave the result they wanted to be true. The witness was from a trade union with a strong public position against pay gaps, and so they wanted to quote evidence finding that pay gaps are bad for performance.

Ben Felix: That's wild. That made its way through into the law, though? How did the rest of the process work?

Alex Edmans: What happens is that there's lots of witnesses who give evidence at the select committee. You would not necessarily make law based on one particular witness statement. Indeed, after finding this out, I reported this to the clerk of the select committee. He said, “Oh, this is not good. You should write some more evidence in. We will publish it.” And they did. Yet, they ignored my rebuttal. It's not so much a rebuttal. It was a clarification. They still went ahead by quoting the original evidence as if it was gospel.

This was really alarming to me, because, okay, you think, yes, there's misinformation. Maybe somebody on Breitbart News or an Instagram influencer will spread misinformation, but the UK House of Commons select committee, for something that might eventually become law, was quoting something they knew to be wrong. Sometimes we're misinformed because we can't look at every single piece of information out there. But I had informed them. I had sent my clarification. They had published it. Yet, this was something they didn't want to be true. Why? Because in the UK at the time, there was quite a lot of outrage against high CEO pay. That was the reason for having this inquiry to begin with. They just latched onto evidence they knew was incorrect. Eventually, it became law that every company in the UK above a certain threshold has to publish pay ratios.

Now, the point here is not about pay ratios being good or bad. Different people might have different opinions. It's about how careful we need to be about evidence; even evidence from a supposedly trusted source, like a UK government report, may well not be gospel.

Ben Felix: That's pretty scary stuff. What does it mean to say that we live in a post-truth world?

Alex Edmans: It is that there's lots of misinformation out there. You might think, well, this is obvious. I knew this before listening to this podcast. But what I want to highlight is that misinformation is a bigger problem than we think. We think that misinformation might be an outright lie. If somebody claims that Barack Obama is not a natural born US citizen, we can falsify that by showing his passport. But sometimes something could be truthful, and it could still be misleading.

Indeed, it was true that one version of that paper had found that low pay gaps are linked to better performance. They indeed quoted that half-finished version of the paper. The witness cannot be prosecuted for spreading misinformation, because what they said was strictly true: that earlier version did find that result. But they had forgotten, or completely failed to recognize, that there was a later version, a finished version after going through peer review and correcting its mistakes, which found the opposite. We can be misled even by truth. So, post-truth does not mean just outright lies. It might be things that are truthful, but are misleading, because they've been superseded.

Ben Felix: It shows up in that story that you just told. Can you talk more generally about the role that confirmation bias plays in making it difficult for people to find the truth?

Alex Edmans: You might think, well, misinformation, that just happens to uninformed people. I am a smart listener to this podcast, I'm rational, and therefore, I won't fall for this misinformation. What I'm highlighting is that many people, including me, make these mistakes. In the book, I go through examples where I myself made mistakes. It's not that we're bad people, it's that we're people. And if we're people, we have biases. One of the biases, as you mentioned, Ben, is confirmation bias. That's the idea that we have a view of the world. If we see something that supports that viewpoint, we lap it up uncritically.

Maybe the trade union had a view that high pay gaps are bad, low pay gaps are good. As soon as they saw a paper that supports that, which confirmed their viewpoint, they lapped it up. They didn't even bother to look at whether that paper was updated. Even though it was 2016, they looked at a 2010 version. The finished version that I looked at was in 2013, so it'd been out for three years, but there was no need to look for it if you already have the answer that you want.

Ben Felix: What should people do to combat confirmation bias when they're making a decision themselves?

Alex Edmans: I think it's like any addiction: the first step is to recognise that you have the addiction to begin with. The first step towards alcohol recovery is to know that you have a problem. Now, the listeners might think, well, that's a pretty strong statement to compare bias to an addiction. I don't think it's too strong, because it has the same roots, and those roots are psychological. We are actually addicted to seeing things that confirm what we would like to be true. There have been neurological studies which show that when you see something that confirms your viewpoint, it releases dopamine in your body. It feels as good as finishing a long run, or having a laugh with friends. Yet, when we see something we don't like, it triggers the amygdala. That is the part of the brain that induces a fight or flight response. It's the part that's triggered when we're attacked by a tiger.

Once we recognise that we have a natural pull towards listening to things that we like and shutting out things that we don't, we then need to actually combat this by going out and reading things that we won't agree with. This might be reading articles from a newspaper with a different political viewpoint. It may well be, in a meeting, encouraging people to disagree with me, giving time and space for people to share dissenting opinions. Any active action that we can take, having recognized that we will naturally gravitate only towards confirming viewpoints, is a way of getting more informed.

Ben Felix: You have a toolkit at the end of the book. One of the things that you wrote there that stuck with me was, to ask yourself if you want the thing that you're looking at to be true, as a filter to see whether you may be affected by confirmation bias.

Alex Edmans: I think that's important, because I'm trying to lead the reader to smarter thinking, but I need to recognize that the reader doesn't have time to check every single footnote in every single paper. Where can we be selective and apply the 80-20 principle? If there is a study, or a claim, on something that's, number one, important and, number two, something we really want to be true, then we need to be particularly careful. Why? Because, number one, we might not scrutinize it. Number two, other people haven't scrutinized it. While we might have heard of that study because it's been round an echo chamber, other people have liked it and shared it without necessarily checking it, just because they liked the punchline.

Ben Felix: Empirically, how severe are the effects of confirmation bias?

Alex Edmans: They're very severe. In addition to looking at how this affects the brain, the neurological basis, studies have looked at how it affects outcomes. Perhaps one really serious outcome is false convictions. There was a study of the most serious investigative failures, which looked at their various causes. Maybe the DNA evidence was incorrect, for example. But the clear winner by some margin was confirmation bias.

What does this mean in a criminal investigation setting? You might have a hunch about who is guilty. Maybe it's somebody with a particular appearance. Sadly, it can be somebody with a particular ethnicity, or their demographic characteristics, or they may have visible markings, like tattoos, which you use as a stereotype. Then every piece of evidence you interpret as consistent with that suspect. What I've highlighted in the book, in all settings, not just criminal ones, is that there are multiple explanations for a finding. If I find more sustainable companies perform better, a sustainability advocate like me will claim that sustainability causes better performance. But an alternative explanation is that better performance could cause sustainability.

Similarly, in a criminal investigation, it may well be that there's forensic evidence that the killer was a 5’10” male, or eyewitness evidence that the killer was a 5’10” white male, and that fits your suspect. But it can also fit other people who were in the vicinity and had motive, means, and opportunity. You don't explore those alternative suspects, because you're now only fixating on your particular culprit.

Ben Felix: Something that I've definitely noticed is that on divisive issues, people will generally agree on the facts, but they may disagree severely on the way to interpret the facts.

Alex Edmans: Absolutely, there’s very selective interpretation. If there is a study which finds that sustainability doesn’t pay off, then on LinkedIn, there's no shortage of comments as to this is one study out of many, or maybe this was just in a particular time period. Yet, when there is a study finding sustainability does pay off, then people will just shout this from the rooftops, without considering that this is just one study. We know how to be discerning and critical, but we are doing this very selectively. We criticize things that we don't like. What I'm highlighting is we need to apply the same discernment to things that we do like.

Why I think this is quite realistic for even busy people to do is that it highlights that the skills for discernment are already within us. The solution to misinformation is not for everybody to get a PhD in statistics. That's clearly unrealistic. I don't even have one myself. Instead, we already know not to overweight one single study, we know that correlation is not causation, we know that a cherry-picked anecdote may not be representative. Apply those same checks when we see something that we do like.

Ben Felix: How impactful is biased search, as opposed to biased interpretation?

Alex Edmans: What we've talked about so far, Ben, is one aspect of confirmation bias, which I call biased interpretation. Once we get information, we interpret it in a biased manner. If it's something we like, we immediately accept it. If it's something we don't like, we will either not read it, or if we read it, we read it trying to pull it apart. There's a second form of confirmation bias, which is, I think, just as important. It's biased search. Do we bother going out and looking for different viewpoints?

This even affects things like Google searches. If I want an excuse to drink loads of red wine this evening, I could search for “why red wine is good for your health.” I'm sure I'd find a ton of studies showing that red wine leads to longevity. Now, had I searched for the opposite, why red wine is bad for your health, I could have found some opposite studies. But because I framed the question in a particular way, I'm only going to get a particular answer. This applies much more widely than just Google searches. It applies to what newspapers we might subscribe to, or read. It applies to who we follow on social media, X or LinkedIn. We often will be in echo chambers. We're just looking for information that confirms our viewpoint.

Sometimes biased search is even more serious, and that even if we don't explicitly say something, we will receive selective information. What do I mean by this? Let's say, you're a boss at work. Often, people think, I will want to placate the boss. I will want to give information that he or she will agree with. Even if I say nothing to suggest that I'm biased, most people assume that you only want to see something which is confirming. This is why, again, we need to be intentional as a boss. Say, I would like you to challenge me and show me why I'm wrong. That is important to ensure that you see the other viewpoint.

This actually happened to me yesterday. Yesterday, I was teaching in class, and I always encourage students to challenge me. There was a question I asked and a student gave an answer. I gave a counterpoint, a different viewpoint to that student. She actually hadn't taken my core class, so she hadn't known that I like to have debate and different viewpoints. Then she came up to me in the break and was really worried. She said, “Oh, I sent you an email just apologizing for having a different viewpoint from you and challenging you. I really would like your reference for a PhD program. I'd like to do a PhD.”

I said, “There's absolutely nothing wrong with challenging me. Actually, if you want to do a PhD, I would like somebody with independent thought.” But because she is from a culture where a professor is somebody who's infallible, who's an authority, it was actually not natural to her to know that she is able to challenge me. That is something that perhaps I should have recognized myself. Rather than assuming that everybody knows that I'm open to challenge, maybe I should have made that more explicit at the start, particularly for people who hadn't taken my core class.

Ben Felix: Something that I've noticed and I've no data to support this, but just my intuition is that the Google search example has become even worse with generative AI, with ChatGPT.

Alex Edmans: This is actually fair. People always think, well, isn't AI the solution to this? Because AI is unbiased. It's going to look at everything out there. I have used AI myself, and I've used it just to do research. I haven't deliberately tried to skew it one way. When I asked, what is the evidence that diversity improves performance? Why? Because I'm doing a study on diversity myself. It comes up with some very flimsy studies by consultancies suggesting the evidence is unambiguous.

Then when I write back to it, I say, none of these studies were published in top scientific journals. Can you give me the scientific research? It finds completely the opposite and it admits it made a mistake. Had I not sent that follow-up challenge, had I been somebody who wants to believe that diversity pays off, I would have accepted the first thing uncritically. Even something like generative AI, if it's programmed to make you like it and want to continue interacting with it, and if that is part of the algorithm to keep you there, then it will give you stuff that you want to be true.

Ben Felix: How does someone's level of knowledge affect their susceptibility to confirmation bias?

Alex Edmans: What you would hope, or you might think, is the more knowledgeable you are, the less susceptible you are to confirmation bias, because you're able to think more rationally; you know the difference between correlation and causation. The issue here is not a lack of knowledge, but the inability, or unwillingness, to use your knowledge because of your biases. For that reason, knowledge doesn't help, because the reason we're not interpreting information correctly is not a lack of knowledge, but our biases. In fact, knowledge can make things worse. Why? If we see something we don't want to be true, more knowledgeable people can come up with some reason to dismiss it.

One very salient example was Deepwater Horizon. This was the explosion of one of BP's leased rigs in the Gulf of Mexico. What happened was, you need to do a test known as a negative pressure test to check that the well is safe before removing the rig. They did the test three times and it failed every time. They came up with an explanation called the bladder effect, which is pretty technical, I describe it more in the book, as a reason for why this test was giving the wrong result. Because in their eyes, the rig was safe, so they should remove it. They came up with an excuse to do a different test, that test passed, and then the explosion happened.

Afterwards, there was an official investigation into Deepwater Horizon, which found that this bladder effect was a complete fiction. Nobody in their right mind would have come up with that as an excuse for the failed test. If you're really smart, you might convince yourself by coming up with something convoluted.

Ben Felix: That's a crazy example. What role does black and white thinking play in making the truth elusive?

Alex Edmans: So far, we focused on one bias, which is confirmation bias. That applies when we have a pre-existing view. We latch onto stuff that confirms that view and reject stuff that contradicts it. What happens if we don't have a pre-existing view? There might be certain topics that we're neutral on. This is indeed where black and white thinking comes into play, which means that even if we don't have a pre-existing view, we have a tendency to see things in black and white terms. Something could either be always good, or always bad. We have a binary opinion.

One example might be food. With protein, most people think protein is good. It builds muscle. With fats, most people think fat makes you fat. That's why it's called fat. With carbs, most people may well be neutral. The Atkins diet played on black and white thinking by suggesting carbs are always bad, in all amounts and all types, not just simple carbs, but complex carbs as well. Why was that such a successful diet, despite extremely weak scientific evidence? It played on black and white thinking. It was really easy to follow. You don't need to track your calories and make sure that carbs are no greater than 30% of your daily calories. You just say, let me avoid carbs. And this was all carbs. It wasn't just refined sugar. It was complex carbs as well.

Notice that had Atkins proposed completely the opposite diet, which is to eat as many carbs as possible, he may well have been just as successful, because that's just as easy to follow, and it plays into black and white thinking. We see the same idea with some superfoods right now. What this means is, to pen a best seller, Atkins did not need to be right. He just needed to be extreme.

Ben Felix: The way you tell that story in the book is very well told, but it's also very scary. What are the problems that can arise from black and white thinking?

Alex Edmans: I think it means that we just ignore the nuances and shades of grey. We think that something is always good, or always bad. But even if something is good, it could still be good only up to a point. One quite salient example is that some people might die because of black and white thinking. When you run a marathon, people tell you that you need to hydrate, and so some runners drink as much as possible. There have been marathon runners who've died from water intoxication, where they drink so much that essential minerals are diluted to fatal levels. Even though water is good, it's not the case that more is always better.

The same is true with management practices. You might think, oh, let's give employees more autonomy, which seems to make sense. We want to delegate. But too much autonomy might mean that they're not coordinating their efforts. They don't have direction. Certainly, providing employees feedback is said to be a good thing. But too much feedback too often will lead to short-termism, rather than focusing on long-term goals.

Ben Felix: Basically, there's a lot of grey in the world, which makes black and white thinking pretty challenging.

Alex Edmans: People might think, “Oh, yes. I knew that there were shades of grey. I didn't need to read the book to know this.” But shades of grey come in many different forms. One form that we just talked about is the idea that more might not always be better, or less might not always be worse, and so on. Another type of shade of grey is that even if something is good in general, not everything within that umbrella may be good.

Let's say sustainability is generally good for financial performance. Not everything under the sustainable bucket may be good. Let's say employee welfare is good for performance, which makes sense, because it leads to motivated workers. But having a Catholic values screen might not be positively correlated with financial performance. If we view the world in black and white terms, then anything which is called sustainable, we think is good. Similarly, anything which is called cholesterol might be bad, when actually, there is good cholesterol and there's bad cholesterol. That's another form of black and white thinking: we see the world in labels and we treat everything under that label the same.

Ben Felix: In the book, you set everything up by going through confirmation bias, black and white thinking, and you get the Ladder of Misinference, which is so cool. Can you describe what the Ladder of Misinference is?

Alex Edmans: Why did I come up with this to begin with? Before I wrote my book, I read other books about misinformation, and they were good. I'm not going to attack them just because I view them as rival books. But they were so comprehensive that they came up with 263 ways in which we can be misinformed. It's difficult for a reader to remember all 263 and to put them into practice. I wanted to boil down all the types of misinformation into just four categories, so that it's easy to put into practice. Those four I display in a graphic called the Ladder of Misinference.

Why a ladder? Well, when we start from some statements and draw some conclusions from them, we are climbing a ladder. But why I call it the Ladder of Misinference, which is obviously a play on the Ladder of Inference, is that sometimes the steps that we make when we draw conclusions from raw data are incorrect. Let me just talk through the first rung for now. The first step is, a statement is not fact; it may not be accurate. What I mean by this is, we quote statements all the time as if they were the gospel truth. Culture eats strategy for breakfast. Peter Drucker said this, so it needs to be true, because he was a management guru. But it may not be the gospel truth. Indeed, in that case, there are at least two issues.

The first is that Peter Drucker never said that, and so this is a misattributed quote. Second, and more seriously, even if he had said it, it doesn't mean that it's true, because where is the evidence that culture always beats strategy? Yes, he is a famous management guru, but that doesn't mean that everything that comes out of his mouth is correct. There are a lot of famous people, I'm sure the listener can think of some, who say things which are not correct. What we'd actually want to look at is, is there evidence behind that statement? Has there been a careful study of the link between culture, strategy, and performance? Even if there isn't hard evidence, this doesn't mean that what he says is definitely incorrect. But rather than saying Peter Drucker's statement proves that culture eats strategy for breakfast, we would say, well, in his subjective opinion, culture is more important than strategy. We would still put some weight on that, because he's a guru, but this doesn't mean that strategy is irrelevant, which is how some people have interpreted that black and white statement.

Ben Felix: Some of the examples you have of statements that are just not true, but have persisted for a long time, are mind-blowing. There's actually one that I don't think was in your book that I was reading about recently, about how lottery winners tend to go bankrupt, or a large proportion of them. That's another one of these where there's no basis for it. Somebody said it once and it was attributed to a study that was never done, but now people believe it to be true. You've got a bunch of examples like this in the book. Can you talk about where the famous 10,000-hour rule for mastery comes from?

Alex Edmans: This was from a bestselling book by Malcolm Gladwell called Outliers. The popular version of the rule is, 10,000 hours is the key to success. You can be successful at anything that you want, as long as you invest 10,000 hours into studying or practicing it. Immediately, this plays on confirmation bias. We want this to be true. We think it's true, because we're taught from a young age that practice makes perfect. You can do anything that you set your mind to. This is empowering. This suggests that the world is your oyster. You don't need to be limited by your genetics, or your past background.

I had to admit in the book that I myself had taught this to my students at Wharton. Why? I'm a finance professor. What's this got to do with finance? Well, because I teach right at the start of the MBA, my final half hour, at the end of my final class, looks forward to the rest of the two years. I'll give some advice, and one piece was the 10,000-hour rule: in doing an MBA, you can push yourself outside your comfort zone. You can redefine yourself. Maybe historically, you always worked on quantitative things, but now you can become a leader and a master negotiator and public speaker, as long as you put your time in. When I said that, I would get a lot of nods and people would say, “Oh, that was an inspirational, empowering speech.”

Then later on, I found that the evidence for this was really, really weak. Gladwell claims that this rule applies in every context. Yet, the evidence he cites is limited to just violin playing. What works in violin playing may well not work in public speaking, or neurosurgery. But even worse, that study never measured 10,000 hours. It never measured success, which is a bit of a problem if you think that 10,000 hours is the secret to success. First, how did it not measure hours? All it did was ask people to report the hours that they had practiced since age five. Now, these violinists were currently 23 years old. They had to remember, from age five, 18 years ago, how much they practiced. You might well not remember how much you practiced last week.

Here is the problem. If you're successful right now, you might well say, “Yeah, I probably did practice a lot to get here. I had to work hard.” You don't want to say, “Oh, I was just born with it and was lucky.” Conversely, if you're not successful, you won't say, “I practiced a lot when I was young,” because then you'd have to admit that this was all a waste of time. So success leads to higher reported hours, rather than hours leading to success, because these are just self-reports.

Another issue is it never measures success, not in any objective manner. It just asked the teachers at this violin academy who has the greatest potential to play in a good orchestra in the future. Here, it may well be that the teachers knew how much the students practiced. If you know a student practices a lot, you will probably say, “Yeah, I think he or she will be successful.” If somebody's goofing off and not showing up, you'll probably say that they will be unsuccessful.

For something which claims to study the link between practice and success, it doesn't measure actual practice, but memories of practice 18 years ago, and it doesn't measure actual success, but a subjective evaluation.

Ben Felix: That's wild. That statistic is so commonly cited. There's another one that was pretty striking. There's a book that a lot of people have read, Why We Sleep. Can you talk about the example from that book?

Alex Edmans: Yeah. This book, Why We Sleep by Dr. Matthew Walker, who stresses his scientific credentials on the front cover, argues that sleep is the secret to success, and that we're in a national, or global, sleep epidemic. People are not sleeping enough. This confirms what we want to be true. We want an excuse to sleep more, because it's going to lead to greater physical health, greater mental alertness. We would like to believe that our overeager beaver colleagues who wake up at 5am will get their comeuppance. What might their comeuppance be? Well, maybe if they're not fully alert, they might just have accidents.

Indeed, there is a graph which shows, the more you sleep, the fewer the accidents. If you sleep for nine hours, you have fewer accidents than at eight; at eight, fewer than at seven; at seven, fewer than at six. Yet, if you go to the underlying paper behind this, it actually shows a quite different graph that has another bar for five hours of sleep. Those people who sleep for five hours have fewer injuries than those who sleep six. They also have fewer injuries than those who sleep seven. But because Matthew Walker doesn't like that, he cuts this out of the graph. He just shows something which is really selective.

You'd think that this is a book founded on lots of data and evidence. Again, it's by this Dr. Matthew Walker, who goes out of his way to stress his credentials. Yet, there are a lot of errors in this, and this is only one of many errors in the book. There's an entire blog post by Alexey Guzey on why the book, Why We Sleep, is riddled with scientific errors. Again, even if something is said to be based on data and science, it could just be quoting selectively, going so far as to cut out a bar of a graph because it doesn't support your viewpoint. That is just like a police officer hiding some evidence because it exonerates his suspect.

Ben Felix: Yeah, that one's crazy, because you show the chart from Why We Sleep and then you show the chart from the paper. In Why We Sleep, it shows this monotonic relationship, which is completely destroyed by the actual chart from the paper. Like you said, it's like hiding evidence. How is something that egregious not flagged by an editor, or a fact checker?

Alex Edmans: It's because people just may not want it to be true. If you are an editor, you might think, this is a guy with strong scientific credentials. Another thing is, Matthew Walker misrepresents his scientific credentials. He said he had a PhD from the prestigious Medical Research Council of the UK. There's no such council that gives any degree. It gives you grants. It doesn't give you a PhD. He's got his PhD, I think, from the University of Newcastle. That's an okay university, but it's not Oxford, or Cambridge, or London. That is also misrepresented.

People might think, okay, this is a guy who's got a lot of credentials. He's quoted a paper. If you quote something, and you don't have the time to go through and look up every piece of evidence, you might trust him. I think what is even more surprising, or more disappointing, is that even after all of these issues have been raised in this very long article critiquing his book, nobody's really changed their opinion. They say, okay, well, despite all of that, my gut feel is that sleep is still important, or my lived personal experience suggests the more I sleep, the more alert I'm going to be.

Again, this is mixing up correlation and causation. Why are you able to sleep a lot? It's probably that lots of other things in your life are going well. If I'm sleeping more, then I'm probably going to be happier, but that's because a lack of stress means I'm going to sleep more. In the periods in which I'm not sleeping, it's because I'm having to juggle loads of other things. Right now, I just had another child a month ago, and maybe that is what causes a lot of stress and a lot of busyness, not necessarily the lack of sleep.

Similarly, this might happen with political outcomes. In the Brexit referendum, where the UK voted to leave the European Union, there was an incorrect claim posted on the side of a bus that the European Union costs the UK 350 million pounds per week. That was a sum which was completely exaggerated. When that was corrected, people said, “Oh. Well, that never influenced my viewpoint. Anyway, I would have voted the same way.” You would say that, because you don't want to admit that you were misled by something.

Once you have a particular opinion, even after there is some disconfirming evidence, people do not change their view. This is the often-quoted phrase, that a lie has gone halfway around the world before the truth has laced up its shoes. Once something has gone viral, people stick to it, even if there's disconfirming evidence.

Ben Felix: Sleep's a tough one. I do like to sleep. How should people verify whether a statement is actually based on a fact?

Alex Edmans: One thing is actually just to try to search for disconfirming evidence. If you were to search for “Why We Sleep errors”, or, for any type of popular book, something like “10,000-hour rule critique”, then you might find this. While I've talked about the dangers of getting information from the Internet, because there might be less trusted sources, on the other hand, it does mean that we have access to a wide range of information. Even somebody who might not be a massive household name, like Alexey Guzey, if he's a serious researcher and has done his homework, that is something that will come up when you try to find something which is disconfirming.

Sometimes it may well be as simple as just going to the underlying article. Now, I don't expect people to go through every single word of a scientific paper on sleep, but just look at that picture. You can just see how that picture is very different from the picture that ended up being quoted.

Ben Felix: Those are examples of how a statement may not actually be based on a fact, once it's been verified. How is a fact not data?

Alex Edmans: This is the second step of the Ladder of Misinference. You might think, okay, what Alex has said is that a statement is not fact, so let's check the facts. Now, if we know that something is a fact, are we home and dry? Well, the answer is no, because even if something is a 100% true fact, it could be cherry-picked. It might be the exception that does not prove the rule. One example is Simon Sinek's book, Start with Why, which claims that having a why is the secret to success. He gives a number of facts to back this up. For example, Apple started with why, and Apple is extremely successful. There's no denying that. He also gives other facts, other anecdotes. For example, it applies in the non-profit sector: Wikipedia, extremely successful. And it applies to individuals: the Wright brothers, who were the first to achieve powered flight.

All of these things can be cherry-picked. There could well be hundreds of other companies, or non-profits, or individuals that also started with why and failed, but we never hear about them, because Simon Sinek will never tell you about them. This is the difference between facts and data. Facts are individual stories; data is something which is large scale. The issue here is not just authors like Simon Sinek. The issue could be business school professors. I'm saying this as much to myself as I am to the reader. In business schools, we like to teach with case studies. How do you choose a case study? You choose the most vivid and striking example of the particular point that you want to show. That may well be a company which was sustainable, a pioneer in that, and had massive performance, whereas there might be other sustainable companies which have failed.

I do use stories and I do use case studies in my teaching and in my books. When I do use them, I always try to make sure that they're backed up by large-scale evidence. For me, I start with the large-scale evidence and I find a story which brings that evidence to light. Then sequentially, yes, I lead with the story first and then go to the evidence to back it up afterwards. Often, what people will do is just find a story which supports their viewpoint, and find other examples that support that viewpoint as well, but they won't do a systematic study of the large-scale evidence.

Ben Felix: What are the steps involved with separating fact from data?

Alex Edmans: What you'd want is something similar to a randomized control trial in medicine. What happens there? You would take a number of patients and they're all suffering from some affliction. You give some of them a treatment, a drug, or behavioural intervention. See how many get better and how many get worse, and give others a placebo and see how many get better and how many get worse, and you compare the two.

Importantly, your data set needs to contain people who took the drug and still got worse. That's like companies that started with why and yet failed, but they will never be in any Simon Sinek book. Similarly, you need people who took the placebo and yet still got better. That's like companies that did not start with why and yet still succeeded. Again, you don't see this in any Simon Sinek book. He will only show you the companies that started with why and succeeded. Even if those examples are 100% fact, it's what he does not tell you where all the action lies: all of these other companies that started with why and failed, and all of these companies that succeeded without a why.

Why is that so striking? It suggests that misinformation is really hard to police. Some people might think, why isn't the government stepping in and prosecuting it? And to one of your earlier questions, Ben, why don't editors come out and check this? The issue is not that what you publish and disclose is incorrect. Maybe what you say is correct, but it's what you're hiding where all the action lies, and it's really difficult to prosecute Simon Sinek for not telling you about all of the counterexamples. You could claim he just never knew of those counterexamples, perhaps because of engaging in biased search.
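To make that point concrete, here is a minimal, purely illustrative Python simulation (the companies and success rates are invented, not drawn from Sinek's book or any study): it shows how a cherry-picked sample of successful "why" companies can look compelling even when the full data contains no effect at all.

```python
import random

random.seed(42)

# Hypothetical world: "having a why" makes NO difference to success.
# Every simulated company succeeds with the same 30% probability,
# whether or not it starts with why.
companies = []
for _ in range(10_000):
    has_why = random.random() < 0.5
    succeeded = random.random() < 0.30  # success is independent of "why"
    companies.append((has_why, succeeded))

# Cherry-picked view: only the successful "why" companies, which are
# the only examples a motivated storyteller would ever show you.
cherry_picked = [c for c in companies if c[0] and c[1]]
print(f"Successful 'why' companies to write anecdotes about: {len(cherry_picked)}")

# Full-sample view: include the treated failures ("why" companies that
# failed) and the placebo successes (companies that succeeded without one).
def success_rate(group):
    return sum(1 for _, s in group if s) / len(group)

with_why = [c for c in companies if c[0]]
without_why = [c for c in companies if not c[0]]
print(f"Success rate with a why:    {success_rate(with_why):.1%}")
print(f"Success rate without a why: {success_rate(without_why):.1%}")
# Both rates come out around 30%: the full data shows no effect,
# even though the cherry-picked list of anecdotes looked compelling.
```

Thousands of vivid anecdotes exist in this toy world despite a true effect of zero, which is exactly why facts alone, without the full sample, are not data.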

Ben Felix: The Simon Sinek example touches on this. But what role do narratives play in how people interpret, or misinterpret, facts?

Alex Edmans: It's substantial. This is the idea of the narrative fallacy: even if there is no relationship between two factors, if you can weave in a compelling story, that story is memorable, and then we perceive a link when there is no link. Again, this is deeply ingrained in us, because even before we had writing, there was the oral tradition; people passed on information through stories. The narrative that Simon Sinek weaves is, well, if you have a why, people will buy into this. It inspires people. People buy Apple products, not because of what they do, but because of why they do it. They buy it because Apple has this vision and this purpose.

Now, if you were just to look at that statement with a clear head, it sounds crazy. That's a load of rubbish. Why do you buy Apple products? Because of their functionality. Because they work, because they have the apps, because of the reviews, because other people have iPhones. They work. I don't really know why Apple does what it does, nor do I care. I don't think this is unique to me. I think most people buy products because of their reviews, because other people have them, their connectivity, their functionality. But the idea that it's a why, a vision, sounds more compelling. It might say, as a leader, I don't need to get involved in the boring stuff, making sure that everything works. I can just create a vision. That's something that many people find much more compelling.

Ben Felix: Yeah, makes sense. Stories are very powerful. As we move up the Ladder of Misinference, we start with a statement and we go up to a fact and then we go up to data. Once we have data, why is data not evidence?

Alex Edmans: If the solution is not to look at a select sample, then why don't we get data, which is the full sample, the full picture, the companies that started with why and the companies that did not? Well, the issue here is that you may have a strong correlation, but it's not causation. There could be multiple explanations for the finding. Let's say, in general, what we find is that companies with a purpose statement, a why, do tend to be successful. We've found this link looking at hundreds, or thousands, of companies. But is it that having a why leads to success? Or is it that once you're successful, you have the time and headspace to come up with a purpose statement, because you're not firefighting? Or does a third factor cause both: a great CEO causes good performance, and a great CEO comes up with a purpose statement, but there's no direct link between the two?

Why do I call this "data is not evidence"? The data may not be conclusive, because what is the difference between data and evidence? We often use these terms interchangeably, but what is evidence? We hear about evidence in the context of a criminal trial. Evidence is something that points to one particular suspect. If the evidence suggests that Tom, or Dick, or Harry could have killed Sarah, that is not evidence. It's consistent with multiple explanations. What we would want, to move from data to evidence, is not just a correlation, which has multiple interpretations, but something that establishes causation, a way of ruling out the alternatives. Is it success that leads to a why? Or is it good management that leads to both? Again, if we have confirmation bias with a side of a nice narrative, then we latch onto the causal explanation, which is that the why leads to success.
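As a sketch of that third-factor problem, here is a short Python simulation with invented numbers (the variables and effect sizes are assumptions made purely for illustration): CEO quality drives both the adoption of a purpose statement and performance, so a clear correlation appears even though, by construction, the "why" has no causal effect.

```python
import random

random.seed(0)

# Toy model: a third factor (CEO quality) drives BOTH whether a company
# adopts a purpose statement AND how well it performs. By construction,
# there is no direct causal link from "has a why" to performance.
rows = []
for _ in range(10_000):
    ceo_quality = random.gauss(0, 1)
    has_why = ceo_quality + random.gauss(0, 1) > 0   # better CEOs write purpose statements
    performance = ceo_quality + random.gauss(0, 1)   # better CEOs also deliver performance
    rows.append((has_why, performance))

avg_with = sum(p for w, p in rows if w) / sum(1 for w, _ in rows if w)
avg_without = sum(p for w, p in rows if not w) / sum(1 for w, _ in rows if not w)
print(f"Average performance with a why:    {avg_with:+.2f}")
print(f"Average performance without a why: {avg_without:+.2f}")
# A clear performance gap appears in the data, yet the "why" has zero
# causal effect: the correlation is driven entirely by the confounder.
```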

Ben Felix: How big of a problem is data mining in research, generally speaking?

Alex Edmans: It's a huge problem. Data mining is a different issue from correlation versus causation. This is the problem that there might not even be a correlation to begin with. Why? Because you've fished for it, you've mined for it. What do I mean by this? Again, let me give an example. Let's say I want to link sustainability and financial performance. Well, there are so many ways I could measure sustainability. I could look at carbon footprint, I could look at water usage, I could look at biodiversity impact, I could look at how you treat your employees, I could look at gender diversity, racial diversity; the list is endless.

In terms of financial performance, I could look at sales growth, I could look at profitability, I could look at total shareholder return. I could do this over five years, or 10 years, or one year. Even if I came up with a robust correlation that sustainable companies perform better, you need to ask yourself, well, could there have been many, many other ways of measuring sustainability and performance that the researcher tried, where they only report the one that gave them the finding that they want? Then how does a reader discern this? Because you have no idea about all the other tests that they ran. It's to ask yourself, do they have the most natural measures of performance and sustainability?

For example, McKinsey has a ton of reports which link sustainability, or other things like diversity, to EBITDA, which is earnings before interest, tax, depreciation and amortization. That is one measure of performance. But for the most common measure of performance, why do we not just look at how the stock price does? Why is that much better? Because profitability only measures the short term, yet the stock price is forward-looking. Some of the best-performing companies nowadays are tech firms, which don't have high current profits, but because they're expected to have higher future profits, the stock prices are already encapsulating that.
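To illustrate the data-mining problem Alex describes, here is a hedged Python sketch: all the "measures" below are pure random noise, and the 2/sqrt(n) significance cutoff is only a rough rule of thumb, but it shows how running enough sustainability-versus-performance tests produces "significant" correlations by chance alone.

```python
import random

random.seed(1)

# A world with NO true link between sustainability and performance:
# every measure below is pure random noise. We then "data mine" across
# many measure combinations, as a motivated researcher might.
n_firms = 200
n_sust_measures = 24   # carbon footprint, water usage, diversity, ...
n_perf_measures = 6    # EBITDA, sales growth, shareholder return, ...

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Rough two-sided 5% significance cutoff for a correlation: |r| > 2/sqrt(n).
threshold = 2 / n_firms ** 0.5

spurious_findings = []
for s in range(n_sust_measures):
    sust = [random.gauss(0, 1) for _ in range(n_firms)]
    for p in range(n_perf_measures):
        perf = [random.gauss(0, 1) for _ in range(n_firms)]
        r = corr(sust, perf)
        if abs(r) > threshold:
            spurious_findings.append((s, p, round(r, 3)))

print(f"Tests run: {n_sust_measures * n_perf_measures}")
print(f"'Significant' correlations found by chance: {len(spurious_findings)}")
# With 144 tests at roughly the 5% level, around seven spurious
# "findings" appear. Report only one and the research looks compelling.
```

The robustness checks Alex mentions later are a direct guard against this multiplicity: a real effect should survive across the plausible alternative measures, while a mined one typically will not.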

Ben Felix: Can you share the example of an investor that had come to you wanting to build a fund around some performance measure, and you did the research and it wasn't there, but they went and did the fund anyway?

Alex Edmans: Yes. This is one of the world's leading investors, who I give a pseudonym in the book. She had heard of my research linking employee satisfaction to long-term firm performance, and she wanted to launch a fund based on diversity. I was really excited. I thought, okay, I could work with this leading investor. I myself am an ethnic minority, so I would like to believe that diversity pays off. There were 24 measures of gender diversity, which was her specific lens, which I tried to correlate with performance. I did, and I found in 22 of those 24 cases, there was sadly a negative link.

Now, it was clear what I should do if I wanted to work with this investor: I could just disclose the two positive ones. But that would not be honest, so I disclosed all 24, and she wasn't interested in the work. She politely thanked me, but then moved on. I thought, well, this is going to be the end of it. Then just six months later, I saw her launch the fund with a big fanfare that this is backed up by scientific research showing a strong link. I was surprised, because I had done the research 24 times and not found it. What she had found was research by another company, linking diversity not to shareholder returns, which is what any investor should care about, but to measures like return on sales, or return on capital employed. It had a very specific measure of diversity: it compared boards with three or more women to boards with zero women. Why is it three or more versus zero? Why is it not three or more versus two or fewer? Two or more versus one or fewer? Or why isn't it something percentage-wise, at least one-third women and two-thirds men, or something? It may well be that this three versus zero was the only thing which worked, and so that's why they reported it.

This matters, because this fund, and many other funds with the same theme, have all underperformed. If you're basing this on misleading data, it doesn't matter how much of an advocate you are for the cause. It may well be seen to be a worthy cause, but do not back up this cause with incorrect data. Just as a police officer, you might have a high conviction that this person is guilty, and it may well be that the court has unfairly ruled that certain evidence is inadmissible. Despite how strongly you feel that it's unjust, it is not an excuse to then make up data, or make up evidence, to convict your suspect.

Ben Felix: Seems like a really tricky thing in financial economics. As a practitioner, people used to say evidence-based investing to imply things like index funds. I've found that phrase to become increasingly meaningless, because you can find evidence, or a form of evidence, at least, to back up pretty much any investment strategy that you want to make a case for.

Alex Edmans: We like to throw around these phrases, like research finds that, study proves that, but you can find a study to show whatever you want to support. What matters is the quality of the research and also, not just to be skewed by one study, but to look at scientific consensus, which is the findings of multiple studies.

Ben Felix: What can people do to guard against data mining?

Alex Edmans: As a consumer of research, it's to ask, well, have you looked at the most plausible ways of measuring an input and an output? Let's say the output is performance. The most natural measure for any fund is total shareholder return. That's the measure of performance they always advertise. If you look at return on sales, that is quite odd.

Also, the input: if you're looking at three or more women versus zero, that's strange, because what about boards with one or two women? And why are we looking at the absolute number of women on the board? Shouldn't it be a percentage? Because if a board is larger, you might expect more women than if the board was small. Just ask yourself, are there more plausible ways of measuring this? If there were, and they're not reported, then that might raise a red flag. What will the best studies do? They will do a robustness check, which is to show, well, if we were to measure performance in this other, different way, we still get the same results. If we measured diversity in this different way, if we looked at percentages, not absolute numbers, we still find this.

Ben Felix: Moving up the ladder, when is data evidence?

Alex Edmans: That is when you have something like a randomized controlled trial. In medicine, I randomly assign some people to get the drug, and some people to get a placebo. Now, in real life outside of randomized trials, you might think, well, that's pretty rare, but sometimes you do get the equivalent, which is known as a natural experiment. What do I mean by a natural experiment? Well, a lab experiment is what I have just described, a placebo and a treatment; a natural experiment is when this separation arises naturally, perhaps because of a law change.

There was a time when New Jersey raised its minimum wage, but Pennsylvania, which is just across the border, did not. For the Pennsylvania restaurants, it was the placebo; nothing happened. The New Jersey restaurants were the ones with the treatment, with the drug. You can then compare the effect of the minimum wage on employment. The study I cite in the book finds, surprisingly, that the minimum wage, if anything, increased employment, contrary to the common perception that minimum wages are always damaging to employment. Why is this comparison credible? You compare two very similar sets of restaurants facing similar economic conditions, because they're very close to each other; it's just that one set is in New Jersey and the other is in Pennsylvania, so one was affected and the other was not.
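The logic of this comparison is what economists call difference-in-differences: the treated group's change minus the control group's change. A minimal sketch, with illustrative employment numbers that should not be read as the published study's exact figures:

```python
# Difference-in-differences with illustrative numbers (workers per restaurant)
nj_before, nj_after = 20.4, 21.0   # treated: New Jersey raised its minimum wage
pa_before, pa_after = 23.3, 21.2   # control: Pennsylvania did not

# The control group's change proxies for what would have happened to the
# treated group without the law; subtracting it out removes common shocks
did = (nj_after - nj_before) - (pa_after - pa_before)
print(f"Difference-in-differences estimate: {did:+.1f} workers per restaurant")
```

Here the estimate is positive: employment in the treated state rose relative to the control, which is the direction of the surprise Alex describes.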

Ben Felix: What does this say about evidence in financial economics, where RCTs, I don't think, can really be done, and natural experiments are pretty rare as well?

Alex Edmans: It's actually quite difficult to get evidence which is conclusively causal. Does this mean that we can't believe anything and anything goes? No, but it is to say that we don't want to lean on any one thing too heavily. The gold standard should not be 100% causal evidence and nothing less; otherwise, we would just ignore many other things which might be 80% true, or 70% reliable. Instead, it's not to see things as black and white, where a finding is either completely nailed down or completely invalid. Even if what you find is a correlation, that could still be useful. Some of my own papers only find correlations, because I don't have a natural experiment.

One example is my work on cognitive diversity. Now, we do have natural experiments on demographic diversity, such as laws forcing boards to have, say, 40% women. There is no natural experiment forcing there to be a diversity of viewpoints; that's much harder to legislate. All I have is correlations, but those correlations can at least be suggestive. They might get you 70% of the way there, even if they're not completely conclusive.

Ben Felix: When we have evidence, why is evidence not proof?

Alex Edmans: Let's say we've nailed causality; we've got a perfect natural experiment. That might only be true in that particular setting. It may not be universal. What is the difference between evidence and proof? A proof is universal. When Archimedes proved the area of a circle, that was true not only in ancient Greece in the 3rd century BC when he did his proof; it's true of circles in Canada in 2025. Often, when we look at evidence, it might be gathered only in one particular setting.

Let me give you an example. There was a very famous book by Angela Duckworth, and a really famous TED Talk, about the power of grit, the power of passion and perseverance. What she did to show that grit mattered was take men and women who got into West Point, which is the United States Military Academy. If you get into West Point, you're not yet in the army; you have to survive a six-week course called Beast Barracks, which is really tough. It's mentally and physically demanding. She wanted to see what predicts survival of Beast Barracks. You might think it's physical fitness, but she found the one thing even more important than fitness was grit, which she defines as passion and perseverance. That's striking. Grit must be so powerful if it's even more important than fitness in a physical challenge. But the problem here is something known as restriction of range. What is that?

She had a sample of already very fit people. In order to get into West Point, you already have to be extremely fit. Maybe they had all already passed the threshold level of fitness. That's why fitness didn't matter, because everybody was already fit enough to survive Beast Barracks, and so something else, like grit, mattered. We cannot over-extrapolate from that setting of really fit people and say, well, for the general person on the street, work on your grit, not your fitness. No, for the ordinary person dreaming of joining the military, maybe he or she should work on their fitness. That's the more important thing.
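Restriction of range is easy to reproduce in a simulation. In the sketch below, the coefficients are assumptions chosen for illustration: fitness genuinely matters twice as much as grit in the full population, yet once the sample is restricted to the very fit, grit shows the stronger correlation with the outcome.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Population model (assumed): fitness matters twice as much as grit
fitness = rng.normal(0, 1, n)
grit = rng.normal(0, 1, n)
outcome = 0.8 * fitness + 0.4 * grit + rng.normal(0, 1, n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("Full population:")
print(f"  fitness vs outcome: {corr(fitness, outcome):.2f}")   # ~0.60
print(f"  grit    vs outcome: {corr(grit, outcome):.2f}")      # ~0.30

# Restriction of range: only the very fit are admitted in the first place
admitted = fitness > 1.5
print("Admitted (already very fit) only:")
print(f"  fitness vs outcome: {corr(fitness[admitted], outcome[admitted]):.2f}")  # ~0.28
print(f"  grit    vs outcome: {corr(grit[admitted], outcome[admitted]):.2f}")     # ~0.36
```

The correlation for fitness collapses in the restricted sample because its variance collapses; nothing about its true importance has changed.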

Ben Felix: Yeah, that's interesting. The grit research is not wrong, but it's right only within the specific setting in which it was conducted.

Alex Edmans: Yes. That's true for many other studies. A lot of studies are done on WEIRD people. You might think, well, that's a strange thing to say, to call somebody weird, but WEIRD is an acronym: Western, educated, industrialized, rich, and democratic. That's where a lot of studies take place. Some of them we can generalize. If it's something like smoking causing cancer in Canadians, that might well apply the same to Latin Americans, or to Chinese people, because it's something to do with the body. But if it's something cultural, say the link between equality of pay and perceived fairness, there may well be different social norms. You're never going to get a study on the exact setting that you're interested in.

Again, perfect should not be the enemy of the good. We want to ask ourselves, based on logic, is this a setting which is sufficiently similar? Are there logical reasons to believe that it won't apply here? With smoking and cancer, I don't think we need to be so specific about context, but for something more cultural or behavioural, the context may matter more.

Ben Felix: We've established that the truth is hard to establish. What steps do you think people should take to seek the truth when they're making an important decision? You've mentioned busy people a few times throughout the conversation. I guess, within that context, what should people be doing?

Alex Edmans: It's to be discerning about any evidence. What I have in the back of the book is a checklist of simple questions to ask yourself, following the Ladder of Misinference. If there is a statement, is there evidence behind it, or is it just a misquote? If it's a fact, is it an isolated anecdote, or is it large-scale data? If it's large-scale data, are there alternative explanations, such that it might not be causation? Even if there is causation, what was the setting in which it was gathered? Is it different from the setting we care about?

More generally than just evaluating studies, because studies are not the only source of information and we also get information from other people's opinions, it's to go out there and actively seek different viewpoints, and to make it clear that you are open to different viewpoints, particularly if you are a leader, or in an authoritative position, like a professor, where people might incorrectly assume that they should not challenge you.

When there is challenge, it may well be that you want to make it clear how much you appreciate it. Let's say you're the chair of a board, you're going to go ahead with a strategy, and somebody raises an objection. You hear that objection, and you discuss it open-mindedly. But perhaps you think that even though the objection is well founded, the benefits outweigh the costs, and you still go ahead. Maybe you should go to that person afterwards, or maybe you should do this publicly in front of the rest of the board, and say, I really appreciate you sharing your different viewpoint. Even though we still went ahead with the decision, we will bear your concerns in mind when we execute the strategy. In the future, if you have concerns, please continue to voice them.

If you had not done that, what might the dissenter think? He or she might think, “Well, I sacrificed my political capital to raise this concern. I may have annoyed the people who proposed the strategy. I delayed the meeting by half an hour, and I didn't change the decision anyway. In the future, I'm just going to self-censor.” If you make it clear that you do encourage different viewpoints, then you're going to keep getting differences of opinion, and you're going to get closer to the truth.

Ben Felix: What about at a societal level? What should we be teaching people and thinking about to embed these ideas that you're talking about in our everyday lives?

Alex Edmans: It's to teach the value of critical thinking. You might think, well, at a societal level, is this feasible? Can we teach this in schools? I think we can. Yes, statistics is something we might only learn formally when we're 16, but what I'm highlighting about statistics is not just numbers; there isn't a single equation involved. What statistics is about is looking at alternative explanations and being discerning. Just like we teach our kids, don't accept sweets from strangers: there's an alternative explanation for why this stranger is giving you sweets, and it's a nefarious one. We also teach kids, well, not directly, but when they read murder mysteries, they quickly learn that it's not the most obvious suspect who committed the murder. They look for alternative suspects.

Given that kids are able to do that, I don't think it's unrealistic to teach them in class the power of thinking about alternative explanations. This is valuable not just for the interpretation of data and statistics, but also for civil discourse. What we see right now is people sometimes cancelling those with different viewpoints. We see echo chambers. We see a lot of mental health issues when a student is cancelled by the rest of his or her peers. If we highlight that even on issues we think are pretty black and white, there may well be two sides, then we can encourage people to have greater respect for opinions that differ from their own.

Ben Felix: Man, accepting candy from strangers as an analogy to reading data that confirms your prior beliefs is pretty good.

Alex Edmans: Yeah. It's actually not too far-fetched, because this is indeed why these things are so appealing. Just like candy is appealing, we want to accept it. We don't want to ask questions. We simply would like to believe the study and to consume it, just like we want to consume the candy.

Ben Felix: Yeah, awesome. All right, Alex, that's all my questions. We really appreciate you coming back on the podcast. This has been great.

Alex Edmans: Thanks so much for having me back. Really enjoyed the conversation.

Announcer: Portfolio management and brokerage services in Canada are offered exclusively by PWL Capital Inc., which is regulated by the Canadian Investment Regulatory Organization and is a member of the Canadian Investor Protection Fund. Investment advisory services in the United States of America are offered exclusively by OneDigital Investment Advisors LLC. OneDigital and PWL Capital are affiliated entities. However, each company has financial responsibility for only its own products and services.

Nothing herein constitutes an offer, or solicitation to buy or sell any security. This communication is distributed for informational purposes only. The information contained herein has been derived from sources believed to be accurate, but no guarantee as to its accuracy or completeness can be made.

Furthermore, nothing herein should be construed as investment, tax, or legal advice, and/or used to make any investment decisions. Different types of investments and investment strategies have varying degrees of risk and are not suitable for all investors. You should consult with a professional advisor to see how the information contained herein may apply to your individual circumstances. All market indices discussed are unmanaged, do not incur management fees, and cannot be invested in directly. All investing involves risk of loss and nothing herein should be construed as a guarantee of any specific outcome, or profit.

Past performance is not indicative of, or a guarantee of future results. All statements and opinions presented herein are those of the individual hosts and/or guests, are current only as of this communication’s original publication date and are subject to change without notice. Neither OneDigital nor PWL Capital has any obligation to provide revised statements and/or opinions in the event of changed circumstances.

Is there an error in the transcript? Let us know! Email us at info@rationalreminder.ca.

Be sure to add the episode number for reference.


Participate in our Community Discussion about this Episode:

https://community.rationalreminder.ca/t/episode-361-alex-edmans-finding-the-truth-in-economics-finance-and-life/37875

Books From Today’s Episode: 

May Contain Lies — https://maycontainlies.com/

Grow the Pie — https://mybook.to/Grow-the-Pie

Outliers — https://www.amazon.com/Outliers-Story-Success-Malcolm-Gladwell/dp/0316017930

Why We Sleep — https://www.goodreads.com/book/show/34466963-why-we-sleep 

Start with Why — https://www.amazon.com/Start-Why-Leaders-Inspire-Everyone/dp/1591846447

Grit — https://www.amazon.com/Grit-Passion-Perseverance-Angela-Duckworth/dp/1501111108

Links From Today’s Episode: 

Rational Reminder on iTunes — https://itunes.apple.com/ca/podcast/the-rational-reminder-podcast/id1426530582

Rational Reminder on Instagram — https://www.instagram.com/rationalreminder/

Rational Reminder on X — https://x.com/RationalRemind

Rational Reminder on TikTok — www.tiktok.com/@rationalreminder

Rational Reminder on YouTube — https://www.youtube.com/channel/

Benjamin Felix — https://pwlcapital.com/our-team/

Benjamin on X — https://x.com/benjaminwfelix

Benjamin on LinkedIn — https://www.linkedin.com/in/benjaminwfelix/

Alex Edmans — https://alexedmans.com/

Alex Edmans on LinkedIn — https://www.linkedin.com/in/aedmans

Alex Edmans on X — https://x.com/aedmans

London Business School — https://www.london.edu/

Fulbright Fellows | MIT — https://ir.mit.edu/projects/fulbright-fellows/

Atkins — https://www.atkins.com/

‘Matthew Walker's “Why We Sleep” Is Riddled with Scientific and Factual Errors’ — https://guzey.com/books/why-we-sleep/

‘Grit: The Power of Passion and Perseverance | Angela Lee Duckworth | TED’ — https://www.youtube.com/watch?v=H14bBuluwB8