SCIENCE, SKEPTICISM, AND TRUTH
Description
Hello everyone, and welcome to Ideas Untrapped podcast. My guest for this episode is decision scientist Oliver Beige, who is returning to the podcast for the third time. Oliver is not just a multidisciplinary expert, he is one of my favourite people in the world. In this episode, we talk about scientific expertise, the norms of academia, peer review, and how it all relates to academic claims about finding the truth. Oliver emphasized the importance of understanding the imperfections in academia, and how moral panics can be used to silence skeptics. I began the conversation with a confession about my arrogant faith in science - and closed with my gripe about "lockdown triumphalism". I thoroughly enjoyed this conversation, and I am grateful to Oliver for doing it with me. I hope you all find it useful as well. Thank you for always listening. The full transcript is available below.
Transcript
Tobi;
I mean, it's good to talk to you again, Oliver.
Oliver;
Tobi, again.
Tobi;
This conversation is going to be a little bit different from our previous… well, not so much different, but I guess this time around I have a few things I want to get off my chest as well. And where I would start is with a brief story. So about - I dunno, I've forgotten precisely when the book came out - that was Thinking, Fast and Slow by the Nobel laureate Daniel Kahneman. So I had this brief exchange with my partner. She was quite sceptical in her reading of some of the studies that were cited in that book.
And I recall that the attitude was, “I mean, how can a lot of this be possibly true?” And I recall, not like I ever tell her this anyway…but I recall the sort of assured arrogance with which I dismissed some of her arguments and concerns at the time by saying that, oh yeah, these are peer-reviewed academic studies and they are more likely to be right than you are. So before you question them, you need to come up with something more than "this doesn't feel right" or "this doesn't sound right". And, what do you know? A few years, like two or three years after that particular experience, almost that entire subfield imploded in what is now the reproducibility or replication crisis, where a lot of these studies didn't replicate, a lot of them were done with very shoddy analysis and methodologies, and Daniel Kahneman himself had to come out to retract parts of the book based on that particular crisis.
So I'm sort of using this to set the background of how I have approached knowledge over my adult life - as someone who has put a lot of faith, naively I would say, in science, in academia and its norms, as something that is optimized for finding the truth. So it was to my surprise, and even sometimes shock - over different stages of my life, and recently in my interrogation of the field of development economics and the people who work in global development - [to discover] the amount of politics, partisanship, bias, and even sometimes sheer status games that academics play, and how it affects the production of knowledge. It's something that gave me a kind of deep personal crisis. So that's the background to which I'm approaching this conversation with you.
So where I'll start is from the perspective of simply truth finding, and I know that a lot of people, not just me, think of academia in this way. These are people who are paid to think and research and tell us the truth about the world and about how things work, right? And they are properly incentivized to do that by the norms and institutional arrangements that shaped their workflows and, you know, so many other things we have known academia and educational institutions to be. What is wrong with that view - academia as simply a discipline dedicated to truth finding?
Oliver;
There's many things. The starting point is that it was not only Daniel Kahneman; behavioral economics has had multiple crises, including with falsified work. Not only with wrong predictions - wrong predictions are bad but acceptable. This is part of doing science, part of knowledge production. But falsification is, of course, a bigger problem, and they had quite a few scandals in that. The way I approach it always is sort of like a metaphor from baseball. Basically, there's something called the Mendoza Line in baseball, which is a hitter that has a .200 batting average. This is like the lowest end of baseball. If you go below .200, then you're usually dropped from the team. And on the upper end you have really good hitters that hit an average of like .300 or something. If you have a constant .300 average, you usually get, like, million-dollar contracts, right? We can translate this to science in a lot of ways. Of course, there is a lot of effort involved in going from a .200 average to a .300 average - from a 20% chance of being right to a 30% chance of being right. But still, if you're at a .300 level, you're still wrong 70% of the time.
And so the conversations I observe - there are people that are not specialists in a field [and] we're trying to figure out who is right in a certain conversation. Talking about conversations in a scientific field, we basically try to use simple pointers, right? One of the pointers is of course a paper that has gone through peer review. You see these conversations of like, okay, this paper has not been peer reviewed, this paper has been peer reviewed. But peer review does not create truth. It sort of reduces the likelihood of being wrong somewhat, but it doesn't give us any indicator that this is true. The underlying mechanism of peer review usually cannot find outright fraud. Cannot detect outright fraud. This has happened quite a few times. And also, peer review is usually about how close the submitted paper is to what the reviewers want to read. There is a quality aspect to it, but ultimately it changes the direction of the paper much more than it changes quality. So academia overall is a very imperfect truth-finding mechanism. The goal has to be [that] the money we spend on academic research has to allow us to get a better grasp of so far undiscovered things - undiscovered relationships, correlations, causal mechanisms - and ultimately, it has to give us a better grasp of the future, and it has to give us a better grasp of what we should do in order to create better futures. And this all basically comes down to, like, predicting the future, or things that were in the past but are yet to be discovered.
Evolution tends to be a science that is focused on the past, looking at things in the past. But there's still things we have to discover, connections we still have to discover. And this is what academia is about. And the money, the social investment we put into academia has to create a social return in the way that we are better off doing the things we need to do to create a better future for everyone. And its [academia] track record in that regard has been quite mixed. That's true.
Tobi;
So let's talk a little bit about incentives here. Someone who has also written quite a lot about some of these issues - I think he's more focused on methods - is Andrew Gelman, the statistician. I read his blog quite a lot, and there's something he consistently alludes to, and I just want to check with you how much you think it influenced a lot of the things that we see in academia that are not so good, which is the popularity contest - the number of Twitter followers you have; whether you are blue-checked or not; bestselling books; TED Talks that then lead to people making simplistic claims. There's the issue of scientific fraud, right, some of which you alluded to also in behavioral economics, behavioral science generally. There was recently the case of Dan Ariely, who also wrote a very popular book, Predictably Irrational, but who was recently found to have used falsified data. And I recall that you also persistently criticized a lot of people during the pandemic, even till date - a lot of people who made outright wrong predictions with terrible real-life consequences, because policymakers and politicians were acting under the influence of the “expert” advice of some of these people, who will never come out to admit they are wrong and are less likely to even correct their mistakes. So how are the incentives misaligned?
Oliver;
Okay, many questions at once. How does academia work? I always like to say that academic truth finding, or whatever you want to call it, is not too far away from how gossip networks work. The underlying thing is, of course, that any kind of communication network is basically sending signals - in this case, snippets of information, claims, hypotheses - and the receiver has to make a decision on how credible this information is. You have the two extreme versions, which is basically saying, yeah, I just read this paper and I think this paper makes a good claim and is methodologically sound, or, I just read this paper and this paper is crap and everything about it is wrong. So you basically start with a factual claim and an evaluation. This happens on science Twitter in the same way a gossip network communicates typically good or bad news about the community. A gossip network also communicates hazards within the community, sending warnings, which is what academics have been doing quite a bit over the last two and a half years. And they also have this tendency to exaggerate or downplay claims, and [they] also have this tendency to create opposing camps, because very few middling signals are being retransmitted.
I've been watching the funeral of the Queen. I have no strong opinion about British royalty in either direction, so if I post something on Twitter about it, nobody will retweet it. And, of course, the two extreme ends will be retweeted.