Two Psychologists Four Beers
Episode 104: Quantifying the Narrative of Replicable Science

Update: 2023-03-29

Description

Yoel and Alexa discuss a recent paper that takes a machine learning approach to estimating the replicability of psychology as a discipline. The researchers' investigation begins with a training process, in which an artificial intelligence model identifies ways that textual descriptions differ for studies that pass versus fail manual replication tests. This model is then applied to a set of 14,126 papers published in six well-known psychology journals over the past 20 years, picking up on the textual markers that it now recognizes as signals of replicable findings. In a mysterious twist, these markers remain hidden in the black box of the algorithm. However, the researchers hand-examine a few markers of their own, testing whether things like subfield, author expertise, and media interest are associated with the replicability of findings. And, as if machine learning models weren't juicy enough, Yoel trolls Alexa with an intro topic hand-selected to infuriate her.

Yoel Inbar, Michael Inzlicht, and Alexa Tullett