Should we trust papers published in top social science journals? (with Daniel Lakens)
Description
How much should we trust social science papers in top journals? How do we know a paper is trustworthy? Do large datasets mitigate p-hacking? Why doesn't psychology as a field seem to be working towards a grand unified theory? Why aren't more psychological theories written in math? Or are other scientific fields mathematicized to a fault? How do we make psychology cumulative? How can we create environments, especially in academia, that incentivize constructive criticism? Why isn't peer review pulling its weight in terms of catching errors and constructively criticizing papers? What kinds of problems simply can't be caught by peer review? Why is peer review saved for the very end of the publication process? What is "importance hacking"? On what bits of psychological knowledge is there consensus among researchers? When and why do adversarial collaborations fail? Is admission of error a skill that can be taught and learned? How can students be taught that p-hacking is problematic without causing them to over-correct into a failure to explore their problem space thoroughly and efficiently?
Daniel Lakens is an experimental psychologist in the Human-Technology Interaction group at Eindhoven University of Technology. In addition to his empirical work in cognitive and social psychology, he works actively on improving research methods and statistical inference, and has published on the importance of replication research, on sequential analyses and equivalence testing, and on frequentist statistics. Follow him on Twitter / X at @Lakens.
Further reading:
- Nullius in Verba (Daniel's podcast)
Staff
- Spencer Greenberg — Host / Director
- Josh Castle — Producer
- Ryan Kessler — Audio Engineer
- Uri Bram — Factotum
- WeAmplify — Transcriptionists