80,000 Hours Podcast

Author: Rob, Luisa, Keiran, and the 80,000 Hours team


Description

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.

Subscribe by searching for '80000 Hours' wherever you get podcasts.

Produced by Keiran Harris. Hosted by Rob Wiblin and Luisa Rodriguez.
250 Episodes
"I do think that there is a really significant sentiment among parts of the opposition that it’s not really just that this bill itself is that bad or extreme — when you really drill into it, it feels like one of those things where you read it and it’s like, 'This is the thing that everyone is screaming about?' I think it’s a pretty modest bill in a lot of ways, but I think part of what they are thinking is that this is the first step to shutting down AI development. Or that if California does this, then lots of other states are going to do it, and we need to really slam the door shut on model-level regulation or else they’re just going to keep going.

"I think that is like a lot of what the sentiment here is: it’s less about, in some ways, the details of this specific bill, and more about the sense that they want this to stop here, and they’re worried that if they give an inch that there will continue to be other things in the future. And I don’t think that is going to be tolerable to the public in the long run. I think it’s a bad choice, but I think that is the calculus that they are making." —Nathan Calvin

In today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature.

Links to learn more, highlights, and full transcript.

They cover:
- What’s actually in SB 1047, and which AI models it would apply to.
- The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water.
- What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it.
- Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated.
- How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies.
- Why California is taking state-level action rather than waiting for federal regulation.
- How state-level regulations can be hugely impactful at national and global scales, and how listeners could get involved in state-level work to make a real difference on lots of pressing problems.
- And plenty more.

Chapters:
- Cold open (00:00:00)
- Luisa's intro (00:00:57)
- The interview begins (00:02:30)
- What risks from AI does SB 1047 try to address? (00:03:10)
- Supporters and critics of the bill (00:11:03)
- Misunderstandings about the bill (00:24:07)
- Competition, open source, and liability concerns (00:30:56)
- Model size thresholds (00:46:24)
- How is SB 1047 different from the executive order? (00:55:36)
- Objections Nathan is sympathetic to (00:58:31)
- Current status of the bill (01:02:57)
- How can listeners get involved in work like this? (01:05:00)
- Luisa's outro (01:11:52)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
"This is a group of animals I think people are particularly unfamiliar with. They are especially poorly covered in our science curriculum; they are especially poorly understood, because people don’t spend as much time learning about them at museums; and they’re just harder to spend time with in a lot of ways, I think, for people. So people have pets that are vertebrates that they take care of across the taxonomic groups, and people get familiar with those from going to zoos and watching their behaviours there, and watching nature documentaries and more. But I think the insects are still really underappreciated, and that means that our intuitions are probably more likely to be wrong than with those other groups." —Meghan Barrett

In today’s episode, host Luisa Rodriguez speaks to Meghan Barrett — insect neurobiologist and physiologist at Indiana University Indianapolis and founding director of the Insect Welfare Research Society — about her work to understand insects’ potential capacity for suffering, and what that might mean for how humans currently farm and use insects.

If you're interested in getting involved with this work, check out Meghan's recent blog post: I’m into insect welfare! What’s next?

Links to learn more, highlights, and full transcript.

They cover:
- The scale of potential insect suffering in the wild, on farms, and in labs.
- Examples from cutting-edge insect research, like how depression- and anxiety-like states can be induced in fruit flies and successfully treated with human antidepressants.
- How size bias might help explain why many people assume insects can’t feel pain.
- Practical solutions that Meghan’s team is working on to improve farmed insect welfare, such as standard operating procedures for more humane slaughter methods.
- Challenges facing the nascent field of insect welfare research, and where the main research gaps are.
- Meghan’s personal story of how she went from being sceptical of insect pain to working as an insect welfare scientist, and her advice for others who want to improve the lives of insects.
- And much more.

Chapters:
- Cold open (00:00:00)
- Luisa's intro (00:01:02)
- The interview begins (00:03:06)
- What is an insect? (00:03:22)
- Size diversity (00:07:24)
- How important is brain size for sentience? (00:11:27)
- Offspring, parental investment, and lifespan (00:19:00)
- Cognition and behaviour (00:23:23)
- The scale of insect suffering (00:27:01)
- Capacity to suffer (00:35:56)
- The empirical evidence for whether insects can feel pain (00:47:18)
- Nociceptors (01:00:02)
- Integrated nociception (01:08:39)
- Response to analgesia (01:16:17)
- Analgesia preference (01:25:57)
- Flexible self-protective behaviour (01:31:19)
- Motivational tradeoffs and associative learning (01:38:45)
- Results (01:43:31)
- Reasons to be sceptical (01:47:18)
- Meghan’s probability of sentience in insects (02:10:20)
- Views of the broader entomologist community (02:18:18)
- Insect farming (02:26:52)
- How much to worry about insect farming (02:40:56)
- Inhumane slaughter and disease in insect farms (02:44:45)
- Inadequate nutrition, density, and photophobia (02:53:50)
- Most humane ways to kill insects at home (03:01:33)
- Challenges in researching this (03:07:53)
- Most promising reforms (03:18:44)
- Why Meghan is hopeful about working with the industry (03:22:17)
- Careers (03:34:08)
- Insect Welfare Research Society (03:37:16)
- Luisa's outro (03:47:01)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough?

That’s what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the original cofounders of Anthropic, its current head of training, and a big fan of Anthropic’s “responsible scaling policy” (or “RSP”). Anthropic is the most safety-focused of the AI companies, known for a culture that treats the risks of its work as deadly serious.

Links to learn more, highlights, video, and full transcript.

As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it’s trained, but before it’s in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can’t be stolen by terrorists interested in using it that way.

As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or demonstrate that such a goal can never arise.

Nick points out what he sees as the biggest virtues of the RSP approach, and then Rob pushes him on some of the best objections he’s found to RSPs being up to the task of keeping AI safe and beneficial. The two also discuss whether it's essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them.

In addition to all of that, Nick and Rob talk about:
- What Nick thinks are the current bottlenecks in AI progress: people and time (rather than data or compute).
- What it’s like working in AI safety research at the leading edge, and whether pushing forward capabilities (even in the name of safety) is a good idea.
- What it’s like working at Anthropic, and how to get the skills needed to help with the safe development of AI.

And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at podcast@80000hours.org.

Chapters:
- Cold open (00:00:00)
- Rob’s intro (00:01:00)
- The interview begins (00:03:44)
- Scaling laws (00:04:12)
- Bottlenecks to further progress in making AIs helpful (00:08:36)
- Anthropic’s responsible scaling policies (00:14:21)
- Pros and cons of the RSP approach for AI safety (00:34:09)
- Alternatives to RSPs (00:46:44)
- Is an internal audit really the best approach? (00:51:56)
- Making promises about things that are currently technically impossible (01:07:54)
- Nick’s biggest reservations about the RSP approach (01:16:05)
- Communicating “acceptable” risk (01:19:27)
- Should Anthropic’s RSP have wider safety buffers? (01:26:13)
- Other impacts on society and future work on RSPs (01:34:01)
- Working at Anthropic (01:36:28)
- Engineering vs research (01:41:04)
- AI safety roles at Anthropic (01:48:31)
- Should concerned people be willing to take capabilities roles? (01:58:20)
- Recent safety work at Anthropic (02:10:05)
- Anthropic culture (02:14:35)
- Overrated and underrated AI applications (02:22:06)
- Rob’s outro (02:26:36)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore
"In the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. And as soon as [some courageous scientists] looked for evidence, it showed that this practice was completely indefensible and then the clinical practice was changed. People don’t need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it shows you how deep that overconfidence can run and how problematic it can be. It just underlines this point that overconfidence about sentience is everywhere and is dangerous." —Jonathan Birch

In today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!)

Links to learn more, highlights, and full transcript.

They cover:
- Candidates for sentience, such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIs.
- Humanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that that certainty is completely unjustified.
- Chilling tales about overconfident policies that probably caused significant suffering for decades.
- How policymakers can act ethically given real uncertainty.
- Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds equally sentient to the biological versions.
- How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.
- Why Jonathan is so excited about citizens’ assemblies.
- Jonathan’s conversation with the Dalai Lama about whether insects are sentient.
- And plenty more.

Chapters:
- Cold open (00:00:00)
- Luisa’s intro (00:01:20)
- The interview begins (00:03:04)
- Why does sentience matter? (00:03:31)
- Inescapable uncertainty about other minds (00:05:43)
- The “zone of reasonable disagreement” in sentience research (00:10:31)
- Disorders of consciousness: comas and minimally conscious states (00:17:06)
- Foetuses and the cautionary tale of newborn pain (00:43:23)
- Neural organoids (00:55:49)
- AI sentience and whole brain emulation (01:06:17)
- Policymaking at the edge of sentience (01:28:09)
- Citizens’ assemblies (01:31:13)
- The UK’s Sentience Act (01:39:45)
- Ways Jonathan has changed his mind (01:47:26)
- Careers (01:54:54)
- Discussing animal sentience with the Dalai Lama (01:59:08)
- Luisa’s outro (02:01:04)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
"Computational systems have literally millions of physical and conceptual components, and around 98% of them are embedded into your infrastructure without you ever having heard of them. And an inordinate amount of them can lead to a catastrophic failure of your security assumptions. And because of this, the Iranian secret nuclear programme failed to prevent a breach, most US agencies failed to prevent multiple breaches, most US national security agencies failed to prevent breaches. So ensuring your system is truly secure against highly resourced and dedicated attackers is really, really hard." —Sella Nevo

In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.

Links to learn more, highlights, and full transcript.

They cover:
- Real-world examples of sophisticated security breaches, and what we can learn from them.
- Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.
- The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.
- The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.
- New security measures that Sella hopes can help mitigate the growing risks.
- Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.
- And plenty more.

Also, RAND is currently hiring for roles in technical and policy information security — check them out if you're interested in this field!

Chapters:
- Cold open (00:00:00)
- Luisa’s intro (00:00:56)
- The interview begins (00:02:30)
- The importance of securing the model weights of frontier AI models (00:03:01)
- The most sophisticated and surprising security breaches (00:10:22)
- AI models being leaked (00:25:52)
- Researching for the RAND report (00:30:11)
- Who tries to steal model weights? (00:32:21)
- Malicious code and exploiting zero-days (00:42:06)
- Human insiders (00:53:20)
- Side-channel attacks (01:04:11)
- Getting access to air-gapped networks (01:10:52)
- Model extraction (01:19:47)
- Reducing and hardening authorised access (01:38:52)
- Confidential computing (01:48:05)
- Red-teaming and security testing (01:53:42)
- Careers in information security (01:59:54)
- Sella’s work on flood forecasting systems (02:01:57)
- Luisa’s outro (02:04:51)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
"If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik Buterin

Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.

Links to learn more, highlights, video, and full transcript.

Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.

Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.

But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.

The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.

Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply by August 2, 2024. You don’t need a business idea yet — just the hustle to start a technology company.

In addition to all of that, host Rob Wiblin and Vitalik discuss:
- AI regulation disagreements being less about AI in particular, and more about whether you’re typically more scared of anarchy or totalitarianism.
- Vitalik’s updated p(doom).
- Whether the social impact of blockchain and crypto has been a disappointment.
- Whether humans can merge with AI, and if that’s even desirable.
- The most valuable defensive technologies to accelerate.
- How to trustlessly identify what everyone will agree is misinformation.
- Whether AGI is offence-dominant or defence-dominant.
- Vitalik’s updated take on effective altruism.
- Plenty more.

Chapters:
- Cold open (00:00:00)
- Rob’s intro (00:00:56)
- The interview begins (00:04:47)
- Three different views on technology (00:05:46)
- Vitalik’s updated probability of doom (00:09:25)
- Technology is amazing, and AI is fundamentally different from other tech (00:15:55)
- Fear of totalitarianism and finding middle ground (00:22:44)
- Should AI be more centralised or more decentralised? (00:42:20)
- Humans merging with AIs to remain relevant (01:06:59)
- Vitalik’s “d/acc” alternative (01:18:48)
- Biodefence (01:24:01)
- Pushback on Vitalik’s vision (01:37:09)
- How much do people actually disagree? (01:42:14)
- Cybersecurity (01:47:28)
- Information defence (02:01:44)
- Is AI more offence-dominant or defence-dominant? (02:21:00)
- How Vitalik communicates among different camps (02:25:44)
- Blockchain applications with social impact (02:34:37)
- Rob’s outro (03:01:00)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
"You don’t necessarily need world-leading compute to create highly risky AI systems. The biggest biological design tools right now, like AlphaFold’s, are orders of magnitude smaller in terms of compute requirements than the frontier large language models. And China has the compute to train these systems. And if you’re, for instance, building a cyber agent or something that conducts cyberattacks, perhaps you also don’t need the general reasoning or mathematical ability of a large language model. You train on a much smaller subset of data. You fine-tune it on a smaller subset of data. And those systems — one, if China intentionally misuses them, and two, if they get proliferated because China just releases them as open source, or China does not have as comprehensive AI regulations — this could cause a lot of harm in the world." —Sihao Huang

In today’s episode, host Luisa Rodriguez speaks to Sihao Huang — a technology and security policy fellow at RAND — about his work on AI governance and tech policy in China, what’s happening on the ground in China in AI development and regulation, and the importance of US–China cooperation on AI governance.

Links to learn more, highlights, video, and full transcript.

They cover:
- Whether the US and China are in an AI race, and the global implications if they are.
- The state of the art of AI in China.
- China’s response to American export controls, and whether China is on track to indigenise its semiconductor supply chain.
- How China’s current AI regulations try to maintain a delicate balance between fostering innovation and keeping strict information control over the Chinese people.
- Whether China’s extensive AI regulations signal real commitment to safety or just censorship — and how AI is already used in China for surveillance and authoritarian control.
- How advancements in AI could reshape global power dynamics, and Sihao’s vision of international cooperation to manage this responsibly.
- And plenty more.

Chapters:
- Cold open (00:00:00)
- Luisa's intro (00:01:02)
- The interview begins (00:02:06)
- Is China in an AI race with the West? (00:03:20)
- How advanced is Chinese AI? (00:15:21)
- Bottlenecks in Chinese AI development (00:22:30)
- China and AI risks (00:27:41)
- Information control and censorship (00:31:32)
- AI safety research in China (00:36:31)
- Could China be a source of catastrophic AI risk? (00:41:58)
- AI enabling human rights abuses and undermining democracy (00:50:10)
- China’s semiconductor industry (00:59:47)
- China’s domestic AI governance landscape (01:29:22)
- China’s international AI governance strategy (01:49:56)
- Coordination (01:53:56)
- Track two dialogues (02:03:04)
- Misunderstandings Western actors have about Chinese approaches (02:07:34)
- Complexity thinking (02:14:40)
- Sihao’s pet bacteria hobby (02:20:34)
- Luisa's outro (02:22:47)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
"Ring one: total annihilation; no cellular life remains. Ring two, another three-mile diameter out: everything is ablaze. Ring three, another three or five miles out on every side: third-degree burns among almost everyone. You are talking about people who may have gone down into the secret tunnels beneath Washington, DC, escaped from the Capitol and such: people are now broiling to death; people are dying from carbon monoxide poisoning; people who followed instructions and went into their basement are dying of suffocation. Everywhere there is death, everywhere there is fire.

"That iconic mushroom stem and cap that represents a nuclear blast — when a nuclear weapon has been exploded on a city — that stem and cap is made up of people. What is left over of people and of human civilisation." —Annie Jacobsen

In today’s episode, host Luisa Rodriguez speaks to Pulitzer Prize finalist and New York Times bestselling author Annie Jacobsen about her latest book, Nuclear War: A Scenario.

Links to learn more, highlights, and full transcript.

They cover:
- The most harrowing findings from Annie’s hundreds of hours of interviews with nuclear experts.
- What happens during the window that the US president would have to decide about nuclear retaliation after hearing news of a possible nuclear attack.
- The horrific humanitarian impacts on millions of innocent civilians from nuclear strikes.
- The overlooked dangers of a nuclear-triggered electromagnetic pulse (EMP) attack crippling critical infrastructure within seconds.
- How we’re on the razor’s edge between the logic of nuclear deterrence and catastrophe, and urgently need reforms to move away from hair-trigger alert nuclear postures.
- And plenty more.

Chapters:
- Cold open (00:00:00)
- Luisa’s intro (00:01:03)
- The interview begins (00:02:28)
- The first 24 minutes (00:02:59)
- The Black Book and presidential advisors (00:13:35)
- False alarms (00:40:43)
- Russian misperception of US counterattack (00:44:50)
- A narcissistic madman with a nuclear arsenal (01:00:13)
- Is escalation inevitable? (01:02:53)
- Firestorms and rings of annihilation (01:12:56)
- Nuclear electromagnetic pulses (01:27:34)
- Continuity of government (01:36:35)
- Rays of hope (01:41:07)
- Where we’re headed (01:43:52)
- Avoiding politics (01:50:34)
- Luisa’s outro (01:52:29)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
This is the second part of our marathon interview with Carl Shulman. The first episode is on the economy and national security after AGI. You can listen to them in either order!

If we develop artificial general intelligence that's reasonably aligned with human goals, it could put a fast and near-free superhuman advisor in everyone's pocket. How would that affect culture, government, and our ability to act sensibly and coordinate together?

It's common to worry that AI advances will lead to a proliferation of misinformation and further disconnect us from reality. But in today's conversation, AI expert Carl Shulman argues that this underrates the powerful positive applications the technology could have in the public sphere.

Links to learn more, highlights, and full transcript.

As Carl explains, today the most important questions we face as a society remain in the "realm of subjective judgement" — without any "robust, well-founded scientific consensus on how to answer them." But if AI 'evals' and interpretability advance to the point that it's possible to demonstrate which AI models have truly superhuman judgement and give consistently trustworthy advice, society could converge on firm or 'best-guess' answers to far more cases.

If the answers are publicly visible and confirmable by all, the pressure on officials to act on that advice could be great. That's because when it's hard to assess if a line has been crossed or not, we usually give people much more discretion. For instance, a journalist inventing an interview that never happened will get fired because it's an unambiguous violation of honesty norms — but so long as there's no universally agreed-upon standard for selective reporting, that same journalist will have substantial discretion to report information that favours their preferred view more often than that which contradicts it.

Similarly, today we have no generally agreed-upon way to tell when a decision-maker has behaved irresponsibly. But if experience clearly shows that following AI advice is the wise move, not seeking or ignoring such advice could become more like crossing a red line — less like making an understandable mistake and more like fabricating your balance sheet.

To illustrate the possible impact, Carl imagines how the COVID pandemic could have played out in the presence of AI advisors that everyone agrees are exceedingly insightful and reliable. But in practice, a significantly superhuman AI might suggest novel approaches better than any we can suggest.

In the past we've usually found it easier to predict how 'hard' technologies like planes or factories will change things than to imagine the social shifts that those technologies will create — and the same is likely happening for AI.

Carl Shulman and host Rob Wiblin discuss the above, as well as:
- The risk of society using AI to lock in its values.
- The difficulty of preventing coups once AI is key to the military and police.
- What international treaties we need to make this go well.
- How to make AI superhuman at forecasting the future.
- Whether AI will be able to help us with intractable philosophical questions.
- Whether we need dedicated projects to make wise AI advisors, or if it will happen automatically as models scale.
- Why Carl doesn't support AI companies voluntarily pausing AI research, but sees a stronger case for binding international controls once we're closer to 'crunch time.'
- Opportunities for listeners to contribute to making the future go well.

Chapters:
- Cold open (00:00:00)
- Rob’s intro (00:01:16)
- The interview begins (00:03:24)
- COVID-19 concrete example (00:11:18)
- Sceptical arguments against the effect of AI advisors (00:24:16)
- Value lock-in (00:33:59)
- How democracies avoid coups (00:48:08)
- Where AI could most easily help (01:00:25)
- AI forecasting (01:04:30)
- Application to the most challenging topics (01:24:03)
- How to make it happen (01:37:50)
- International negotiations and coordination and auditing (01:43:54)
- Opportunities for listeners (02:00:09)
- Why Carl doesn't support enforced pauses on AI research (02:03:58)
- How Carl is feeling about the future (02:15:47)
- Rob’s outro (02:17:37)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order!

The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?

Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they're creating.

Links to learn more, highlights, and full transcript.

Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost $100s, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.

It's a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.

It's a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.

It's a world where the technical challenges around control of robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and prompting a rush to build billions of them and cash in.

As the economy grows, each person could effectively afford the practical equivalent of a team of hundreds of machine 'people' to help them with every aspect of their lives.

And with growth rates this high, it doesn't take long to run up against Earth's physical limits — in this case, the toughest to engineer your way out of is the Earth's ability to release waste heat. If this machine economy and its insatiable demand for power generates more heat than the Earth radiates into space, then it will rapidly heat up and become uninhabitable for humans and other animals.

This creates pressure to move economic activity off-planet. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.

These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop AGI that could accomplish everything that the most productive humans can, using the same energy supply?

In today's episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that’s realistic or just a cool story, asking:
- If we're heading towards the above, how come economic growth is slow now and not really increasing?
- Why have computers and computer chips had so little effect on economic productivity so far?
- Are self-replicating biological systems a good comparison for self-replicating machine systems?
- Isn't this just too crazy and weird to be plausible?
- What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
- Might there not be severely declining returns to bigger brains and more training?
- Wouldn't humanity get scared and pull the brakes if such a transformation kicked off?
- If this is right, how come economists don't agree?

Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?

Chapters:
- Cold open (00:00:00)
- Rob’s intro (00:01:00)
- Transitioning to a world where AI systems do almost all the work (00:05:21)
- Economics after an AI explosion (00:14:25)
- Objection: Shouldn’t we be seeing economic growth rates increasing today? (00:59:12)
- Objection: Speed of doubling time (01:07:33)
- Objection: Declining returns to increases in intelligence? (01:11:59)
- Objection: Physical transformation of the environment (01:17:39)
- Objection: Should we expect an increased demand for safety and security? (01:29:14)
- Objection: “This sounds completely whack” (01:36:10)
- Income and wealth distribution (01:48:02)
- Economists and the intelligence explosion (02:13:31)
- Baumol effect arguments (02:19:12)
- Denying that robots can exist (02:27:18)
- Classic economic growth models (02:36:12)
- Robot nannies (02:48:27)
- Slow integration of decision-making and authority power (02:57:39)
- Economists’ mistaken heuristics (03:01:07)
- Moral status of AIs (03:11:45)
- Rob’s outro (04:11:47)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
"One of the most amazing things about planet Earth is that there are complex bags of mostly water — you and me — and we can look up at the stars, and look into our brains, and try to grapple with the most complex, difficult questions that there are. And even if we can't make great progress on them and don't come to completely satisfying solutions, just the fact of trying to grapple with these things is kind of the universe looking at itself and trying to understand itself. So we're kind of this bright spot of reflectiveness in the cosmos, and I think we should celebrate that fact for its own intrinsic value and interestingness." —Eric Schwitzgebel

In today's episode, host Luisa Rodriguez speaks to Eric Schwitzgebel — professor of philosophy at UC Riverside — about some of the most bizarre and unintuitive claims from his recent book, The Weirdness of the World.

Links to learn more, highlights, and full transcript.

They cover:

- Why our intuitions seem so unreliable for answering fundamental questions about reality.
- What the materialist view of consciousness is, and how it might imply some very weird things — like that the United States could be a conscious entity.
- Thought experiments that challenge our intuitions — like supersquids that think and act through detachable tentacles, and intelligent species whose brains are made up of a million bugs.
- Eric's claim that consciousness and cosmology are universally bizarre and dubious.
- How to think about borderline states of consciousness, and whether consciousness is more like a spectrum or more like a light flicking on.
- The nontrivial possibility that we could be dreaming right now, and the ethical implications if that's true.
- Why it's worth it to grapple with the universe's most complex questions, even if we can't find completely satisfying solutions.
- And much more.

Chapters:
Cold open (00:00:00)
Luisa's intro (00:01:10)
Bizarre and dubious philosophical theories (00:03:13)
The materialist view of consciousness (00:13:55)
What would it mean for the US to be conscious? (00:19:46)
Supersquids and antheads thought experiments (00:22:37)
Alternatives to the materialist perspective (00:35:19)
Are our intuitions useless for thinking about these things? (00:42:55)
Key ingredients for consciousness (00:46:46)
Reasons to think the US isn't conscious (01:01:15)
Overlapping consciousnesses (01:09:32)
Borderline cases of consciousness (01:13:22)
Are we dreaming right now? (01:40:29)
Will we ever have answers to these dubious and bizarre questions? (01:56:16)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
"You can't charge what something is worth during a pandemic. So we estimated that the value of one course of COVID vaccine in January 2021 was over $5,000. They were selling for between $6 and $40. So nothing like their social value. Now, don't get me wrong. I don't think that they should have charged $5,000 or $6,000. That's not ethical. It's also not economically efficient, because they didn't cost $5,000 at the marginal cost. So you actually want low price, getting out to lots of people.

"But it shows you that the market is not going to reward people who do the investment in preparation for a pandemic — because when a pandemic hits, they're not going to get the reward in line with the social value. They may even have to charge less than they would in a non-pandemic time. So prepping for a pandemic is not an efficient market strategy if I'm a firm, but it's a very efficient strategy for society, and so we've got to bridge that gap." —Rachel Glennerster

In today's episode, host Luisa Rodriguez speaks to Rachel Glennerster — associate professor of economics at the University of Chicago and a pioneer in the field of development economics — about how her team's new Market Shaping Accelerator aims to leverage market forces to drive innovations that can solve pressing world problems.

Links to learn more, highlights, and full transcript.

They cover:

- How market failures and misaligned incentives stifle critical innovations for social goods like pandemic preparedness, climate change interventions, and vaccine development.
- How "pull mechanisms" like advance market commitments (AMCs) can help overcome these challenges — including concrete examples like how one AMC led to speeding up the development of three vaccines which saved around 700,000 lives in low-income countries.
- The challenges in designing effective pull mechanisms, from design to implementation.
- Why it's important to tie innovation incentives to real-world impact and uptake, not just the invention of a new technology.
- The massive benefits of accelerating vaccine development, in some cases, even if it's only by a few days or weeks.
- The case for a $6 billion advance market commitment to spur work on a universal COVID-19 vaccine.
- The shortlist of ideas from the Market Shaping Accelerator's recent Innovation Challenge that use pull mechanisms to address market failures around improving indoor air quality, repurposing generic drugs for alternative uses, and developing eco-friendly air conditioners for a warming planet.
- "Best Buys" and "Bad Buys" for improving education systems in low- and middle-income countries, based on evidence from over 400 studies.
- Lessons from Rachel's career at the forefront of global development, and how insights from economics can drive transformative change.
- And much more.

Chapters:
The Market Shaping Accelerator (00:03:33)
Pull mechanisms for innovation (00:13:10)
Accelerating the pneumococcal and COVID vaccines (00:19:05)
Advance market commitments (00:41:46)
Is this uncertainty hard for funders to plan around? (00:49:17)
The story of the malaria vaccine that wasn't (00:57:15)
Challenges with designing and implementing AMCs and other pull mechanisms (01:01:40)
Universal COVID vaccine (01:18:14)
Climate-resilient crops (01:34:09)
The Market Shaping Accelerator's Innovation Challenge (01:45:40)
Indoor air quality to reduce respiratory infections (01:49:09)
Repurposing generic drugs (01:55:50)
Clean air conditioning units (02:02:41)
Broad-spectrum antivirals for pandemic prevention (02:09:11)
Improving education in low- and middle-income countries (02:15:53)
What's still weird for Rachel about living in the US? (02:45:06)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
"Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I'm just making this up — but we give people superforecasting tests when they're doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we're making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we're a year ahead of where we would have been if we hadn't done this kind of stuff.

"Now, suppose in 10 years we're going to discover a cheap new genetic engineering technology that anyone can use in the world if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we've brought that forward, and that happens at year nine instead of year 10 because of some of these interventions we did, now we start to think that if that's really bad, if these people using this technology causes huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster." —Matt Clancy

In today's episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy's Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress.

Links to learn more, highlights, and full transcript.

They cover:

- Whether scientific progress is actually net positive for humanity.
- Scenarios where accelerating science could lead to existential risks, such as advanced biotechnology being used by bad actors.
- Why Matt thinks metascience research and targeted funding could improve the scientific process and better incentivise outcomes that are good for humanity.
- Whether Matt trusts domain experts or superforecasters more when estimating how the future will turn out.
- Why Matt is sceptical that AGI could really cause explosive economic growth.
- And much more.

Chapters:
Is scientific progress net positive for humanity? (00:03:00)
The time of biological perils (00:17:50)
Modelling the benefits of science (00:25:48)
Income and health gains from scientific progress (00:32:49)
Discount rates (00:42:14)
How big are the returns to science? (00:51:08)
Forecasting global catastrophic biological risks from scientific progress (01:05:20)
What's the value of scientific progress, given the risks? (01:15:09)
Factoring in extinction risk (01:21:56)
How science could reduce extinction risk (01:30:18)
Are we already too late to delay the time of perils? (01:42:38)
Domain experts vs superforecasters (01:46:03)
What Open Philanthropy's Innovation Policy programme settled on (01:53:47)
Explosive economic growth (02:06:28)
Matt's favourite thought experiment (02:34:57)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
"Earth economists, when they measure how bad the potential for exploitation is, they look at things like, how is labour mobility? How much possibility do labourers have otherwise to go somewhere else? Well, if you are on the one company town on Mars, your labour mobility is zero, which has never existed on Earth. Even in your stereotypical West Virginian company town run by immigrant labour, there's still, by definition, a train out. On Mars, you might not even be in the launch window. And even if there are five other company towns or five other settlements, they're not necessarily rated to take more humans. They have their own oxygen budget, right?

"And so economists use numbers like these, like labour mobility, as a way to put an equation and estimate the ability of a company to set noncompetitive wages or to set noncompetitive work conditions. And essentially, on Mars you're setting it to infinity." — Zach Weinersmith

In today's episode, host Luisa Rodriguez speaks to Zach Weinersmith — the cartoonist behind Saturday Morning Breakfast Cereal — about the latest book he wrote with his wife Kelly: A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?

Links to learn more, highlights, and full transcript.

They cover:

- Why space travel is suddenly getting a lot cheaper and re-igniting enthusiasm around space settlement.
- What Zach thinks are the best and worst arguments for settling space.
- Zach's journey from optimistic about space settlement to a self-proclaimed "space bastard" (pessimist).
- How little we know about how microgravity and radiation affect even adults, much less the children potentially born in a space settlement.
- A rundown of where we could settle in the solar system, and the major drawbacks of even the most promising candidates.
- Why digging bunkers or underwater cities on Earth would beat fleeing to Mars in a catastrophe.
- How new space settlements could look a lot like old company towns — and whether or not that's a bad thing.
- The current state of space law and how it might set us up for international conflict.
- How space cannibalism legal loopholes might work on the International Space Station.
- And much more.

Chapters:
Space optimism and space bastards (00:03:04)
Bad arguments for why we should settle space (00:14:01)
Superficially plausible arguments for why we should settle space (00:28:54)
Is settling space even biologically feasible? (00:32:43)
Sex, pregnancy, and child development in space (00:41:41)
Where's the best space place to settle? (00:55:02)
Creating self-sustaining habitats (01:15:32)
What about AI advances? (01:26:23)
A roadmap for settling space (01:33:45)
Space law (01:37:22)
Space signalling and propaganda (01:51:28)
Space war (02:00:40)
Mining asteroids (02:06:29)
Company towns and communes in space (02:10:55)
Sending digital minds into space (02:26:37)
The most promising space governance models (02:29:07)
The tragedy of the commons (02:35:02)
The tampon bandolier and other bodily functions in space (02:40:14)
Is space cannibalism legal? (02:47:09)
The pregnadrome and other bizarre proposals (02:50:02)
Space sexism (02:58:38)
What excites Zach about the future (03:02:57)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
"I work in a place called Uttar Pradesh, which is a state in India with 240 million people. One in every 33 people in the whole world lives in Uttar Pradesh. It would be the fifth largest country if it were its own country. And if it were its own country, you'd probably know about its human development challenges, because it would have the highest neonatal mortality rate of any country except for South Sudan and Pakistan. Forty percent of children there are stunted. Only two-thirds of women are literate. So Uttar Pradesh is a place where there are lots of health challenges.

"And then even within that, we're working in a district called Bahraich, where about 4 million people live. So even that district of Uttar Pradesh is the size of a country, and if it were its own country, it would have a higher neonatal mortality rate than any other country. In other words, babies born in Bahraich district are more likely to die in their first month of life than babies born in any country around the world." — Dean Spears

In today's episode, host Luisa Rodriguez speaks to Dean Spears — associate professor of economics at the University of Texas at Austin and founding director of r.i.c.e. — about his experience implementing a surprisingly low-tech but highly cost-effective kangaroo mother care programme in Uttar Pradesh, India, to save the lives of vulnerable newborn infants.

Links to learn more, highlights, and full transcript.

They cover:

- The shockingly high neonatal mortality rates in Uttar Pradesh, India, and how social inequality and gender dynamics contribute to poor health outcomes for both mothers and babies.
- The remarkable benefits for vulnerable newborns that come from skin-to-skin contact and breastfeeding support.
- The challenges and opportunities that come with working with a government hospital to implement new, evidence-based programmes.
- How the currently small programme might be scaled up to save more newborns' lives in other regions of Uttar Pradesh and beyond.
- How targeted health interventions stack up against direct cash transfers.
- Plus, a sneak peek into Dean's new book, which explores the looming global population peak that's expected around 2080, and the consequences of global depopulation.
- And much more.

Chapters:
Why is low birthweight a major problem in Uttar Pradesh? (00:02:45)
Neonatal mortality and maternal health in Uttar Pradesh (00:06:10)
Kangaroo mother care (00:12:08)
What would happen without this intervention? (00:16:07)
Evidence of KMC's effectiveness (00:18:15)
Longer-term outcomes (00:32:14)
GiveWell's support and implementation challenges (00:41:13)
How can KMC be so cost effective? (00:52:38)
Programme evaluation (00:57:21)
Is KMC better than direct cash transfers? (00:59:12)
Expanding the programme and what skills are needed (01:01:29)
Fertility and population decline (01:07:28)
What advice Dean would give his younger self (01:16:09)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
"The constraint right now on factory farming is how far can you push the biology of these animals? But AI could remove that constraint. It could say, 'Actually, we can push them further in these ways and these ways, and they still stay alive. And we've modelled out every possibility and we've found that it works.' I think another possibility, which I don't understand as well, is that AI could lock in current moral values. And I think in particular there's a risk that if AI is learning from what we do as humans today, the lesson it's going to learn is that it's OK to tolerate mass cruelty, so long as it occurs behind closed doors. I think there's a risk that if it learns that, then it perpetuates that value, and perhaps slows human moral progress on this issue." —Lewis Bollard

In today's episode, host Luisa Rodriguez speaks to Lewis Bollard — director of the Farm Animal Welfare programme at Open Philanthropy — about the promising progress and future interventions to end the worst factory farming practices still around today.

Links to learn more, highlights, and full transcript.

They cover:

- The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention.
- Work to improve farmed animal welfare that Open Philanthropy is excited about funding.
- The amazing recent progress made in farm animal welfare — including regulatory attention in the EU and a big win at the US Supreme Court — and the work that still needs to be done.
- The occasional tension between ending factory farming and curbing climate change.
- How AI could transform factory farming for better or worse — and Lewis's fears that the technology will just help us maximise cruelty in the name of profit.
- How Lewis has updated his opinions or grantmaking as a result of new research on the "moral weights" of different species.
- Lewis's personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering.
- How listeners can get involved in the growing movement to end factory farming — from career and volunteer opportunities to impactful donations.
- And much more.

Chapters:
Common objections to ending factory farming (00:13:21)
Potential solutions (00:30:55)
Cage-free reforms (00:34:25)
Broiler chicken welfare (00:46:48)
Do companies follow through on these commitments? (01:00:21)
Fish welfare (01:05:02)
Alternatives to animal proteins (01:16:36)
Farm animal welfare in Asia (01:26:00)
Farm animal welfare in Europe (01:30:45)
Animal welfare science (01:42:09)
Approaches Lewis is less excited about (01:52:10)
Will we end factory farming in our lifetimes? (01:56:36)
Effect of AI (01:57:59)
Recent big wins for farm animals (02:07:38)
How animal advocacy has changed since Lewis first got involved (02:15:57)
Response to the Moral Weight Project (02:19:52)
How to help (02:28:14)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine — which he definitely is. As the author of the Substack Don't Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out — and he has strong opinions about almost every aspect of it.

Links to learn more, summary, and full transcript.

In today's episode, host Rob Wiblin asks Zvi for his takes on:

- US-China negotiations
- Whether AI progress has stalled
- The biggest wins and losses for alignment in 2023
- EU and White House AI regulations
- Which major AI lab has the best safety strategy
- The pros and cons of the Pause AI movement
- Recent breakthroughs in capabilities
- In what situations it's morally acceptable to work at AI labs

Whether you agree or disagree with his views, Zvi is super informed and brimming with concrete details.

Zvi and Rob also talk about:

- The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be.
- The "sleeper agent" issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is.
- Why Zvi disagrees with 80,000 Hours' advice about gaining career capital to have a positive impact.
- Zvi's project to identify the most strikingly horrible and neglected policy failures in the US, and how Zvi founded a new think tank (Balsa Research) to identify innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply.
- Why Zvi thinks that improving people's prosperity and housing can make them care more about existential risks like AI.
- An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels.
- And plenty more.

Chapters:
Zvi's AI-related worldview (00:03:41)
Sleeper agents (00:05:55)
Safety plans of the three major labs (00:21:47)
Misalignment vs misuse vs structural issues (00:50:00)
Should concerned people work at AI labs? (00:55:45)
Pause AI campaign (01:30:16)
Has progress on useful AI products stalled? (01:38:03)
White House executive order and US politics (01:42:09)
Reasons for AI policy optimism (01:56:38)
Zvi's day-to-day (02:09:47)
Big wins and losses on safety and alignment in 2023 (02:12:29)
Other unappreciated technical breakthroughs (02:17:54)
Concrete things we can do to mitigate risks (02:31:19)
Balsa Research and the Jones Act (02:34:40)
The National Environmental Policy Act (02:50:36)
Housing policy (02:59:59)
Underrated rationalist worldviews (03:16:22)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore
Today's release is a reading of our career review of AI governance and policy, written and narrated by Cody Fenwick.

Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks, and there are opportunities in the broad field of AI governance to positively shape how society responds to and prepares for the challenges posed by the technology.

Given the high stakes, pursuing this career path could be many people's highest-impact option. But they should be very careful not to accidentally exacerbate the threats rather than mitigate them.

If you want to check out the links, footnotes and figures in today's article, you can find those here.

Editing and audio proofing: Ben Cordell and Simon Monsour
Narration: Cody Fenwick
"When a friend comes to me with a decision, and they want my thoughts on it, very rarely am I trying to give them a really specific answer, like, 'I solved your problem.' What I'm trying to do often is give them other ways of thinking about what they're doing, or giving different framings. A classic example of this would be someone who's been working on a project for a long time and they feel really trapped by it. And someone says, 'Let's suppose you currently weren't working on the project, but you could join it. And if you joined, it would be exactly the state it is now. Would you join?' And they'd be like, 'Hell no!' It's a reframe. It doesn't mean you definitely shouldn't join, but it's a reframe that gives you a new way of looking at it." —Spencer Greenberg

In today's episode, host Rob Wiblin speaks for a fourth time with listener favourite Spencer Greenberg — serial entrepreneur and host of the Clearer Thinking podcast — about a grab-bag of topics that Spencer has explored since his last appearance on the show a year ago.

Links to learn more, summary, and full transcript.

They cover:

- How much money makes you happy — and the tricky methodological issues that come up trying to answer that question.
- The importance of hype in making valuable things happen.
- How to recognise warning signs that someone is untrustworthy or likely to hurt you.
- Whether Registered Reports are successfully solving reproducibility issues in science.
- The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles.
- The biggest and most harmful systemic mistakes we commit when making decisions, both individually and as groups.
- The potential harms of lightgassing, which is the opposite of gaslighting.
- How Spencer's team used non-statistical methods to test whether astrology works.
- Whether there's any social value in retaliation.
- And much more.

Chapters:
Does money make you happy? (00:05:54)
Hype vs value (00:31:27)
Warning signs that someone is bad news (00:41:25)
Integrity and reproducibility in social science research (00:57:54)
Personal principles (01:16:22)
Decision-making errors (01:25:56)
Lightgassing (01:49:23)
Astrology (02:02:26)
Game theory, tit for tat, and retaliation (02:20:51)
Parenting (02:30:00)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore
"[One] thing is just to spend time thinking about the kinds of things animals can do and what their lives are like. Just how hard a chicken will work to get to a nest box before she lays an egg, the amount of labour she's willing to go through to do that, to think about how important that is to her. And to realise that we can quantify that, and see how much they care, or to see that they get stressed out when fellow chickens are threatened and that they seem to have some sympathy for conspecifics.

"Those kinds of things make me say there is something in there that is recognisable to me as another individual, with desires and preferences and a vantage point on the world, who wants things to go a certain way and is frustrated and upset when they don't. And recognising the individuality, the perspective of nonhuman animals, for me, really challenges my tendency to not take them as seriously as I think I ought to, all things considered." — Bob Fischer

In today's episode, host Luisa Rodriguez speaks to Bob Fischer — senior research manager at Rethink Priorities and the director of the Society for the Study of Ethics and Animals — about Rethink Priorities's Moral Weight Project.

Links to learn more, summary, and full transcript.

They cover:

- The methods used to assess the welfare ranges and capacities for pleasure and pain of chickens, pigs, octopuses, bees, and other animals — and the limitations of that approach.
- Concrete examples of how someone might use the estimated moral weights to compare the benefits of animal vs human interventions.
- The results that most surprised Bob.
- Why the team used a hedonic theory of welfare to inform the project, and what non-hedonic theories of welfare might bring to the table.
- Thought experiments like Tortured Tim that test different philosophical assumptions about welfare.
- Confronting our own biases when estimating animal mental capacities and moral worth.
- The limitations of using neuron counts as a proxy for moral weights.
- How different types of risk aversion, like avoiding worst-case scenarios, could impact cause prioritisation.
- And plenty more.

Chapters:
Welfare ranges (00:10:19)
Historical assessments (00:16:47)
Method (00:24:02)
The present / absent approach (00:27:39)
Results (00:31:42)
Chickens (00:32:42)
Bees (00:50:00)
Salmon and limits of methodology (00:56:18)
Octopuses (01:00:31)
Pigs (01:27:50)
Surprises about the project (01:30:19)
Objections to the project (01:34:25)
Alternative decision theories and risk aversion (01:39:14)
Hedonism assumption (02:00:54)

Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Comments (14)

Mr.Robot

Check out our channel: https://www.youtube.com/watch?v=lD7J9avsFbw

Nov 18th

ncooty

More croaky neophytes who think they're the first people to set foot on every idea they have... then rush on to a podcast to preen.

May 7th

ncooty

@28:00: It's always impressive to hear how proud people are to rediscover things that have been researched, discussed, and known for centuries. Here, the guest stumbles through a case for ends justifying means. What could go wrong? This is like listening to intelligent but ignorant 8th graders... or perhaps 1st-yr grad students, who love to claim that a topic has never been studied before, especially if the old concept is wearing a new name.

Apr 17th

ncooty

@15:30: The guest is greatly overstating binary processing and signalling in neural networks. This is not at all a good explanation.

Apr 17th

ncooty

Ezra Klein's voice is a mix of nasal congestion, lisp, up-talk, vocal fry, New York, and inflated ego.

Apr 17th

ncooty

Rob's suggestion on price-gouging seems pretty poorly considered. There are plenty of historical examples of harmful price-gouging, and I can't think of any that were beneficial, particularly not after a disaster. This approach seems wrong economically and morally. Price-gouging after a disaster is almost always a pure windfall. It's economically infeasible to stockpile for very low-probability events, especially if transporting/delivering the good is difficult. Even if the good can be mass-produced and delivered quickly in response to a demand spike, Rob would be advocating for a moral approach that runs against the grain of human moral intuitions in post-disaster settings. In such contexts, we prefer need-driven distributive justice and, secondarily, equality-based distributive justice. Conversely, Rob is suggesting an equity-based approach wherein the input-output ratio of equity is based on someone's socio-economic status, which is not just irrelevant to their actions in the em

Apr 17th

ncooty

@18:02: Oh really, Rob? Does correlation now imply causation or was journalistic coverage randomly selected and randomly assigned? Good grief.

Apr 16th

ncooty

She seems to be just a self-promoting aggregator. I didn't hear her say anything insightful. Even when pressed multiple times about how her interests pertain to the mission of 80,000 Hours, she just blathered out a few platitudes about the need for people to think about things (or worse, thinking "around" issues).

Apr 15th

ncooty

@1:18:38: Lots of very sloppy thinking and careless wording. Many lazy false equivalences--e.g., @1:18:38: equating (a) democrats' fact-based complaints in 2016 (e.g., about foreign interference, the Electoral College), when Clinton conceded the following day and democrats reconciled themselves to Trump's presidency, with (b) republicans spreading bald-faced lies about stolen elections (only the ones they lost, of course) and actively trying to overthrow the election, including through force. If this was her effort to seem apolitical with a ham-handed "both sides do it... or probably will" comment, then she isn't intelligent enough to have public platforms.

Apr 15th

ncooty

Is Rob's voice being played back at 1.5x? Is he hyped up on coke?

Apr 15th

ncooty

I'm sure Dr Ord is well-intentioned, but I find his arguments here exceptionally weak and thin. (Also, the uhs and ums are rather annoying after a while.)

Nov 12th

ncooty

So much vocal fry

Nov 12th

Ina Iśka

Thank you, that was very inspirational!

Jul 18th

Sam McMahon

A thought-provoking, refreshing podcast

Oct 26th