80,000 Hours Podcast

Author: Rob, Luisa, Keiran, and the 80,000 Hours team
Description
Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.
Subscribe by searching for '80000 Hours' wherever you get podcasts.
Produced by Keiran Harris. Hosted by Rob Wiblin and Luisa Rodriguez.
217 Episodes
"We do have a tendency to anthropomorphise nonhumans — which means attributing human characteristics to them, even when they lack those characteristics. But we also have a tendency towards anthropodenial — which involves denying that nonhumans have human characteristics, even when they have them. And those tendencies are both strong, and they can both be triggered by different types of systems. So which one is stronger, which one is more probable, is again going to be contextual.

"But when we then consider that we, right now, are building societies and governments and economies that depend on the objectification, exploitation, and extermination of nonhumans, that — plus our speciesism, plus a lot of other biases and forms of ignorance that we have — gives us a strong incentive to err on the side of anthropodenial instead of anthropomorphism." — Jeff Sebo

In today’s episode, host Luisa Rodriguez interviews Jeff Sebo — director of the Mind, Ethics, and Policy Program at NYU — about preparing for a world with digital minds.

Links to learn more, summary, and full transcript.

They cover:
The non-negligible chance that AI systems will be sentient by 2030
What AI systems might want and need, and how that might affect our moral concepts
What happens when beings can copy themselves? Are they one person or multiple people? Does the original own the copy or does the copy have its own rights? Do copies get the right to vote?
What kind of legal and political status should AI systems have? Legal personhood? Political citizenship?
What happens when minds can be connected? If two minds are connected, and one does something illegal, is it possible to punish one but not the other?
The repugnant conclusion and the rebugnant conclusion
The experience of trying to build the field of AI welfare
What improv comedy can teach us about doing good in the world
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Is following important political and international news a civic duty — or is it our civic duty to avoid it?

It's common to think that 'staying informed' and checking the headlines every day is just what responsible adults do. But in today's episode, host Rob Wiblin is joined by economist Bryan Caplan to discuss the book Stop Reading the News: A Manifesto for a Happier, Calmer and Wiser Life — which argues that reading the news both makes us miserable and distorts our understanding of the world. Far from informing us and enabling us to improve the world, consuming the news distracts us, confuses us, and leaves us feeling powerless.

Links to learn more, summary, and full transcript.

In the first half of the episode, Bryan and Rob discuss various alleged problems with the news, including:
That it overwhelmingly provides us with information we can't usefully act on.
That it's very non-representative in what it covers, in particular favouring the negative over the positive and the new over the significant.
That it obscures the big picture, falling into the trap of thinking 'something important happens every day.'
That it's highly addictive, for many people chewing up 10% or more of their waking hours.
That regularly checking the news leaves us in a state of constant distraction and less able to engage in deep thought.
And plenty more.
Bryan and Rob conclude that if you want to understand the world, you're better off blocking news websites and spending your time on Wikipedia, Our World in Data, or reading a textbook. And if you want to generate political change, stop reading about problems you already know exist and instead write your political representative a physical letter — or better yet, go meet them in person.

In the second half of the episode, Bryan and Rob cover:
Why Bryan is pretty sceptical that AI is going to lead to extreme, rapid changes, or that there's a meaningful chance of it going terribly.
Bryan’s case that rational irrationality on the part of voters leads to many very harmful policy decisions.
How to allocate resources in space.
Bryan's experience homeschooling his kids.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
"Rare events can still cause catastrophic accidents. The concern that has been raised by experts going back over time is that really, the more of these experiments, the more labs, the more opportunities there are for a rare event to occur — that the right pathogen is involved and infects somebody in one of these labs, or is released in some way from these labs. And what I chronicle in Pandora's Gamble is that there have been these previous outbreaks that have been associated with various kinds of lab accidents. So this is not a theoretical thing that can happen: it has happened in the past." — Alison Young

In today’s episode, host Luisa Rodriguez interviews award-winning investigative journalist Alison Young on the surprising frequency of lab leaks and what needs to be done to prevent them in the future.

Links to learn more, summary, and full transcript.

They cover:
The most egregious biosafety mistakes made by the CDC, and how Alison uncovered them through her investigative reporting
The Dugway life science test facility case, where live anthrax was accidentally sent to labs across the US and several other countries over a period of many years
The time the Soviets had a major anthrax leak, and then hid it for over a decade
The 1977 influenza pandemic, caused by a vaccine trial gone wrong in China
The last death from smallpox, caused not by the virus spreading in the wild, but by a lab leak in the UK
Ways we could get more reliable oversight and accountability for these labs
And the investigative work Alison’s most proud of
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
"One [outrageous example of air pollution] is municipal waste burning that happens in many cities in the Global South. Basically, this is waste that gets collected from people's homes, and instead of being transported to a waste management facility or a landfill or something, gets burned at some point, because that's the fastest way to dispose of it — which really points to poor delivery of public services. But this is ubiquitous in virtually every small- or even medium-sized city. It happens in larger cities too, in this part of the world.

"That's something that truly annoys me, because it feels like the kind of thing that ought to be fairly easily managed, but it happens a lot. It happens because people presumably don't think that it's particularly harmful. I don't think it saves a tonne of money for the municipal corporations and other local government that are meant to manage it. I find it particularly annoying simply because it happens so often; it's something that you're able to smell in so many different parts of these cities." — Santosh Harish

In today’s episode, host Rob Wiblin interviews Santosh Harish — leader of Open Philanthropy’s grantmaking in South Asian air quality — about the scale of the harm caused by air pollution.

Links to learn more, summary, and full transcript.

They cover:
How bad air pollution is for our health and life expectancy
The different kinds of harm that particulate pollution causes
The strength of the evidence that it damages our brain function and reduces our productivity
Whether it was a mistake to switch our attention to climate change and away from air pollution
Whether most listeners to this show should have an air purifier running in their house right now
Where air pollution in India is worst and why, and whether it's going up or down
Where most air pollution comes from
The policy blunders that led to many sources of air pollution in India being effectively unregulated
Why indoor air pollution packs an enormous punch
The politics of air pollution in India
How India ended up spending a lot of money on outdoor air purifiers
The challenges faced by foreign philanthropists in India
Why Santosh has made the grants he has so far
And plenty more
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
"One of our earliest supporters and a dear friend of mine, Mark Lampert, once said to me, 'The way I think about it is, imagine that this money were already in the hands of people living in poverty. If I could, would I want to tax it and then use it to finance other projects that I think would benefit them?' I think that's an interesting thought experiment — and a good one — to say, 'Are there cases in which I think that's justifiable?'" — Paul Niehaus

In today’s episode, host Luisa Rodriguez interviews Paul Niehaus — co-founder of GiveDirectly — on the case for giving unconditional cash to the world's poorest households.

Links to learn more, summary and full transcript.

They cover:
The empirical evidence on whether giving cash directly can drive meaningful economic growth
How the impacts of GiveDirectly compare to USAID employment programmes
GiveDirectly vs GiveWell’s top-recommended charities
How long-term guaranteed income affects people's risk-taking and investments
Whether recipients prefer getting lump sums or monthly instalments
How GiveDirectly tackles cases of fraud and theft
The case for universal basic income, and GiveDirectly’s UBI studies in Kenya, Malawi, and Liberia
The political viability of UBI
Plenty more
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Luisa Rodriguez and Katy Moore
Transcriptions: Katy Moore
"If we carry on looking at these industrialised economies, not thinking about what it is they're actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn't. What we're doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way." — Ian Morris

In today’s episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence.

Links to learn more, summary and full transcript.

They cover:
Some crazy anomalies in the historical record of civilisational progress
Whether we should think about technology from an evolutionary perspective
Whether we ought to expect war to make a resurgence or continue dying out
Why we can't end up living like The Jetsons
Whether stagnation or cyclical recurring futures seem very plausible
What it means that the rate of increase in the economy has been increasing
Whether violence is likely between humans and powerful AI systems
The most likely reasons for Rob and Ian to be really wrong about all of this
How professional historians react to this sort of talk
The future of Ian’s work
Plenty more
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore
"There have been literally thousands of years of breeding and living with animals to optimise these kinds of problems. But because we're just so early on with alternative proteins and there's so much white space, it's actually just really exciting to know that we can keep on innovating and being far more efficient than this existing technology — which, fundamentally, is just quite inefficient. You're feeding animals a bunch of food to then extract a small fraction of their biomass to then eat that. Animal agriculture takes up 83% of farmland, but produces just 18% of food calories. So the current system just is so wasteful. And the limiting factor is that you're just growing a bunch of food to then feed a third of the world's crops directly to animals, where the vast majority of those calories going in are lost to animals existing." — Seren Kell

Links to learn more, summary and full transcript.

In today’s episode, host Luisa Rodriguez interviews Seren Kell — Senior Science and Technology Manager at the Good Food Institute Europe — about making alternative proteins as tasty, cheap, and convenient as traditional meat, dairy, and egg products.

They cover:
The basic case for alternative proteins, and why they’re so hard to make
Why fermentation is a surprisingly promising technology for creating delicious alternative proteins
The main scientific challenges that need to be solved to make fermentation even more useful
The progress that’s been made on the cultivated meat front, and what it will take to make cultivated meat affordable
How GFI Europe is helping with some of these challenges
How people can use their careers to contribute to replacing factory farming with alternative proteins
The best part of Seren’s job
Plenty more
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Luisa Rodriguez and Katy Moore
Transcriptions: Katy Moore
"If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space? That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions. My concern is that if we don't approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope — and all of a sudden we have, let's say, autocracies on the global stage are strengthened relative to democracies." — Tantum Collins

In today’s episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who’s willing to speak openly — Tantum Collins.

Links to learn more, summary and full transcript.

They cover:
How AI could strengthen government capacity, and how that's a double-edged sword
How new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren't there
To what extent policymakers take different threats from AI seriously
Whether the US and China are in an AI arms race or not
Whether it's OK to transform the world without much of the world agreeing to it
The tyranny of small differences in AI policy
Disagreements between different schools of thought in AI policy, and proposals that could unite them
How the US AI Bill of Rights could be improved
Whether AI will transform the labour market, and whether it will become a partisan political issue
The tensions between the cultures of San Francisco and DC, and how to bridge the divide between them
What listeners might be able to do to help with this whole mess
Panpsychism
Plenty more
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
"Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future? Right now, if somebody's sitting on Mars and you're going to war against them, it's very hard to hit them. You don't have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months, it's going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it's actually very hard to hit you. So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you're in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast. So my general conclusion has been that war looks unlikely on some size scales but not on others." — Anders Sandberg

In today’s episode, host Rob Wiblin speaks with repeat guest and audience favourite Anders Sandberg about the most impressive things that could be achieved in our universe given the laws of physics.

Links to learn more, summary and full transcript.

They cover:
The epic new book Anders is working on, and whether he’ll ever finish it
Whether there's a best possible world or we can just keep improving forever
What wars might look like if the galaxy is mostly settled
The impediments to AI or humans making it to other stars
How the universe will end a million trillion years in the future
Whether it’s useful to wonder about whether we’re living in a simulation
The grabby aliens theory
Whether civilizations get more likely to fail the older they get
The best way to generate energy that could ever exist
Black hole bombs
Whether superintelligence is necessary to get a lot of value
The likelihood that life from elsewhere has already visited Earth
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
"Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already. And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it's too late?" — Kevin Esvelt

In today’s episode, host Luisa Rodriguez interviews Kevin Esvelt — a biologist at the MIT Media Lab and the inventor of CRISPR-based gene drive — about the threat posed by engineered bioweapons.

Links to learn more, summary and full transcript.

They cover:
Why it makes sense to focus on deliberately released pandemics
Case studies of people who actually wanted to kill billions of humans
How many people have the technical ability to produce dangerous viruses
The different threats of stealth and wildfire pandemics that could crash civilisation
The potential for AI models to increase access to dangerous pathogens
Why scientists try to identify new pandemic-capable pathogens, and the case against that research
Technological solutions, including UV lights and advanced PPE
Using CRISPR-based gene drive to fight diseases and reduce animal suffering
And plenty more.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Today’s release is a reading of our Great power conflict problem profile, written and narrated by Stephen Clare.

If you want to check out the links, footnotes and figures in today’s article, you can find those here.

And if you like this article, you might enjoy a couple of related episodes of this podcast:
#128 – Chris Blattman on the five reasons wars happen
#140 – Bear Braumoeller on the case that war isn’t in decline
Audio mastering and editing for this episode: Dominic Armstrong
Audio Engineering Lead: Ben Cordell
Producer: Keiran Harris
Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more?

But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than “doing the most good that we can,” perhaps we should be happy with a more modest and manageable goal: “doing most of the good that we can.”

Links to learn more, summary and full transcript.

Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes.

Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things.

This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going “all-in” on any idea, or goal, or way of living — even something as benign as helping other people as much as possible.

Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they feel fit and healthy and also enjoy other aspects of their life as well — family, friends, and personal projects.

But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only remaining options are those which were so costly they were loath to consider them before.

To eke out those final gains — and go from 80% effort to 100% — our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids.

Now, if maximising one's speed at swimming really were the only goal they ought to be pursuing, there'd be no problem with this. But if it's the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed was only increased by a tiny amount, while everything else they were accomplishing dropped off a cliff.

The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error.

As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else. And it follows a similar pattern to performance falling off a cliff when a statistical model is 'overfit' to its data.

In the full interview, Toby also explains the “moral trade” argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world.

Toby and Rob also discuss:
The rise and fall of FTX and some of its impacts
What Toby hoped effective altruism would and wouldn't become when he helped to get it off the ground
What utilitarianism has going for it, and what's wrong with it in Toby's view
How to mathematically model the importance of personal integrity
Which AI labs Toby thinks have been acting more responsibly than others
How having a young child affects Toby’s feelings about AI risk
Whether infinities present a fundamental problem for any theory of ethics that aspires to be fully impartial
How Toby ended up being the source of the highest quality images of the Earth from space
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Transcriptions: Katy Moore
An audio version of the 2023 80,000 Hours career guide, also available on our website, on Amazon and on Audible.

If you know someone who might find our career guide helpful, you can get a free copy sent to them by going to 80000hours.org/gift.
Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to train ChatGPT.

But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (coauthored with Michael Bhaskar). The future could be really good, but only if we grab the bull by the horns and solve the new problems technology is throwing at us.

Links to learn more, summary and full transcript.

On Mustafa's telling, AI and biotechnology will soon be a huge aid to criminals and terrorists, empowering small groups to cause harm on previously unimaginable scales. Democratic countries have learned to walk a 'narrow path' between chaos on the one hand and authoritarianism on the other, avoiding the downsides that come from both extreme openness and extreme closure. AI could easily destabilise that present equilibrium, throwing us off dangerously in either direction. And ultimately, within our lifetimes humans may not need to work to live any more — or indeed, even have the option to do so.

And those are just three of the challenges confronting us. In Mustafa's view, 'misaligned' AI that goes rogue and pursues its own agenda won't be an issue for the next few years, and it isn't a problem for the current style of large language models. But he thinks that at some point — in eight, ten, or twelve years — it will become an entirely legitimate concern, and says that we need to be planning ahead.

In The Coming Wave, Mustafa lays out a 10-part agenda for 'containment' — that is to say, for limiting the negative and unforeseen consequences of emerging technologies:

1. Developing an Apollo programme for technical AI safety
2. Instituting capability audits for AI models
3. Buying time by exploiting hardware choke points
4. Getting critics involved in directly engineering AI models
5. Getting AI labs to be guided by motives other than profit
6. Radically increasing governments’ understanding of AI and their capabilities to sensibly regulate it
7. Creating international treaties to prevent proliferation of the most dangerous AI capabilities
8. Building a self-critical culture in AI labs of openly accepting when the status quo isn't working
9. Creating a mass public movement that understands AI and can demand the necessary controls
10. Not relying too much on delay, but instead seeking to move into a new somewhat-stable equilibrium

As Mustafa put it, "AI is a technology with almost every use case imaginable," and that will demand that, in time, we rethink everything.

Rob and Mustafa discuss the above, as well as:
Whether we should be open sourcing AI models
Whether Mustafa's policy views are consistent with his timelines for transformative AI
How people with very different views on these issues get along at AI labs
The failed efforts (so far) to get a wider range of people involved in these decisions
Whether it's dangerous for Mustafa's new company to be training far larger models than GPT-4
Whether we'll be blown away by AI progress over the next year
What mandatory regulations government should be imposing on AI labs right now
Appropriate priorities for the UK's upcoming AI safety summit
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore
"Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of that was invented in 1892. However, the number of human manual operators peaked in 1920 — 30 years after this. At which point, AT&T is the monopoly provider of this, and they are the largest single employer in America, 30 years after they've invented the complete automation of this thing that they're employing people to do. And the last person who is a manual switcher does not lose their job, as it were: that job doesn't stop existing until I think like 1980. So it takes 90 years from the invention of full automation to the full adoption of it in a single company that's a monopoly provider. It can do what it wants, basically. And so the question perhaps you might have is why?" — Michael Webb

In today’s episode, host Luisa Rodriguez interviews economist Michael Webb of DeepMind, the British Government, and Stanford about how AI progress is going to affect people's jobs and the labour market.

Links to learn more, summary and full transcript.

They cover:
The jobs most and least exposed to AI
Whether we’ll see mass unemployment in the short term
How long it took other technologies like electricity and computers to have economy-wide effects
Whether AI will increase or decrease inequality
Whether AI will lead to explosive economic growth
What we can learn from history, and reasons to think this time is different
Career advice for a world of LLMs
Why Michael is starting a new org to relieve talent bottlenecks through accelerated learning, and how you can get involved
Michael's take as a musician on AI-generated music
And plenty more
If you'd like to work with Michael on his new org to radically accelerate how quickly people acquire expertise in critical cause areas, he's now hiring! Check out Quantum Leap's website.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay afloat. Basically, you get locked in. There's almost no opportunities externally to go elsewhere. So one of my core arguments is that if you're going to address global poverty, you have to increase agricultural productivity in sub-Saharan Africa. There's almost no way of avoiding that." — Hannah Ritchie

In today’s episode, host Luisa Rodriguez interviews the head of research at Our World in Data — Hannah Ritchie — on the case for environmental optimism.

Links to learn more, summary and full transcript.

They cover:
Why agricultural productivity in sub-Saharan Africa could be so important, and how much better things could get
Her new book about how we could be the first generation to build a sustainable planet
Whether climate change is the most worrying environmental issue
How we reduced outdoor air pollution
Why Hannah is worried about the state of biodiversity
Solutions that address multiple environmental issues at once
How the world coordinated to address the hole in the ozone layer
Surprises from Our World in Data’s research
Psychological challenges that come up in Hannah’s work
And plenty more
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.
Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, "...the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. ... Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."
Links to learn more, summary and full transcript.
Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it’s not just throwing compute at the problem -- it’s also hiring dozens of scientists and engineers to build out the Superalignment team.
Plenty of people are pessimistic that this can be done at all, let alone in four years. But Jan is guardedly optimistic. As he explains: "Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on... and I think it's pretty likely going to work, actually. And that's really, really wild, and it's really exciting. It's like we have this hard problem that we've been talking about for years and years and years, and now we have a real shot at actually solving it. And that'd be so good if we did."
Jan thinks that this work is actually the most scientifically interesting part of machine learning. Rather than just throwing more chips and more data at a training run, this work requires actually understanding how these models work and how they think. The answers are likely to be breakthroughs on the level of solving the mysteries of the human brain.
The plan, in a nutshell, is to get AI to help us solve alignment.
That might sound a bit crazy -- as one person described it, “like using one fire to put out another fire.”
But Jan’s thinking is this: the core problem is that AI capabilities will keep getting better and the challenge of monitoring cutting-edge models will keep getting harder, while human intelligence stays more or less the same. To have any hope of ensuring safety, we need our ability to monitor, understand, and design ML models to advance at the same pace as the complexity of the models themselves. And there's an obvious way to do that: get AI to do most of the work, such that the sophistication of the AIs that need aligning, and the sophistication of the AIs doing the aligning, advance in lockstep.
Jan doesn't want to produce machine learning models capable of doing ML research. But such models are coming, whether we like it or not. And at that point Jan wants to make sure we turn them towards useful alignment and safety work, as much or more than we use them to advance AI capabilities.
Jan thinks it's so crazy it just might work. But some critics think it's simply crazy. They ask a wide range of difficult questions, including:
If you don't know how to solve alignment, how can you tell that your alignment assistant AIs are actually acting in your interest rather than working against you? Especially as they could just be pretending to care about what you care about.
How do you know that these technical problems can be solved at all, even in principle?
At the point that models are able to help with alignment, won't they also be so good at improving capabilities that we're in the middle of an explosion in what AI can do?
In today's interview, host Rob Wiblin puts these doubts to Jan to hear how he responds to each. They also cover:
OpenAI's current plans to achieve 'superalignment' and the reasoning behind them
Why alignment work is the most fundamental and scientifically interesting research in ML
The kinds of people he’s excited to hire to join his team and maybe save the world
What most readers misunderstood about the OpenAI announcement
The three ways Jan expects AI to help solve alignment: mechanistic interpretability, generalization, and scalable oversight
What the standard should be for confirming whether Jan's team has succeeded
Whether OpenAI should (or will) commit to stop training more powerful general models if they don't think the alignment problem has been solved
Whether Jan thinks OpenAI has deployed models too quickly or too slowly
The many other actors who also have to do their jobs really well if we're going to have a good AI future
Plenty more
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Over on our other feed, 80k After Hours, you can now find 20-30 minute highlights episodes of our 80,000 Hours Podcast interviews. These aren’t necessarily the most important parts of the interview, and if a topic matters to you we do recommend listening to the full episode — but we think these will be a nice upgrade on skipping episodes entirely.
Get these highlight episodes by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type 80k After Hours into your podcasting app.
Highlights put together by Simon Monsour and Milo McGuire
Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making billions of dollars’ worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others. This year, having learned about AI for years and observed recent events, he's narrowing his focus once again, this time on making the transition to advanced AI go well.
In today's conversation, Holden returns to the show to share his overall understanding of the promise and the risks posed by machine intelligence, and what to do about it. That understanding has accumulated over around 14 years, during which he went from being sceptical that AI was important or risky, to making AI risks the focus of his work.
Links to learn more, summary and full transcript.
(As Holden reminds us, his wife is also the president of one of the world's top AI labs, Anthropic, giving him both conflicts of interest and a front-row seat to recent events. For our part, Open Philanthropy is 80,000 Hours' largest financial supporter.)
One point he makes is that people are too narrowly focused on AI becoming 'superintelligent.' While that could happen and would be important, it's not necessary for AI to be transformative or perilous.
Rather, machines with human levels of intelligence could end up being enormously influential simply if the amount of computer hardware globally were able to operate tens or hundreds of billions of them, in a sense making machine intelligences a majority of the global population, or at least a majority of global thought.
As Holden explains, he sees four key parts to the playbook humanity should use to guide the transition to very advanced AI in a positive direction: alignment research, standards and monitoring, creating a successful and careful AI lab, and finally, information security.
In today’s episode, host Rob Wiblin interviews return guest Holden Karnofsky about that playbook, as well as:
Why we can’t rely on just gradually solving those problems as they come up, the way we usually do with new technologies.
What multiple different groups can do to improve our chances of a good outcome — including listeners to this show, governments, computer security experts, and journalists.
Holden’s case against 'hardcore utilitarianism' and what actually motivates him to work hard for a better world.
What the ML and AI safety communities get wrong in Holden's view.
Ways we might succeed with AI just by dumb luck.
The value of laying out imaginable success stories.
Why information security is so important and underrated.
Whether it's good to work at an AI lab that you think is particularly careful.
The track record of futurists’ predictions.
And much more.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.
Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore
In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.
In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licencing scheme is created.
Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years.
Links to learn more, summary and full transcript.
Like many people, he has also taken a big interest in AI this year, writing articles such as “This changes everything.” In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.
Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work.
By contrast, he's pessimistic that it's possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there's some spectacular AI-related disaster to create such a consensus.
And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research.
From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.
In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as:
Whether it's desirable to slow down AI research
The value of engaging with current policy debates even if they don't seem directly important
Which AI business models seem more or less dangerous
Tensions between people focused on existing vs emergent risks from AI
Two major challenges of being a new parent
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore
Check out our channel: https://www.youtube.com/watch?v=lD7J9avsFbw
More croaky neophytes who think they're the first people to set foot on every idea they have... then rush on to a podcast to preen.
@28:00: It's always impressive to hear how proud people are to rediscover things that have been researched, discussed, and known for centuries. Here, the guest stumbles through a case for ends justifying means. What could go wrong? This is like listening to intelligent but ignorant 8th graders... or perhaps 1st-yr grad students, who love to claim that a topic has never been studied before, especially if the old concept is wearing a new name.
@15:30: The guest is greatly over-stating binary processing and signalling in neural networks. This is not at all a good explanation.
Ezra Klein's voice is a mix of nasal congestion, lisp, up-talk, vocal fry, New York, and inflated ego.
Rob's suggestion on price-gouging seems pretty poorly considered. There are plenty of historical examples of harmful price-gouging and I can't think of any that were beneficial, particularly not after a disaster. This approach seems wrong economically and morally. Price-gouging after a disaster is almost always a pure windfall. It's economically infeasible to stockpile for very low-probability events, especially if transporting/delivering the good is difficult. Even if the good can be mass-produced and delivered quickly in response to a demand spike, Rob would be advocating for a moral approach that runs against the grain of human moral intuitions in post-disaster settings. In such contexts, we prefer need-driven distributive justice and, secondarily, equality-based distributive justice. Conversely, Rob is suggesting an equity-based approach wherein the input-output ratio of equity is based on someone's socio-economic status, which is not just irrelevant to their actions in the em
@18:02: Oh really, Rob? Does correlation now imply causation or was journalistic coverage randomly selected and randomly assigned? Good grief.
She seems to be just a self-promoting aggregator. I didn't hear her say anything insightful. Even when pressed multiple times about how her interests pertain to the mission of 80,000 Hours, she just blathered out a few platitudes about the need for people to think about things (or worse, thinking "around" issues).
@1:18:38: Lots of very sloppy thinking and careless wording. Many lazy false equivalences--e.g., @1:18:38: equating (a) democrats' fact-based complaints in 2016 (e.g., about foreign interference, the Electoral College), when Clinton conceded the following day and democrats reconciled themselves to Trump's presidency, with (b) republicans spreading bald-faced lies about stolen elections (only the ones they lost, of course) and actively trying to overthrow the election, including through force. If this was her effort to seem apolitical with a ham-handed "both sides do it... or probably will" comment, then she isn't intelligent enough to have public platforms.
Is Rob's voice being played back at 1.5x? Is he hyped up on coke?
I'm sure Dr Ord is well-intentioned, but I find his arguments here exceptionally weak and thin. (Also, the uhs and ums are rather annoying after a while.)
So much vocal fry
Thank you, that was very inspirational!
A thought provoking, refreshing podcast