80,000 Hours Podcast
Author: Rob, Luisa, and the 80,000 Hours team
Subscribed: 5,860
Played: 274,634
© All rights reserved
Description
Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.
Subscribe by searching for '80000 Hours' wherever you get podcasts.
Hosted by Rob Wiblin and Luisa Rodriguez.
265 Episodes
Rich countries seem to find it harder and harder to do anything that creates some losers. People who don’t want houses, offices, power stations, trains, subway stations (or whatever) built in their area can usually find some way to block them, even if the benefits to society outweigh the costs 10 or 100 times over.

The result of this ‘vetocracy’ has been skyrocketing rent in major cities — not to mention exacerbating homelessness, energy poverty, and a host of other social maladies. This has been known for years but precious little progress has been made. When trains, tunnels, or nuclear reactors are occasionally built, they’re comically expensive and slow compared to 50 years ago. And housing construction in the UK and California has barely increased, remaining stuck at less than half what it was in the ’60s and ’70s.

Today’s guest — economist and editor of Works in Progress Sam Bowman — isn’t content to just condemn the Not In My Backyard (NIMBY) mentality behind this stagnation. He wants to actually get a tonne of stuff built, and by that standard the strategy of attacking ‘NIMBYs’ has been an abject failure. They are too politically powerful, and if you try to crush them, sooner or later they crush you.

Links to learn more, highlights, video, and full transcript.

So, as Sam explains, a different strategy is needed, one that acknowledges that opponents of development are often correct that a given project will make them worse off. But the thing is, in the cases we care about, these modest downsides are outweighed by the enormous benefits to others — who will finally have a place to live, be able to get to work, and have the energy to heat their home.

But democracies are majoritarian, so if most existing residents think they’ll be a little worse off if more dwellings are built in their area, it’s no surprise they aren’t getting built. Luckily we already have a simple way to get people to do things they don’t enjoy for the greater good, a strategy that we apply every time someone goes in to work at a job they wouldn’t do for free: compensate them. Sam thinks this idea, which he calls “Coasean democracy,” could create a politically sustainable majority in favour of building, and it underlies the proposals he thinks have the best chance of success — which he discusses in detail with host Rob Wiblin.

Chapters:
Cold open (00:00:00)
Introducing Sam Bowman (00:00:59)
We can’t seem to build anything (00:02:09)
Our inability to build is ruining people's lives (00:04:03)
Why blocking growth of big cities is terrible for science and invention (00:09:15)
It's also worsening inequality, health, fertility, and political polarisation (00:14:36)
The UK as the 'limit case' of restrictive planning permission gone mad (00:17:50)
We've known this for years. So why almost no progress fixing it? (00:36:34)
NIMBYs aren't wrong: they are often harmed by development (00:43:58)
Solution #1: Street votes (00:55:37)
Are street votes unfair to surrounding areas? (01:08:31)
Street votes are coming to the UK — what to expect (01:15:07)
Are street votes viable in California, NY, or other countries? (01:19:34)
Solution #2: Benefit sharing (01:25:08)
Property tax distribution — the most important policy you've never heard of (01:44:29)
Solution #3: Opt-outs (01:57:53)
How to make these things happen (02:11:19)
Let new and old institutions run in parallel until the old one withers (02:18:17)
The evil of modern architecture and why beautiful buildings are essential (02:31:58)
Northern latitudes need nuclear power — solar won't be enough (02:45:01)
Ozempic is still underrated and “the overweight theory of everything” (03:02:30)
How has progress studies remained sane while being very online? (03:17:55)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions: Katy Moore
"I really don’t want to give the impression that I think it is easy to make predictable, controlled, safe interventions in wild systems where there are many species interacting. I don’t think it’s easy, but I don’t see any reason to think that it’s impossible. And I think we have been making progress. I think there’s every reason to think that if we continue doing research, both at the theoretical level — How do ecosystems work? What sorts of things are likely to have what sorts of indirect effects? — and then also at the practical level — Is this intervention a good idea? — I really think we’re going to come up with plenty of things that would be helpful to plenty of animals." —Cameron Meyer ShorbIn today’s episode, host Luisa Rodriguez speaks to Cameron Meyer Shorb — executive director of the Wild Animal Initiative — about the cutting-edge research on wild animal welfare.Links to learn more, highlights, and full transcript.They cover:How it’s almost impossible to comprehend the sheer number of wild animals on Earth — and why that makes their potential suffering so important to consider.How bad experiences like disease, parasites, and predation truly are for wild animals — and how we would even begin to study that empirically.The tricky ethical dilemmas in trying to help wild animals without unintended consequences for ecosystems or other potentially sentient beings.Potentially promising interventions to help wild animals — like selective reforestation, vaccines, fire management, and gene drives.Why Cameron thinks the best approach to improving wild animal welfare is to first build a dedicated research field — and how Wild Animal Initiative’s activities support this.The many career paths in science, policy, and technology that could contribute to improving wild animal welfare.And much more.Chapters:Cold open (00:00:00)Luisa's intro (00:01:04)The interview begins (00:03:40)One concrete example of how we might improve wild animal welfare (00:04:04)Why should we care about wild animal suffering? (00:10:00)What’s it like to be a wild animal? (00:19:37)Suffering and death in the wild (00:29:19)Positive, benign, and social experiences (00:51:33)Indicators of welfare (01:01:40)Can we even help wild animals without unintended consequences? (01:13:20)Vaccines for wild animals (01:30:59)Fire management (01:44:20)Gene drive technologies (01:47:42)Common objections and misconceptions about wild animal welfare (01:53:19)Future promising interventions (02:21:58)What’s the long game for wild animal welfare? (02:27:46)Eliminating the biological basis for suffering (02:33:21)Optimising for high-welfare landscapes (02:37:33)Wild Animal Initiative’s work (02:44:11)Careers in wild animal welfare (02:58:13)Work-related guilt and shame (03:12:57)Luisa's outro (03:19:51)Producer: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore
One OpenAI critic calls it “the theft of at least the millennium and quite possibly all of human history.” Are they right?

Back in 2015 OpenAI was but a humble nonprofit. That nonprofit started a for-profit, OpenAI LLC, but made sure to retain ownership and control. But that for-profit, having become a tech giant with vast staffing and investment, has grown tired of its shackles and wants to change the deal.

Facing off against it stand eight out-gunned and out-numbered part-time volunteers. Can they hope to defend the nonprofit’s interests against the overwhelming profit motives arrayed against them?

That’s the question host Rob Wiblin puts to nonprofit legal expert Rose Chan Loui of UCLA, who concludes that with a “heroic effort” and a little help from some friendly state attorneys general, they might just stand a chance.

Links to learn more, highlights, video, and full transcript.

As Rose lays out, on paper OpenAI is controlled by a nonprofit board that:
Can fire the CEO.
Would receive all the profits after the point OpenAI makes 100x returns on investment.
Is legally bound to do whatever it can to pursue its charitable purpose: “to build artificial general intelligence that benefits humanity.”

But that control is a problem for OpenAI the for-profit and its CEO Sam Altman — all the more so after the board concluded back in November 2023 that it couldn’t trust Altman and attempted to fire him (although those board members were ultimately ousted themselves after failing to adequately explain their rationale).

Nonprofit control makes it harder to attract investors, who don’t want a board stepping in just because they think what the company is doing is bad for humanity. And OpenAI the business is thirsty for as many investors as possible, because it wants to beat competitors and train the first truly general AI — able to do every job humans currently do — which is expected to cost hundreds of billions of dollars.

So, Rose explains, they plan to buy the nonprofit out. In exchange for giving up its windfall profits and the ability to fire the CEO or direct the company’s actions, the board will become minority shareholders with reduced voting rights, and presumably transform into a normal grantmaking foundation instead.

Is this a massive bait-and-switch? A case of the tail not only wagging the dog, but grabbing a scalpel and neutering it?

OpenAI repeatedly committed to California, Delaware, the US federal government, founding staff, and the general public that its resources would be used for its charitable mission and it could be trusted because of nonprofit control. Meanwhile, the divergence in interests couldn’t be more stark: every dollar the for-profit keeps from its nonprofit parent is another dollar it could invest in AGI and ultimately return to investors and staff.

Chapters:
Cold open (00:00:00)
What's coming up (00:00:50)
Who is Rose Chan Loui? (00:03:11)
How OpenAI carefully chose a complex nonprofit structure (00:04:17)
OpenAI's new plan to become a for-profit (00:11:47)
The nonprofit board is out-resourced and in a tough spot (00:14:38)
Who could be cheated in a bad conversion to a for-profit? (00:17:11)
Is this a unique case? (00:27:24)
Is control of OpenAI 'priceless' to the nonprofit in pursuit of its mission? (00:28:58)
The crazy difficulty of valuing the profits OpenAI might make (00:35:21)
Control of OpenAI is independently incredibly valuable and requires compensation (00:41:22)
It's very important the nonprofit get cash and not just equity (and few are talking about it) (00:51:37)
Is it a farce to call this an "arm's-length transaction"? (01:03:50)
How the nonprofit board can best play their hand (01:09:04)
Who can mount a court challenge and how that would work (01:15:41)
Rob's outro (01:21:25)

Producer: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore
"I think stories are the way we shift the Overton window — so widen the range of things that are acceptable for policy and palatable to the public. Almost by definition, a lot of things that are going to be really important and shape the future are not in the Overton window, because they sound weird and off-putting and very futuristic. But I think stories are the best way to bring them in." — Elizabeth CoxIn today’s episode, Keiran Harris speaks with Elizabeth Cox — founder of the independent production company Should We Studio — about the case that storytelling can improve the world.Links to learn more, highlights, and full transcript.They cover:How TV shows and movies compare to novels, short stories, and creative nonfiction if you’re trying to do good.The existing empirical evidence for the impact of storytelling.Their competing takes on the merits of thinking carefully about target audiences.Whether stories can really change minds on deeply entrenched issues, or whether writers need to have more modest goals.Whether humans will stay relevant as creative writers with the rise of powerful AI models.Whether you can do more good with an overtly educational show vs other approaches.Elizabeth’s experience with making her new five-part animated show Ada — including why she chose the topics of civilisational collapse, kidney donations, artificial wombs, AI, and gene drives.The pros and cons of animation as a medium.Career advice for creative writers.Keiran’s idea for a longtermist Christmas movie.And plenty more.Material you might want to check out before listening:The trailer for Elizabeth’s new animated series Ada — the full series will be available on TED-Ed’s YouTube channel in early January 2025Keiran’s pilot script and a 10-episode outline for his show Bequest, and his post about the show on the Effective Altruism ForumChapters:Cold open (00:00:00)Luisa's intro (00:01:04)The interview begins (00:02:52)Is storytelling really a high-impact career option? (00:03:26)Empirical evidence of the impact of storytelling (00:06:51)How storytelling can inform us (00:16:25)How long will humans stay relevant as creative writers? (00:21:54)Ada (00:33:05)Debating the merits of thinking about target audiences (00:38:03)Ada vs other approaches to impact-focused storytelling (00:48:18)Why animation (01:01:06)One Billion Christmases (01:04:54)How storytelling can humanise (01:09:34)But can storytelling actually change strongly held opinions? (01:13:26)Novels and short stories (01:18:38)Creative nonfiction (01:25:06)Other promising ways of storytelling (01:30:53)How did Ada actually get made? (01:33:23)The hardest part of the process for Elizabeth (01:48:28)Elizabeth’s hopes and dreams for Ada (01:53:10)Designing Ada with an eye toward impact (01:59:16)Alternative topics for Ada (02:05:33)Deciding on the best way to get Ada in front of people (02:07:12)Career advice for creative writers (02:11:31)Wikipedia book spoilers (02:17:05)Luisa's outro (02:20:42)Producer: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore
"I think one of the reasons I took [shutting down my charity] so hard is because entrepreneurship is all about this bets-based mindset. So you say, “I’m going to take a bunch of bets. I’m going to take some risky bets that have really high upside.” And this is a winning strategy in life, but maybe it’s not a winning strategy for any given hand. So the fact of the matter is that I believe that intellectually, but l do not believe that emotionally. And I have now met a bunch of people who are really good at doing that emotionally, and I’ve realised I’m just not one of those people. I think I’m more entrepreneurial than your average person; I don’t think I’m the maximally entrepreneurial person. And I also think it’s just human nature to not like failing." —Sarah Eustis-GuthrieIn today’s episode, host Luisa Rodriguez speaks to Sarah Eustis-Guthrie — cofounder of the now-shut-down Maternal Health Initiative, a postpartum family planning nonprofit in Ghana — about her experience starting and running MHI, and ultimately making the difficult decision to shut down when the programme wasn’t as impactful as they expected.Links to learn more, highlights, and full transcript.They cover:The evidence that made Sarah and her cofounder Ben think their organisation could be super impactful for women — both from a health perspective and an autonomy and wellbeing perspective.Early yellow and red flags that maybe they didn’t have the full story about the effectiveness of the intervention.All the steps Sarah and Ben took to build the organisation — and where things went wrong in retrospect.Dealing with the emotional side of putting so much time and effort into a project that ultimately failed.Why it’s so important to talk openly about things that don’t work out, and Sarah’s key lessons learned from the experience.The misaligned incentives that discourage charities from shutting down ineffective programmes.The movement of trust-based philanthropy, and Sarah’s ideas to further improve how global development charities get their funding and prioritise their beneficiaries over their operations.The pros and cons of exploring and pivoting in careers.What it’s like to participate in the Charity Entrepreneurship Incubation Program, and how listeners can assess if they might be a good fit.And plenty more.Chapters:Cold open (00:00:00)Luisa’s intro (00:00:58)The interview begins (00:03:43)The case for postpartum family planning as an impactful intervention (00:05:37)Deciding where to start the charity (00:11:34)How do you even start implementing a charity programme? (00:18:33)Early yellow and red flags (00:22:56)Proof-of-concept tests and pilot programme in Ghana (00:34:10)Dealing with disappointing pilot results (00:53:34)The ups and downs of founding an organisation (01:01:09)Post-pilot research and reflection (01:05:40)Is family planning still a promising intervention? (01:22:59)Deciding to shut down MHI (01:34:10)The surprising community response to news of the shutdown (01:41:12)Mistakes and what Sarah could have done differently (01:48:54)Sharing results in the space of postpartum family planning (02:00:54)Should more charities scale back or shut down? 
(02:08:33)Trust-based philanthropy (02:11:15)Empowering the beneficiaries of charities’ work (02:18:04)The tough ask of getting nonprofits to act when a programme isn’t working (02:21:18)Exploring and pivoting in careers (02:27:01)Reevaluation points (02:29:55)PlayPumps were even worse than you might’ve heard (02:33:25)Charity Entrepreneurship (02:38:30)The mistake of counting yourself out too early (02:52:37)Luisa’s outro (02:57:50)Producer: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore
With kids very much on the team's mind we thought it would be fun to review some comments about parenting featured on the show over the years, then have hosts Luisa Rodriguez and Rob Wiblin react to them.

Links to learn more and full transcript.

After hearing 8 former guests’ insights, Luisa and Rob chat about:
Which of these resonate the most with Rob, now that he’s been a dad for six months (plus an update at nine months).
What have been the biggest surprises for Rob in becoming a parent.
How Rob's dealt with work and parenting tradeoffs, and his advice for other would-be parents.
Rob's list of recommended purchases for new or upcoming parents.

This bonus episode includes excerpts from:
Ezra Klein on parenting yourself as well as your children (from episode #157)
Holden Karnofsky on freezing embryos and being surprised by how fun it is to have a kid (#110 and #158)
Parenting expert Emily Oster on how having kids affect relationships, careers and kids, and what actually makes a difference in young kids’ lives (#178)
Russ Roberts on empirical research when deciding whether to have kids (#87)
Spencer Greenberg on his surveys of parents (#183)
Elie Hassenfeld on how having children reframes his relationship to solving pressing global problems (#153)
Bryan Caplan on homeschooling (#172)
Nita Farahany on thinking about life and the world differently with kids (#174)

Chapters:
Cold open (00:00:00)
Rob & Luisa’s intro (00:00:19)
Ezra Klein on parenting yourself as well as your children (00:03:34)
Holden Karnofsky on preparing for a kid and freezing embryos (00:07:41)
Emily Oster on the impact of kids on relationships (00:09:22)
Russ Roberts on empirical research when deciding whether to have kids (00:14:44)
Spencer Greenberg on parent surveys (00:23:58)
Elie Hassenfeld on how having children reframes his relationship to solving pressing problems (00:27:40)
Emily Oster on careers and kids (00:31:44)
Holden Karnofsky on the experience of having kids (00:38:44)
Bryan Caplan on homeschooling (00:40:30)
Emily Oster on what actually makes a difference in young kids' lives (00:46:02)
Nita Farahany on thinking about life and the world differently (00:51:16)
Rob’s first impressions of parenthood (00:52:59)
How Rob has changed his views about parenthood (00:58:04)
Can the pros and cons of parenthood be studied? (01:01:49)
Do people have skewed impressions of what parenthood is like? (01:09:24)
Work and parenting tradeoffs (01:15:26)
Tough decisions about screen time (01:25:11)
Rob’s advice to future parents (01:30:04)
Coda: Rob’s updated experience at nine months (01:32:09)
Emily Oster on her amazing nanny (01:35:01)

Producer: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Luisa Rodriguez, Katy Moore, and Keiran Harris
Transcriptions: Katy Moore
"In that famous example of the dress, half of the people in the world saw [blue and black], half saw [white and gold]. It turns out there’s individual differences in how brains take into account ambient light. Colour is one example where it’s pretty clear that what we experience is a kind of inference: it’s the brain’s best guess about what’s going on in some way out there in the world. And that’s the claim that I’ve taken on board as a general hypothesis for consciousness: that all our perceptual experiences are inferences about something we don’t and cannot have direct access to." —Anil SethIn today’s episode, host Luisa Rodriguez speaks to Anil Seth — director of the Sussex Centre for Consciousness Science — about how much we can learn about consciousness by studying the brain.Links to learn more, highlights, and full transcript.They cover:What groundbreaking studies with split-brain patients and blindsight have already taught us about the nature of consciousness.Anil’s theory that our perception is a “controlled hallucination” generated by our predictive brains.Whether looking for the parts of the brain that correlate with consciousness is the right way to learn about what consciousness is.Whether our theories of human consciousness can be applied to nonhuman animals.Anil’s thoughts on whether machines could ever be conscious.Disagreements and open questions in the field of consciousness studies, and what areas Anil is most excited to explore next.And much more.Chapters:Cold open (00:00:00)Luisa’s intro (00:01:02)The interview begins (00:02:42)How expectations and perception affect consciousness (00:03:05)How the brain makes sense of the body it’s within (00:21:33)Psychedelics and predictive processing (00:32:06)Blindsight and visual consciousness (00:36:45)Split-brain patients (00:54:56)Overflow experiments (01:05:28)How much can we learn about consciousness from empirical research? (01:14:23)Which parts of the brain are responsible for conscious experiences? (01:27:37)Current state and disagreements in the study of consciousness (01:38:36)Digital consciousness (01:55:55)Consciousness in nonhuman animals (02:18:11)What’s next for Anil (02:30:18)Luisa’s outro (02:32:46)Producer: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore
If you care about social impact, is voting important? In this piece, Rob investigates the two key things that determine the impact of your vote:
The chances of your vote changing an election’s outcome.
How much better some candidates are for the world as a whole, compared to others.

He then discusses a couple of the best arguments against voting in important elections, namely:
If an election is competitive, that means other people disagree about which option is better, and you’re at some risk of voting for the worse candidate by mistake.
While voting itself doesn’t take long, knowing enough to accurately pick which candidate is better for the world actually does take substantial effort — effort that could be better allocated elsewhere.

Finally, Rob covers the impact of donating to campaigns or working to "get out the vote," which can be effective ways to generate additional votes for your preferred candidate.

We last released this article in October 2020, but we think it largely still stands up today.

Chapters:
Rob's intro (00:00:00)
Introduction (00:01:12)
What's coming up (00:02:35)
The probability of one vote changing an election (00:03:58)
How much does it matter who wins? (00:09:29)
What if you’re wrong? (00:16:38)
Is deciding how to vote too much effort? (00:21:47)
How much does it cost to drive one extra vote? (00:25:13)
Overall, is it altruistic to vote? (00:29:38)
Rob's outro (00:31:19)

Producer: Keiran Harris
"You have a tank split in two parts: if the fish gets in the compartment with a red circle, it will receive food, and food will be delivered in the other tank as well. If the fish takes the blue triangle, this fish will receive food, but nothing will be delivered in the other tank. So we have a prosocial choice and antisocial choice. When there is no one in the other part of the tank, the male is choosing randomly. If there is a male, a possible rival: antisocial — almost 100% of the time. Now, if there is his wife — his female, this is a prosocial choice all the time."And now a question: Is it just because this is a female or is it just for their female? Well, when they're bringing a new female, it’s the antisocial choice all the time. Now, if there is not the female of the male, it will depend on how long he's been separated from his female. At first it will be antisocial, and after a while he will start to switch to prosocial choices." —Sébastien MoroIn today’s episode, host Luisa Rodriguez speaks to science writer and video blogger Sébastien Moro about the latest research on fish consciousness, intelligence, and potential sentience.Links to learn more, highlights, and full transcript.They cover:The insane capabilities of fish in tests of memory, learning, and problem-solving.Examples of fish that can beat primates on cognitive tests and recognise individual human faces.Fishes’ social lives, including pair bonding, “personalities,” cooperation, and cultural transmission.Whether fish can experience emotions, and how this is even studied.The wild evolutionary innovations of fish, who adapted to thrive in diverse environments from mangroves to the deep sea.How some fish have sensory capabilities we can’t even really fathom — like “seeing” electrical fields and colours we can’t perceive.Ethical issues raised by evidence that fish may be conscious and experience suffering.And plenty more.Producer: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore
Rob Wiblin speaks with FiveThirtyEight election forecaster and author Nate Silver about his new book: On the Edge: The Art of Risking Everything.

Links to learn more, highlights, video, and full transcript.

On the Edge explores a cultural grouping Nate dubs “the River” — made up of people who are analytical, competitive, quantitatively minded, risk-taking, and willing to be contrarian. It’s a tendency he considers himself a part of, and the River has been doing well for itself in recent decades — gaining cultural influence through success in finance, technology, gambling, philanthropy, and politics, among other pursuits.

But on Nate’s telling, it’s a group particularly vulnerable to oversimplification and hubris. Where Riverians’ ability to calculate the “expected value” of actions isn’t as good as they believe, their poorly calculated bets can leave a trail of destruction — aptly demonstrated by Nate’s discussion of the extended time he spent with FTX CEO Sam Bankman-Fried before and after his downfall.

Given this show’s focus on the world’s most pressing problems and how to solve them, we narrow in on Nate’s discussion of effective altruism (EA), which has been little covered elsewhere. Nate met many leaders and members of the EA community in researching the book and has watched its evolution online for many years.

Effective altruism is the River style of doing good, because of its willingness to buck both fashion and common sense — making its giving decisions based on mathematical calculations and analytical arguments with the goal of maximising an outcome.

Nate sees a lot to admire in this, but the book paints a mixed picture in which effective altruism is arguably too trusting, too utilitarian, too selfless, and too reckless at some times, while too image-conscious at others.

But while everything has arguable weaknesses, could Nate actually do any better in practice? We ask him:
How would Nate spend $10 billion differently than today’s philanthropists influenced by EA?
Is anyone else competitive with EA in terms of impact per dollar?
Does he have any big disagreements with 80,000 Hours’ advice on how to have impact?
Is EA too big a tent to function?
What global problems could EA be ignoring?
Should EA be more willing to court controversy?
Does EA’s niceness leave it vulnerable to exploitation?
What moral philosophy would he have modelled EA on?

Rob and Nate also talk about:
Nate’s theory of Sam Bankman-Fried’s psychology.
Whether we had to “raise or fold” on COVID.
Whether Sam Altman and Sam Bankman-Fried are structurally similar cases or not.
“Winners’ tilt.”
Whether it’s selfish to slow down AI progress.
The ridiculous 13 Keys to the White House.
Whether prediction markets are now overrated.
Whether venture capitalists talk a big talk about risk while pushing all the risk off onto the entrepreneurs they fund.
And plenty more.

Chapters:
Cold open (00:00:00)
Rob's intro (00:01:03)
The interview begins (00:03:08)
Sam Bankman-Fried and trust in the effective altruism community (00:04:09)
Expected value (00:19:06)
Similarities and differences between Sam Altman and SBF (00:24:45)
How would Nate do EA differently? (00:31:54)
Reservations about utilitarianism (00:44:37)
Game theory equilibrium (00:48:51)
Differences between EA culture and rationalist culture (00:52:55)
What would Nate do with $10 billion to donate? (00:57:07)
COVID strategies and tradeoffs (01:06:52)
Is it selfish to slow down AI progress? (01:10:02)
Democratic legitimacy of AI progress (01:18:33)
Dubious election forecasting (01:22:40)
Assessing how reliable election forecasting models are (01:29:58)
Are prediction markets overrated? (01:41:01)
Venture capitalists and risk (01:48:48)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore
"In the human case, it would be mistaken to give a kind of hour-by-hour accounting. You know, 'I had +4 level of experience for this hour, then I had -2 for the next hour, and then I had -1' — and you sort of sum to try to work out the total… And I came to think that something like that will be applicable in some of the animal cases as well… There are achievements, there are experiences, there are things that can be done in the face of difficulty that might be seen as having the same kind of redemptive role, as casting into a different light the difficult events that led up to it."The example I use is watching some birds successfully raising some young, fighting off a couple of rather aggressive parrots of another species that wanted to fight them, prevailing against difficult odds — and doing so in a way that was so wholly successful. It seemed to me that if you wanted to do an accounting of how things had gone for those birds, you would not want to do the naive thing of just counting up difficult and less-difficult hours. There’s something special about what’s achieved at the end of that process." —Peter Godfrey-SmithIn today’s episode, host Luisa Rodriguez speaks to Peter Godfrey-Smith — bestselling author and science philosopher — about his new book, Living on Earth: Forests, Corals, Consciousness, and the Making of the World.Links to learn more, highlights, and full transcript.They cover:Why octopuses and dolphins haven’t developed complex civilisation despite their intelligence.How the role of culture has been crucial in enabling human technological progress.Why Peter thinks the evolutionary transition from sea to land was key to enabling human-like intelligence — and why we should expect to see that in extraterrestrial life too.Whether Peter thinks wild animals’ lives are, on balance, good or bad, and when, if ever, we should intervene in their lives.Whether we can and should avoid death by uploading human minds.And plenty more.Chapters:Cold open (00:00:00)Luisa's intro (00:00:57)The interview begins (00:02:12)Wild animal suffering and rewilding (00:04:09)Thinking about death (00:32:50)Uploads of ourselves (00:38:04)Culture and how minds make things happen (00:54:05)Challenges for water-based animals (01:01:37)The importance of sea-to-land transitions in animal life (01:10:09)Luisa's outro (01:23:43)Producer: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore
In this episode from our second show, 80k After Hours, Luisa Rodriguez and Keiran Harris chat about the consequences of letting go of enduring guilt, shame, anger, and pride.

Links to learn more, highlights, and full transcript.

They cover:
Keiran’s views on free will, and how he came to hold them
What it’s like not experiencing sustained guilt, shame, and anger
Whether Luisa would become a worse person if she felt less guilt and shame — specifically whether she’d work fewer hours, or donate less money, or become a worse friend
Whether giving up guilt and shame also means giving up pride
The implications for love
The neurological condition ‘Jerk Syndrome’
And some practical advice on feeling less guilt, shame, and anger

Who this episode is for:
People sympathetic to the idea that free will is an illusion
People who experience tons of guilt, shame, or anger
People worried about what would happen if they stopped feeling tonnes of guilt, shame, or anger

Who this episode isn’t for:
People strongly in favour of retributive justice
Philosophers who can’t stand random non-philosophers talking about philosophy
Non-philosophers who can’t stand random non-philosophers talking about philosophy

Chapters:
Cold open (00:00:00)
Luisa's intro (00:01:16)
The chat begins (00:03:15)
Keiran's origin story (00:06:30)
Charles Whitman (00:11:00)
Luisa's origin story (00:16:41)
It's unlucky to be a bad person (00:19:57)
Doubts about whether free will is an illusion (00:23:09)
Acting this way just for other people (00:34:57)
Feeling shame over not working enough (00:37:26)
First person / third person distinction (00:39:42)
Would Luisa become a worse person if she felt less guilt? (00:44:09)
Feeling bad about not being a different person (00:48:18)
Would Luisa donate less money? (00:55:14)
Would Luisa become a worse friend? (01:01:07)
Pride (01:08:02)
Love (01:15:35)
Bears and hurricanes (01:19:53)
Jerk Syndrome (01:24:24)
Keiran's outro (01:34:47)

Get more episodes like this by subscribing to our more experimental podcast on the world’s most pressing problems and how to solve them: type "80k After Hours" into your podcasting app.

Producer: Keiran Harris
Audio mastering: Milo McGuire
Transcriptions: Katy Moore
"For every far-out idea that turns out to be true, there were probably hundreds that were simply crackpot ideas. In general, [science] advances building on the knowledge we have, and seeing what the next questions are, and then getting to the next stage and the next stage and so on. And occasionally there’ll be revolutionary ideas which will really completely change your view of science. And it is possible that some revolutionary breakthrough in our understanding will come about and we might crack this problem, but there’s no evidence for that. It doesn’t mean that there isn’t a lot of promising work going on. There are many legitimate areas which could lead to real improvements in health in old age. So I’m fairly balanced: I think there are promising areas, but there’s a lot of work to be done to see which area is going to be promising, and what the risks are, and how to make them work." —Venki RamakrishnanIn today’s episode, host Luisa Rodriguez speaks to Venki Ramakrishnan — molecular biologist and Nobel Prize winner — about his new book, Why We Die: The New Science of Aging and the Quest for Immortality.Links to learn more, highlights, and full transcript.They cover:What we can learn about extending human lifespan — if anything — from “immortal” aquatic animal species, cloned sheep, and the oldest people to have ever lived.Which areas of anti-ageing research seem most promising to Venki — including caloric restriction, removing senescent cells, cellular reprogramming, and Yamanaka factors — and which Venki thinks are overhyped.Why eliminating major age-related diseases might only extend average lifespan by 15 years.The social impacts of extending healthspan or lifespan in an ageing population — including the potential danger of massively increasing inequality if some people can access life-extension interventions while others can’t.And plenty more.Chapters:Cold open (00:00:00)Luisa's intro (00:01:04)The interview begins (00:02:21)Reasons to explore why we age and die (00:02:35)Evolutionary pressures and animals that don't biologically age (00:06:55)Why does ageing cause us to die? (00:12:24)Is there a hard limit to the human lifespan? (00:17:11)Evolutionary tradeoffs between fitness and longevity (00:21:01)How ageing resets with every generation, and what we can learn from clones (00:23:48)Younger blood (00:31:20)Freezing cells, organs, and bodies (00:36:47)Are the goals of anti-ageing research even realistic? (00:43:44)Dementia (00:49:52)Senescence (01:01:58)Caloric restriction and metabolic pathways (01:11:45)Yamanaka factors (01:34:07)Cancer (01:47:44)Mitochondrial dysfunction (01:58:40)Population effects of extended lifespan (02:06:12)Could increased longevity increase inequality? (02:11:48)What’s surprised Venki about this research (02:16:06)Luisa's outro (02:19:26)Producer: Keiran HarrisAudio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore
"Perception is quite difficult with cameras: even if you have a stereo camera, you still can’t really build a map of where everything is in space. It’s just very difficult. And I know that sounds surprising, because humans are very good at this. In fact, even with one eye, we can navigate and we can clear the dinner table. But it seems that we’re building in a lot of understanding and intuition about what’s happening in the world and where objects are and how they behave. For robots, it’s very difficult to get a perfectly accurate model of the world and where things are. So if you’re going to go manipulate or grasp an object, a small error in that position will maybe have your robot crash into the object, a delicate wine glass, and probably break it. So the perception and the control are both problems." —Ken GoldbergIn today’s episode, host Luisa Rodriguez speaks to Ken Goldberg — robotics professor at UC Berkeley — about the major research challenges still ahead before robots become broadly integrated into our homes and societies.Links to learn more, highlights, and full transcript.They cover:Why training robots is harder than training large language models like ChatGPT.The biggest engineering challenges that still remain before robots can be widely useful in the real world.The sectors where Ken thinks robots will be most useful in the coming decades — like homecare, agriculture, and medicine.Whether we should be worried about robot labour affecting human employment.Recent breakthroughs in robotics, and what cutting-edge robots can do today.Ken’s work as an artist, where he explores the complex relationship between humans and technology.And plenty more.Chapters:Cold open (00:00:00)Luisa's intro (00:01:19)General purpose robots and the “robotics bubble” (00:03:11)How training robots is different than training large language models (00:14:01)What can robots do today? (00:34:35)Challenges for progress: fault tolerance, multidimensionality, and perception (00:41:00)Recent breakthroughs in robotics (00:52:32)Barriers to making better robots: hardware, software, and physics (01:03:13)Future robots in home care, logistics, food production, and medicine (01:16:35)How might robot labour affect the job market? (01:44:27)Robotics and art (01:51:28)Luisa's outro (02:00:55)Producer: Keiran HarrisAudio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon MonsourContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore
"It’s very hard to find examples where people say, 'I’m starting from this point. I’m starting from this belief.' So we wanted to make that very legible to people. We wanted to say, 'Experts think this; accurate forecasters think this.' They might both be wrong, but we can at least start from here and figure out where we’re coming into a discussion and say, 'I am much less concerned than the people in this report; or I am much more concerned, and I think people in this report were missing major things.' But if you don’t have a reference set of probabilities, I think it becomes much harder to talk about disagreement in policy debates in a space that’s so complicated like this." —Ezra KargerIn today’s episode, host Luisa Rodriguez speaks to Ezra Karger — research director at the Forecasting Research Institute — about FRI’s recent Existential Risk Persuasion Tournament to come up with estimates of a range of catastrophic risks.Links to learn more, highlights, and full transcript.They cover:How forecasting can improve our understanding of long-term catastrophic risks from things like AI, nuclear war, pandemics, and climate change.What the Existential Risk Persuasion Tournament (XPT) is, how it was set up, and the results.The challenges of predicting low-probability, high-impact events.Why superforecasters’ estimates of catastrophic risks seem so much lower than experts’, and which group Ezra puts the most weight on.The specific underlying disagreements that superforecasters and experts had about how likely catastrophic risks from AI are.Why Ezra thinks forecasting tournaments can help build consensus on complex topics, and what he wants to do differently in future tournaments and studies.Recent advances in the science of forecasting and the areas Ezra is most excited about exploring next.Whether large language models could help or outperform human forecasters.How people can improve their calibration and start making better forecasts personally.Why Ezra thinks high-quality forecasts are relevant to policymakers, and whether they can really improve decision-making.And plenty more.Chapters:Cold open (00:00:00)Luisa’s intro (00:01:07)The interview begins (00:02:54)The Existential Risk Persuasion Tournament (00:05:13)Why is this project important? (00:12:34)How was the tournament set up? (00:17:54)Results from the tournament (00:22:38)Risk from artificial intelligence (00:30:59)How to think about these numbers (00:46:50)Should we trust experts or superforecasters more? (00:49:16)The effect of debate and persuasion (01:02:10)Forecasts from the general public (01:08:33)How can we improve people’s forecasts? (01:18:59)Incentives and recruitment (01:26:30)Criticisms of the tournament (01:33:51)AI adversarial collaboration (01:46:20)Hypotheses about stark differences in views of AI risk (01:51:41)Cruxes and different worldviews (02:17:15)Ezra’s experience as a superforecaster (02:28:57)Forecasting as a research field (02:31:00)Can large language models help or outperform human forecasters? (02:35:01)Is forecasting valuable in the real world? (02:39:11)Ezra’s book recommendations (02:45:29)Luisa's outro (02:47:54)Producer: Keiran HarrisAudio engineering: Dominic Armstrong, Ben Cordell, Milo McGuire, and Simon MonsourContent editing: Luisa Rodriguez, Katy Moore, and Keiran HarrisTranscriptions: Katy Moore
"I do think that there is a really significant sentiment among parts of the opposition that it’s not really just that this bill itself is that bad or extreme — when you really drill into it, it feels like one of those things where you read it and it’s like, 'This is the thing that everyone is screaming about?' I think it’s a pretty modest bill in a lot of ways, but I think part of what they are thinking is that this is the first step to shutting down AI development. Or that if California does this, then lots of other states are going to do it, and we need to really slam the door shut on model-level regulation or else they’re just going to keep going. "I think that is like a lot of what the sentiment here is: it’s less about, in some ways, the details of this specific bill, and more about the sense that they want this to stop here, and they’re worried that if they give an inch that there will continue to be other things in the future. And I don’t think that is going to be tolerable to the public in the long run. I think it’s a bad choice, but I think that is the calculus that they are making." —Nathan CalvinIn today’s episode, host Luisa Rodriguez speaks to Nathan Calvin — senior policy counsel at the Center for AI Safety Action Fund — about the new AI safety bill in California, SB 1047, which he’s helped shape as it’s moved through the state legislature.Links to learn more, highlights, and full transcript.They cover:What’s actually in SB 1047, and which AI models it would apply to.The most common objections to the bill — including how it could affect competition, startups, open source models, and US national security — and which of these objections Nathan thinks hold water.What Nathan sees as the biggest misunderstandings about the bill that get in the way of good public discourse about it.Why some AI companies are opposed to SB 1047, despite claiming that they want the industry to be regulated.How the bill is different from Biden’s executive order on AI and voluntary commitments made by AI companies.Why California is taking state-level action rather than waiting for federal regulation.How state-level regulations can be hugely impactful at national and global scales, and how listeners could get involved in state-level work to make a real difference on lots of pressing problems.And plenty more.Chapters:Cold open (00:00:00)Luisa's intro (00:00:57)The interview begins (00:02:30)What risks from AI does SB 1047 try to address? (00:03:10)Supporters and critics of the bill (00:11:03)Misunderstandings about the bill (00:24:07)Competition, open source, and liability concerns (00:30:56)Model size thresholds (00:46:24)How is SB 1047 different from the executive order? (00:55:36)Objections Nathan is sympathetic to (00:58:31)Current status of the bill (01:02:57)How can listeners get involved in work like this? (01:05:00)Luisa's outro (01:11:52)Producer and editor: Keiran HarrisAudio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
"This is a group of animals I think people are particularly unfamiliar with. They are especially poorly covered in our science curriculum; they are especially poorly understood, because people don’t spend as much time learning about them at museums; and they’re just harder to spend time with in a lot of ways, I think, for people. So people have pets that are vertebrates that they take care of across the taxonomic groups, and people get familiar with those from going to zoos and watching their behaviours there, and watching nature documentaries and more. But I think the insects are still really underappreciated, and that means that our intuitions are probably more likely to be wrong than with those other groups." —Meghan BarrettIn today’s episode, host Luisa Rodriguez speaks to Meghan Barrett — insect neurobiologist and physiologist at Indiana University Indianapolis and founding director of the Insect Welfare Research Society — about her work to understand insects’ potential capacity for suffering, and what that might mean for how humans currently farm and use insects. If you're interested in getting involved with this work, check out Meghan's recent blog post: I’m into insect welfare! What’s next?Links to learn more, highlights, and full transcript.They cover:The scale of potential insect suffering in the wild, on farms, and in labs.Examples from cutting-edge insect research, like how depression- and anxiety-like states can be induced in fruit flies and successfully treated with human antidepressants.How size bias might help explain why many people assume insects can’t feel pain.Practical solutions that Meghan’s team is working on to improve farmed insect welfare, such as standard operating procedures for more humane slaughter methods.Challenges facing the nascent field of insect welfare research, and where the main research gaps are.Meghan’s personal story of how she went from being sceptical of insect pain to working as an insect welfare scientist, and her advice for others who want to improve the lives of insects.And much more.Chapters:Cold open (00:00:00)Luisa's intro (00:01:02)The interview begins (00:03:06)What is an insect? (00:03:22)Size diversity (00:07:24)How important is brain size for sentience? (00:11:27)Offspring, parental investment, and lifespan (00:19:00)Cognition and behaviour (00:23:23)The scale of insect suffering (00:27:01)Capacity to suffer (00:35:56)The empirical evidence for whether insects can feel pain (00:47:18)Nociceptors (01:00:02)Integrated nociception (01:08:39)Response to analgesia (01:16:17)Analgesia preference (01:25:57)Flexible self-protective behaviour (01:31:19)Motivational tradeoffs and associative learning (01:38:45)Results (01:43:31)Reasons to be sceptical (01:47:18)Meghan’s probability of sentience in insects (02:10:20)Views of the broader entomologist community (02:18:18)Insect farming (02:26:52)How much to worry about insect farming (02:40:56)Inhumane slaughter and disease in insect farms (02:44:45)Inadequate nutrition, density, and photophobia (02:53:50)Most humane ways to kill insects at home (03:01:33)Challenges in researching this (03:07:53)Most promising reforms (03:18:44)Why Meghan is hopeful about working with the industry (03:22:17)Careers (03:34:08)Insect Welfare Research Society (03:37:16)Luisa's outro (03:47:01)Producer and editor: Keiran HarrisAudio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
The three biggest AI companies — Anthropic, OpenAI, and DeepMind — have now all released policies designed to make their AI models less likely to go rogue or cause catastrophic damage as they approach, and eventually exceed, human capabilities. Are they good enough?

That’s what host Rob Wiblin tries to hash out in this interview (recorded May 30) with Nick Joseph — one of the original cofounders of Anthropic, its current head of training, and a big fan of Anthropic’s “responsible scaling policy” (or “RSP”). Anthropic is the most safety focused of the AI companies, known for a culture that treats the risks of its work as deadly serious.

Links to learn more, highlights, video, and full transcript.

As Nick explains, these scaling policies commit companies to dig into what new dangerous things a model can do — after it’s trained, but before it’s in wide use. The companies then promise to put in place safeguards they think are sufficient to tackle those capabilities before availability is extended further. For instance, if a model could significantly help design a deadly bioweapon, then its weights need to be properly secured so they can’t be stolen by terrorists interested in using it that way.

As capabilities grow further — for example, if testing shows that a model could exfiltrate itself and spread autonomously in the wild — then new measures would need to be put in place to make that impossible, or demonstrate that such a goal can never arise.

Nick points out what he sees as the biggest virtues of the RSP approach, and then Rob pushes him on some of the best objections he’s found to RSPs being up to the task of keeping AI safe and beneficial. The two also discuss whether it's essential to eventually hand over operation of responsible scaling policies to external auditors or regulatory bodies, if those policies are going to be able to hold up against the intense commercial pressures that might end up arrayed against them.

In addition to all of that, Nick and Rob talk about:
What Nick thinks are the current bottlenecks in AI progress: people and time (rather than data or compute).
What it’s like working in AI safety research at the leading edge, and whether pushing forward capabilities (even in the name of safety) is a good idea.
What it’s like working at Anthropic, and how to get the skills needed to help with the safe development of AI.

And as a reminder, if you want to let us know your reaction to this interview, or send any other feedback, our inbox is always open at podcast@80000hours.org.

Chapters:
Cold open (00:00:00)
Rob’s intro (00:01:00)
The interview begins (00:03:44)
Scaling laws (00:04:12)
Bottlenecks to further progress in making AIs helpful (00:08:36)
Anthropic’s responsible scaling policies (00:14:21)
Pros and cons of the RSP approach for AI safety (00:34:09)
Alternatives to RSPs (00:46:44)
Is an internal audit really the best approach? (00:51:56)
Making promises about things that are currently technically impossible (01:07:54)
Nick’s biggest reservations about the RSP approach (01:16:05)
Communicating “acceptable” risk (01:19:27)
Should Anthropic’s RSP have wider safety buffers? (01:26:13)
Other impacts on society and future work on RSPs (01:34:01)
Working at Anthropic (01:36:28)
Engineering vs research (01:41:04)
AI safety roles at Anthropic (01:48:31)
Should concerned people be willing to take capabilities roles? (01:58:20)
Recent safety work at Anthropic (02:10:05)
Anthropic culture (02:14:35)
Overrated and underrated AI applications (02:22:06)
Rob’s outro (02:26:36)

Producer and editor: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video engineering: Simon Monsour
Transcriptions: Katy Moore
"In the 1980s, it was still apparently common to perform surgery on newborn babies without anaesthetic on both sides of the Atlantic. This led to appalling cases, and to public outcry, and to campaigns to change clinical practice. And as soon as [some courageous scientists] looked for evidence, it showed that this practice was completely indefensible and then the clinical practice was changed. People don’t need convincing anymore that we should take newborn human babies seriously as sentience candidates. But the tale is a useful cautionary tale, because it shows you how deep that overconfidence can run and how problematic it can be. It just underlines this point that overconfidence about sentience is everywhere and is dangerous." —Jonathan BirchIn today’s episode, host Luisa Rodriguez speaks to Dr Jonathan Birch — philosophy professor at the London School of Economics — about his new book, The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. (Check out the free PDF version!)Links to learn more, highlights, and full transcript.They cover:Candidates for sentience, such as humans with consciousness disorders, foetuses, neural organoids, invertebrates, and AIsHumanity’s history of acting as if we’re sure that such beings are incapable of having subjective experiences — and why Jonathan thinks that that certainty is completely unjustified.Chilling tales about overconfident policies that probably caused significant suffering for decades.How policymakers can act ethically given real uncertainty.Whether simulating the brain of the roundworm C. elegans or Drosophila (aka fruit flies) would create minds equally sentient to the biological versions.How new technologies like brain organoids could replace animal testing, and how big the risk is that they could be sentient too.Why Jonathan is so excited about citizens’ assemblies.Jonathan’s conversation with the Dalai Lama about whether insects are sentient.And plenty more.Chapters:Cold open (00:00:00)Luisa’s intro (00:01:20)The interview begins (00:03:04)Why does sentience matter? (00:03:31)Inescapable uncertainty about other minds (00:05:43)The “zone of reasonable disagreement” in sentience research (00:10:31)Disorders of consciousness: comas and minimally conscious states (00:17:06)Foetuses and the cautionary tale of newborn pain (00:43:23)Neural organoids (00:55:49)AI sentience and whole brain emulation (01:06:17)Policymaking at the edge of sentience (01:28:09)Citizens’ assemblies (01:31:13)The UK’s Sentience Act (01:39:45)Ways Jonathan has changed his mind (01:47:26)Careers (01:54:54)Discussing animal sentience with the Dalai Lama (01:59:08)Luisa’s outro (02:01:04)Producer and editor: Keiran HarrisAudio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
"Computational systems have literally millions of physical and conceptual components, and around 98% of them are embedded into your infrastructure without you ever having heard of them. And an inordinate amount of them can lead to a catastrophic failure of your security assumptions. And because of this, the Iranian secret nuclear programme failed to prevent a breach, most US agencies failed to prevent multiple breaches, most US national security agencies failed to prevent breaches. So ensuring your system is truly secure against highly resourced and dedicated attackers is really, really hard." —Sella NevoIn today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.Links to learn more, highlights, and full transcript.They cover:Real-world examples of sophisticated security breaches, and what we can learn from them.Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.New security measures that Sella hopes can mitigate with the growing risks.Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.And plenty more.Also, RAND is currently hiring for roles in technical and policy information security — check them out if you're interested in this field! Chapters:Cold open (00:00:00)Luisa’s intro (00:00:56)The interview begins (00:02:30)The importance of securing the model weights of frontier AI models (00:03:01)The most sophisticated and surprising security breaches (00:10:22)AI models being leaked (00:25:52)Researching for the RAND report (00:30:11)Who tries to steal model weights? (00:32:21)Malicious code and exploiting zero-days (00:42:06)Human insiders (00:53:20)Side-channel attacks (01:04:11)Getting access to air-gapped networks (01:10:52)Model extraction (01:19:47)Reducing and hardening authorised access (01:38:52)Confidential computing (01:48:05)Red-teaming and security testing (01:53:42)Careers in information security (01:59:54)Sella’s work on flood forecasting systems (02:01:57)Luisa’s outro (02:04:51)Producer and editor: Keiran HarrisAudio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore
Check out our channel: https://www.youtube.com/watch?v=lD7J9avsFbw
More croaky neophytes who think they're the first people to set foot on every idea they have... then rush on to a podcast to preen.
@28:00: It's always impressive to hear how proud people are to rediscover things that have been researched, discussed, and known for centuries. Here, the guest stumbles through a case for ends justifying means. What could go wrong? This is like listening to intelligent but ignorant 8th graders... or perhaps 1st-yr grad students, who love to claim that a topic has never been studied before, especially if the old concept is wearing a new name.
@15:30: The guest is greatly over-stating binary processing and signalling in neural networks. This is not at all a good explanation.
Ezra Klein's voice is a mix of nasal congestion, lisp, up-talk, vocal fry, New York, and inflated ego.
Rob's suggestion on price-gouging seems pretty poorly considered. There are plenty of historical examples of harmful price-gouging and I can't think of any that were beneficial, particularly not after a disaster. This approach seems wrong economically and morally. Price-gouging after a disaster is almost always a pure windfall. It's economically infeasible to stockpile for very low-probability events, especially if transporting/delivering the good is difficult. Even if the good can be mass-produced and delivered quickly in response to a demand spike, Rob would be advocating for a moral approach that runs against the grain of human moral intuitions in post-disaster settings. In such contexts, we prefer need-driven distributive justice and, secondarily, equality-based distributive justice. Conversely, Rob is suggesting an equity-based approach wherein the input-output ratio of equity is based on someone's socio-economic status, which is not just irrelevant to their actions in the em
@18:02: Oh really, Rob? Does correlation now imply causation or was journalistic coverage randomly selected and randomly assigned? Good grief.
She seems to be just a self-promoting aggregator. I didn't hear her say anything insightful. Even when pressed multiple times about how her interests pertain to the mission of 80,000 Hours, she just blathered out a few platitudes about the need for people to think about things (or worse, thinking "around" issues).
@1:18:38: Lots of very sloppy thinking and careless wording. Many lazy false equivalences--e.g., @1:18:38: equating (a) democrats' fact-based complaints in 2016 (e.g., about foreign interference, the Electoral College), when Clinton conceded the following day and democrats reconciled themselves to Trump's presidency, with (b) republicans spreading bald-faced lies about stolen elections (only the ones they lost, of course) and actively trying to overthrow the election, including through force. If this was her effort to seem apolitical with a ham-handed "both sides do it... or probably will" comment, then she isn't intelligent enough to have public platforms.
Is Rob's voice being played back at 1.5x? Is he hyped up on coke?
I'm sure Dr Ord is well-intentioned, but I find his arguments here exceptionally weak and thin. (Also, the uhs and ums are rather annoying after a while.)
So much vocal fry
Thank you, that was very inspirational!
A thought provoking, refreshing podcast