80,000 Hours Podcast with Rob Wiblin

Author: The 80,000 Hours team


Description

A show about the world's most pressing problems and how you can use your career to solve them.

Subscribe by searching for '80,000 Hours' wherever you get podcasts.

Hosted by Rob Wiblin, Director of Research at 80,000 Hours.
71 Episodes
It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition.

The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably.

In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism.

How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks?

Sunstein — co-author of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens.

He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable.

• Links to learn more, summary and full transcript.
• 80,000 Hours Annual Review 2018.
• How to donate to 80,000 Hours.

In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become, before they revealed them, or joined a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions.

According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case.

In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss:

• How much people misrepresent their views in democratic countries.
• Whether the finding that groups with an existing view tend towards a more extreme position would stand up in the replication crisis.
• When is it justified to encourage your own group to polarise?
• Sunstein's difficult experiences as a pioneer of animal rights law.
• Whether activists can do better by spending half their resources on public opinion surveys.
• Should people be more or less outspoken about their true views?
• What might be the next social revolution to take off?
• How can we learn about social movements that failed and disappeared?
• How to find out what people really think.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript on our site.

The 80,000 Hours Podcast is produced by Keiran Harris.
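The 'variable thresholds' idea is easiest to see with a toy model. Here is a minimal Python sketch in the spirit of Granovetter's classic threshold model of collective behaviour, which the kind of dynamics Sunstein describes draws on; the function name and the example populations are invented for illustration, not anything from the book or the episode:

```python
def cascade_size(thresholds):
    """Return how many people end up joining a movement.

    thresholds[i] is the number of visible participants person i needs
    to see before they join; a threshold of 0 means they join no matter
    what. Sweep repeatedly until no one else's threshold is met.
    """
    active = [False] * len(thresholds)
    changed = True
    while changed:
        changed = False
        visible = sum(active)
        for i, t in enumerate(thresholds):
            if not active[i] and visible >= t:
                active[i] = True
                changed = True
    return sum(active)


# Granovetter's textbook example: with thresholds 0, 1, 2, ..., 99 the
# cascade runs all the way, as each new joiner tips the next person over.
full_cascade = list(range(100))

# Shift a single hidden threshold from 1 to 2 and the chain never starts.
stalled = [0, 2] + list(range(2, 100))

print(cascade_size(full_cascade))  # 100
print(cascade_size(stalled))       # 1
```

In the first population the cascade runs all the way to a 'revolution'; in the second, which differs only in one person's hidden threshold, it stalls immediately. Because thresholds are private and preferences are falsified, the two societies look identical from the outside right up until they diverge.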
When you're building a bridge, responsibility for making sure it won't fall over isn't handed over to a few 'bridge not falling down engineers'. Making sure a bridge is safe to use and remains standing in a storm is completely central to the design, and indeed the entire project.

When it comes to artificial intelligence, commentators often distinguish between enhancing the capabilities of machine learning systems and enhancing their safety. But to Pushmeet Kohli, principal scientist and research team leader at DeepMind, research to make AI robust and reliable is no more a side-project in AI design than keeping a bridge standing is a side-project in bridge design.

Far from being an overhead on the 'real' work, it’s an essential part of making AI systems work at all. We don’t want AI systems to be out of alignment with our intentions, and that consideration must arise throughout their development.

Professor Stuart Russell — co-author of the most popular AI textbook — has gone as far as to suggest that if this view is right, it may be time to retire the term ‘AI safety research’ altogether.

• Want to be notified about high-impact opportunities to help ensure AI remains safe and beneficial? Tell us a bit about yourself and we’ll get in touch if an opportunity matches your background and interests.
• Links to learn more, summary and full transcript.
• And a few added thoughts on non-research roles.

With the goal of designing systems that are reliably consistent with desired specifications, DeepMind have recently published work on important technical challenges for the machine learning community.

For instance, Pushmeet is looking for efficient ways to test whether a system conforms to the desired specifications, even in peculiar situations, by creating an 'adversary' that proactively seeks out the worst failures possible. If the adversary can efficiently identify the worst-case input for a given model, DeepMind can catch rare failure cases before deploying a model in the real world. In the future, single mistakes by autonomous systems may have very large consequences, which will make even small failure probabilities unacceptable.

He's also looking into 'training specification-consistent models' and 'formal verification', while other researchers at DeepMind working on their AI safety agenda are figuring out how to understand agent incentives, avoid side-effects, and model AI rewards.

In today’s interview, we focus on the convergence between broader AI research and robustness, as well as:

• DeepMind’s work on the protein folding problem
• Parallels between ML problems and past challenges in software development and computer security
• How can you analyse the thinking of a neural network?
• Unique challenges faced by DeepMind’s technical AGI safety team
• How do you communicate with a non-human intelligence?
• What are the biggest misunderstandings about AI safety and reliability?
• Are there actually a lot of disagreements within the field?
• The difficulty of forecasting AI development

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.
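The 'adversary' idea amounts to searching for the worst-case input inside an allowed perturbation set. Below is a minimal, dependency-light Python sketch of that general pattern; the toy linear 'model', the epsilon bound, and the random-search strategy are stand-ins chosen for illustration, not DeepMind's actual methods (which rely on gradient-based and formally verified search):

```python
import numpy as np


def toy_model(x):
    """A stand-in 'model': a fixed linear scorer (higher = more confident)."""
    w = np.array([0.9, -0.4, 0.2])
    return float(x @ w)


def worst_case_input(model, x0, epsilon, trials=5000, seed=0):
    """Search the box |delta| <= epsilon around x0 for the input the model
    scores worst. This is the shape of adversarial testing: actively hunt
    for the failure inside the allowed specification, rather than waiting
    for it to show up after deployment.
    """
    rng = np.random.default_rng(seed)
    worst_x, worst_score = x0, model(x0)
    for _ in range(trials):
        candidate = x0 + rng.uniform(-epsilon, epsilon, size=x0.shape)
        score = model(candidate)
        if score < worst_score:
            worst_x, worst_score = candidate, score
    return worst_x, worst_score


x0 = np.array([1.0, 1.0, 1.0])
print("nominal score:", toy_model(x0))
adv_x, adv_score = worst_case_input(toy_model, x0, epsilon=0.1)
print("worst score found within epsilon:", round(adv_score, 3))
```

Random search is used here only to keep the sketch self-contained; the point is the shape of the problem: maximise failure over a constrained set of inputs before deployment, rather than waiting for rare failures in the wild.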
This is a cross-post of some interviews Rob did recently on two other podcasts — Mission Daily (from 2m) and The Good Life (from 1h13m).

Some of the content will be familiar to regular listeners — but if you’re at all interested in Rob’s personal thoughts, there should be quite a lot of new material to make listening worthwhile.

The first interview is with Chad Grills. They focused largely on new technologies and existential risks, but also discuss topics like:

• Why Rob is wary of fiction
• Egalitarianism in the evolution of hunter gatherers
• How to stop social media screwing up politics
• Careers in government versus business

The second interview is with Prof Andrew Leigh - the Shadow Assistant Treasurer in Australia. This one gets into more personal topics than we usually cover on the show, like:

• What advice would Rob give to his teenage self?
• Which person has most shaped Rob’s view of living an ethical life?
• Rob’s approach to giving to the homeless
• What does Rob do to maximise his own happiness?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
You’re 29 years old, and you’ve just been given a job in the White House. How do you quickly figure out how the US Executive Branch behemoth actually works, so that you can have as much impact as possible - before you quit or get kicked out?

That was the challenge put in front of Tom Kalil in 1993.

He had enough success to last a full 16 years inside the Clinton and Obama administrations, working to foster the development of the internet, then nanotechnology, and then cutting-edge brain modelling, among other things.

But not everyone figures out how to move the needle. In today's interview, Tom shares his experience with how to increase your chances of getting an influential role in government, and how to make the most of the opportunity if you get in.

Links to learn more, summary and full transcript.
Interested in US AI policy careers? Apply for one-on-one career advice here.
Vacancies at the Center for Security and Emerging Technology.
Our high-impact job board, which features other related opportunities.

He believes that Congressional gridlock leads people to greatly underestimate how much the Executive Branch can and does do on its own every day. Decisions by individuals change how billions of dollars are spent; regulations are enforced, and then suddenly they aren't; and a single sentence in the State of the Union can get civil servants to pay attention to a topic that would otherwise go ignored.

Over years at the White House Office of Science and Technology Policy, 'Team Kalil' built up a white board of principles. For example, 'the schedule is your friend': setting a meeting date with the President can force people to finish something, where they otherwise might procrastinate.

Or 'talk to who owns the paper'. People would wonder how Tom could get so many lines into the President's speeches. The answer was "figure out who's writing the speech, find them with the document, and tell them to add the line." Obvious, but not something most were doing.

Not everything is a precise operation though. Tom also tells us the story of NetDay, a project that was put together at the last minute because the President incorrectly believed it was already organised – and decided he was going to announce it in person.

In today's episode we get down to nuts & bolts, and discuss:

• How did Tom spin work on a primary campaign into a job in the next White House?
• Why does Tom think hiring is the most important work he did, and how did he decide who to bring onto the team?
• How do you get people to do things when you don't have formal power over them?
• What roles in the US government are most likely to help with the long-term future, or reducing existential risks?
• Is it possible, or even desirable, to get the general public interested in abstract, long-term policy ideas?
• What are 'policy entrepreneurs' and why do they matter?
• What is the role for prizes in promoting science and technology? What are other promising policy ideas?
• Why you can get more done by not taking credit.
• What can the White House do if an agency isn't doing what it wants?
• How can the effective altruism community improve the maturity of our policy recommendations?
• How much can talented individuals accomplish during a short-term stay in government?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
Elephants in chains at travelling circuses; pregnant pigs trapped in coffin-sized crates at factory farms; deer living in the wild. We should welcome the last as a pleasant break from the horror, right?

Maybe, but maybe not. While we tend to have a romanticised view of nature, life in the wild includes a range of extremely negative experiences. Many animals are hunted by predators, and constantly have to remain vigilant about the risk of being killed, and perhaps experiencing the horror of being eaten alive. Resource competition often leads to chronic hunger or starvation. Their diseases and injuries are never treated. In winter animals freeze to death; in droughts they die of heat or thirst.

There are fewer than 20 people in the world dedicating their lives to researching these problems.

But according to Persis Eskander, researcher at the Open Philanthropy Project, if we sum up the negative experiences of all wild animals, their sheer number could make the scale of the problem larger than most other near-term concerns.

Links to learn more, summary and full transcript.

Persis urges us to recognise that nature isn’t inherently good or bad, but rather the result of an amoral evolutionary process. For those that can't survive the brutal indifference of their environment, life is often a series of bad experiences, followed by an even worse death.

But should we actually intervene? How do we know what animals are sentient? How often do animals feel hunger, cold, fear, happiness, satisfaction, boredom, and intense agony? Are there long-term technologies that could eventually allow us to massively improve wild animal welfare?

For most of these big questions, the answer is: we don’t know. And Persis thinks we're far away from knowing enough to start interfering with ecosystems. But that's all the more reason to start looking at these questions.

There are some concrete steps we could take today, like improving the way wild caught fish are slaughtered. Fish might lack the charisma of a lion or the intelligence of a pig, but if they have the capacity to suffer — and evidence suggests that they do — we should be thinking of ways to kill them painlessly rather than allowing them to suffocate to death over hours.

In today’s interview we explore wild animal welfare as a new field of research, and discuss:

• Do we have a moral duty towards wild animals or not?
• How should we measure the number of wild animals?
• What are some key activities that generate a lot of suffering or pleasure for wild animals that people might not fully appreciate?
• Is there a danger in imagining how we as humans would feel if we were put into their situation?
• Should we eliminate parasites and predators?
• How important are insects?
• How strongly should we focus on just avoiding humans going in and making things worse?
• How does this compare to work on farmed animal suffering?
• The most compelling arguments for humanity not dedicating resources to wild animal welfare
• Is there much of a case for the idea that this work could improve the very long-term future of humanity?

Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly discuss:

• The importance of figuring out your values
• Chemistry, psychology, and other different paths towards working on wild animal welfare
• How to break into new fields

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
Governance matters. Policy change quickly took China from famine to fortune; Singapore from swamps to skyscrapers; and Hong Kong from fishing village to financial centre. Unfortunately, many governments are hard to reform and — to put it mildly — it's not easy to found a new country.

This has prompted poverty-fighters and political dreamers to look for creative ways to get new and better 'pseudo-countries' off the ground. The poor could then voluntarily migrate to them in search of security and prosperity. And innovators would be free to experiment with new political and legal systems without having to impose their ideas on existing jurisdictions.

The 'seasteading movement' imagined founding new self-governing cities on the sea, but obvious challenges have kept that one on the drawing board. Nobel Prize winner and World Bank President Paul Romer suggested 'charter cities', where a host country would volunteer for another country with better legal institutions to effectively govern some of its territory. But that idea too ran aground for political, practical and personal reasons.

Now Mark Lutter and Tamara Winter, of The Center for Innovative Governance Research (CIGR), are reviving the idea of 'charter cities', with some modifications. Gone is the idea of transferring sovereignty. Instead these cities would look more like the 'special economic zones' that worked miracles for Taiwan and China among others. But rather than keep the rest of the country's rules with a few pieces removed, they hope to start from scratch, opting in to the laws they want to keep, in order to leap forward to "best practices in commercial law."

Links to learn more, summary and full transcript.

Rob on The Good Life: Andrew Leigh in Conversation — on 'making the most of your 80,000 hours'.

The project has quickly gotten attention, with Mark and Tamara receiving funding from Tyler Cowen's Emergent Ventures (discussed in episode 45) and winning a Pioneer tournament.

Starting afresh with a new city makes it possible to clear away thousands of harmful rules without having to fight each of the thousands of interest groups that will viciously defend their privileges. Initially the city can fund infrastructure and public services by gradually selling off its land, which appreciates as the city flourishes. And with 40 million people relocating to cities every year, there are plenty of prospective migrants.

CIGR is fleshing out how these arrangements would work, advocating for them, and developing supporting services that make it easier for any jurisdiction to implement. They're currently in the process of influencing a new prospective satellite city in Zambia.

Of course, one can raise many criticisms of this idea: Is it likely to be taken up? Is CIGR really doing the right things to make it happen? Will it really reduce poverty if it is?

We discuss those questions, as well as:

• How did Mark get a new organisation off the ground, with fundraising and other staff?
• What made China's 'special economic zones' so successful?
• What are the biggest challenges in getting new cities off the ground?
• How did Mark find and hire Tamara? How did he know this was a good idea?
• Should people care about this idea if they aren't focussed on tackling poverty?
• Why aren't people already doing this?
• Why does Tamara support more people starting families?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
OpenAI’s Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were both developed using the same general-purpose reinforcement learning algorithm.

How is this possible and what does it show?

In today's interview Jack Clark, Policy Director at OpenAI, explains that from a computational perspective using a hand and playing Dota 2 are remarkably similar problems.

A robot hand needs to hold an object, move its fingers, and rotate it to the desired position. In Dota 2 you control a team of several different characters, moving them around a map to attack an enemy. Your hand has 20 or 30 different joints to move. The number of main actions in Dota 2 is 10 to 20, as you move your characters around a map.

When you’re rotating an object in your hand, you sense its friction, but you don’t directly perceive the entire shape of the object. In Dota 2, you're unable to see the entire map, and perceive what's there by moving around – metaphorically 'touching' the space.

Read our new in-depth article on becoming an AI policy specialist: The case for building expertise to work on US AI policy, and how to do it

Links to learn more, summary and full transcript.

This is true of many apparently distinct problems in life. Compressing different sensory inputs down to a fundamental computational problem which we know how to solve requires only the right general-purpose software.

The creation of such increasingly 'broad-spectrum' learning algorithms has been a key story of the last few years, and this development is likely to have unpredictable consequences, heightening the huge challenges that already exist in AI policy.

Today’s interview is a mega-AI-policy-quad episode; Jack is joined by his colleagues Amanda Askell and Miles Brundage, on the day they released their fascinating and controversial large general language model GPT-2.

We discuss:

• What are the most significant changes in the AI policy world over the last year or two?
• What capabilities are likely to develop over the next five, 10, 15, 20 years?
• How much should we focus on the next couple of years, versus the next couple of decades?
• How should we approach possible malicious uses of AI?
• What are some of the potential ways OpenAI could make things worse, and how can they be avoided?
• Publication norms for AI research
• Where do we stand in terms of arms races between countries or different AI labs?
• The case for creating newsletters
• Should the AI community have a closer relationship to the military?
• Working at OpenAI vs. working in the US government
• How valuable is Twitter in the AI policy world?

Rob is then joined by two of his colleagues – Niel Bowerman & Michelle Hutchinson – to quickly discuss:

• The reaction to OpenAI's release of GPT-2
• Jack’s critique of our US AI policy article
• How valuable are roles in government?
• Where do you start if you want to write content for a specific audience?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.
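The reason one algorithm can drive both systems is that, once wrapped in the same interface, a robot hand and a video game present the learner with the same kind of problem: observations in, actions out, rewards as feedback. Here is a minimal Python sketch of that shared interface; the ToyEnv class, its dimensions, and the random policy are invented stand-ins for illustration, not OpenAI's environments or training code:

```python
import numpy as np


class ToyEnv:
    """A stand-in environment. Anything exposing reset() -> observation and
    step(action) -> (observation, reward, done) presents the same interface,
    whether the 'body' behind it is a robot hand or a Dota 2 hero."""

    def __init__(self, obs_dim, n_actions, episode_len=50, seed=0):
        self.obs_dim = obs_dim
        self.n_actions = n_actions
        self.episode_len = episode_len
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        return self.rng.normal(size=self.obs_dim)

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == self.t % self.n_actions else 0.0
        done = self.t >= self.episode_len
        return self.rng.normal(size=self.obs_dim), reward, done


def evaluate(env, policy, episodes=20):
    """One general-purpose loop: nothing here needs to know whether the
    observations are joint angles and friction or a partially visible map."""
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done = env.step(policy(obs, env.n_actions))
            total += reward
        returns.append(total)
    return float(np.mean(returns))


def random_policy(obs, n_actions):
    return int(np.random.randint(n_actions))


hand_like = ToyEnv(obs_dim=24, n_actions=20)    # ~20-30 joints to move
dota_like = ToyEnv(obs_dim=200, n_actions=15)   # ~10-20 main actions
print(evaluate(hand_like, random_policy))
print(evaluate(dota_like, random_policy))
```

The same evaluate() loop runs unchanged against both toy environments; a general-purpose learning algorithm exploits exactly that kind of shared structure.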
“Politics. Business. Opinion. Science. Sports. Animal welfare. Existential risk.” Is this a plausible future lineup for major news outlets?

Funded by the Rockefeller Foundation and given very little editorial direction, Vox's Future Perfect aspires to be more or less that.

Competition in the news business creates pressure to write quick pieces on topical political issues that can drive lots of clicks with just a few hours' work. But according to Kelsey Piper, staff writer for this new section of Vox's website focused on effective altruist themes, Future Perfect's goal is to run in the opposite direction and make room for more substantive coverage that's not tied to the news cycle.

They hope that in the long term talented writers from other outlets across the political spectrum can also be attracted to tackle these topics.

Links to learn more, summary and full transcript.
Links to Kelsey's top articles.

Some skeptics of the project have questioned whether this general coverage of global catastrophic risks actually helps reduce them.

Kelsey responds: if you decide to dedicate your life to AI safety research, what’s the likely reaction from your family and friends? Do they think of you as someone about to join "that weird Silicon Valley apocalypse thing"? Or do they, having read about the issues widely, simply think “Oh, yeah. That seems important. I'm glad you're working on it.”

Kelsey believes that really matters, and is determined by broader coverage of these kinds of topics.

If that's right, is journalism a plausible pathway for doing the most good with your career, or did Kelsey just get particularly lucky? After all, journalism is a shrinking industry without an obvious revenue model to fund many writers looking into the world's most pressing problems.

Kelsey points out that one needn't take the risk of committing to journalism at an early age. Instead listeners can specialise in an important topic, while leaving open the option of switching into specialist journalism later on, should a great opportunity happen to present itself.

In today’s episode we discuss that path, as well as:

• What’s the day to day life of a Vox journalist like?
• How can good journalism get funded?
• Are there meaningful tradeoffs between doing what's in the interest of Vox and doing what’s good?
• How concerned should we be about the risk of effective altruism being perceived as partisan?
• How well can short articles effectively communicate complicated ideas?
• Are there alternative business models that could fund high quality journalism on a larger scale?
• How do you approach the case for taking AI seriously to a broader audience?
• How valuable might it be for media outlets to do Tetlock-style forecasting?
• Is it really a good idea to heavily tax billionaires?
• How do you avoid the pressure to get clicks?
• How possible is it to predict which articles are going to be popular?
• How did Kelsey build the skills necessary to work at Vox?
• General lessons for people dealing with very difficult life circumstances

Rob is then joined by two of his colleagues – Keiran Harris & Michelle Hutchinson – to quickly discuss:

• The risk political polarisation poses to long-termist causes
• How should specialists keep journalism available as a career option?
• Should we create a news aggregator that aims to make someone as well informed as possible in big-picture terms?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
This is a cross-post of an interview Rob did with Julia Galef on her podcast Rationally Speaking. Rob and Julia discuss how the career advice 80,000 Hours gives has changed over the years, and the biggest misconceptions about our views.

The topics will be familiar to the most fervent fans of this show — but we think that if you’ve listened to less than about half of the episodes we've released so far, you’ll find something new to enjoy here. Julia may be familiar to you as the guest on episode 7 of the show, way back in September 2017.

The conversation also covers topics like:

• How many people should try to get a job in finance and donate their income?
• The case for working to reduce global catastrophic risks in targeted ways, and historical precedents for this kind of work
• Why reducing risk is a better way to help the future than increasing economic growth
• What percentage of the world should ideally follow 80,000 Hours advice?

Links to learn more, summary and full transcript.

If you’re interested in the cooling and expansion of the universe, which comes up on the show, you should definitely check out our 29th episode with Dr Anders Sandberg.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into any podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
Pro-market economists love to wax rhapsodic about the capacity of markets to pull together the valuable local information spread across all of society about what people want and how to make it.

But when it comes to politics and voting - which also aim to aggregate the preferences and knowledge found in millions of individuals - the enthusiasm for finding clever institutional designs often turns to skepticism.

Today's guest, freewheeling economist Glen Weyl, won't have it, and is on a warpath to reform liberal democratic institutions in order to save them. Just last year he wrote Radical Markets: Uprooting Capitalism and Democracy for a Just Society with Eric Posner, but has already moved on, saying "in the 6 months since the book came out I've made more intellectual progress than in the whole 10 years before that."

Weyl believes we desperately need more efficient, equitable and decentralised ways to organise society, that take advantage of what each person knows, and his research agenda has already been making breakthroughs.

Links to learn more, summary and full transcript.
Our high impact job board.
Join our newsletter.

Despite a history in the best economics departments in the world - Harvard, Princeton, Yale and the University of Chicago - he is too worried for the future to sit in his office writing papers. Instead he has left the academy to try to inspire a social movement, RadicalxChange, with a vision of social reform as expansive as his own. You can sign up for their conference in Detroit in March here.

Economist Alex Tabarrok called his latest proposal, known as 'liberal radicalism', "a quantum leap in public-goods mechanism-design" - we explain how it works in the show. But the proposal, however good in theory, might struggle in the real world because it requires large subsidies, and compensates for people's selfishness so effectively that it might even be an overcorrection.

An earlier mechanism - 'quadratic voting' (QV) - would allow people to express the relative strength of their preferences in the democratic process. No longer would 51 people who support a proposal, but barely care about the issue, outvote 49 incredibly passionate opponents, predictably making society worse in the process. We explain exactly how in the episode.

Weyl points to studies showing that people are more likely to vote strongly not only about issues they *care* more about, but issues they *know* more about. He expects that allowing people to specialise and indicate when they know what they're talking about will create a democracy that does more to aggregate careful judgement, rather than just passionate ignorance.

But these and indeed all of Weyl's ideas have faced criticism. Some say the risk of unintended consequences is too great, or that they solve the wrong problem. Others see these proposals as unproven, impractical, or just another example of an intellectual engaged in grand social planning. I raise these concerns to see how he responds.

As big a topic as all of that is, this extended conversation also goes into the blockchain, problems with the effective altruism community and how auctions could replace private property. Don't miss it.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
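The arithmetic behind those two mechanisms is compact enough to show directly. Below is a minimal Python sketch; the numbers (51 mild supporters, 49 passionate opponents, the size of their credit budgets) are invented for illustration, and the functions are a simplification of the mechanisms discussed in the episode, not Weyl and Posner's actual specification:

```python
import math


def qv_tally(credits_for, credits_against):
    """Quadratic voting: casting v votes costs v**2 voice credits, so
    spending c credits buys sqrt(c) votes. Tally the sqrt-weighted votes."""
    return (sum(math.sqrt(c) for c in credits_for),
            sum(math.sqrt(c) for c in credits_against))


def lr_funding(contributions):
    """The 'liberal radicalism' (quadratic funding) rule in simplified form:
    a public good receives (sum of square roots of contributions) squared,
    with the gap above what was actually raised topped up from a subsidy pool."""
    raised = sum(contributions)
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total, total - raised  # (total funding, subsidy required)


# 51 mild supporters spend 1 credit each (1 vote each); 49 passionate
# opponents spend 9 credits each (3 votes each): the intense minority wins.
print(qv_tally([1] * 51, [9] * 49))   # (51.0, 147.0)

# 100 donors giving 1 each attract a large match; one donor giving 100 gets none.
print(lr_funding([1] * 100))          # (10000.0, 9900.0)
print(lr_funding([100]))              # (100.0, 0.0)
```

The second function also makes the episode's caveat concrete: the matching rule rewards broad support so strongly that the subsidy required can dwarf what donors themselves put in.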
Comments (1)

Sam McMahon

A thought provoking, refreshing podcast

Oct 26th