80,000 Hours Podcast with Rob Wiblin


Author: The 80000 Hours team

Subscribed: 2,644
Played: 145,445

Description

A show about the world's most pressing problems and how you can use your career to solve them.

Subscribe by searching for '80,000 Hours' wherever you get podcasts.

Hosted by Rob Wiblin, Director of Research at 80,000 Hours.
80 Episodes
In 1989, the professor of moral philosophy Peter Singer was all over the news for his inflammatory opinions about abortion. But the controversy stemmed from Practical Ethics — a book he'd actually released way back in 1979. It took a German translation, ten years on, for protests to kick off.

According to Singer, he honestly didn't expect this view to be as provocative as it became, and he certainly wasn't aiming to stir up trouble and get attention. But after the protests and the increasing coverage of his work in German media, the previously flat sales of Practical Ethics shot up. And the negative attention he received ultimately led him to a weekly opinion column in The New York Times.

• Singer's book The Life You Can Save has just been re-released as a 10th anniversary edition, available as a free e-book and audiobook, read by a range of celebrities. Get it here.
• Links to learn more, summary and full transcript.

Singer points out that as a result of this increased attention, many more people also read the rest of the book — which includes chapters covering global poverty, animal ethics, and other important topics where readers can do a lot of good. So should people actively try to court controversy with one view, in order to gain attention for another more important one? Perhaps sometimes, but controversy can also just have bad consequences. His critics may view him as someone who says whatever he thinks, hang the consequences, but Singer says that he gives public relations considerations plenty of thought.

One example is that Singer opposes efforts to advocate for open borders. Not because he thinks a world with freedom of movement is a bad idea per se, but rather because it may help elect leaders like Mr Trump.

Another is the focus of the effective altruism community. Singer certainly respects those who are focused on improving the long-term future of humanity, and thinks this is important work that should continue. But he's troubled by the possibility of extinction risks becoming the public face of the movement. He suspects there's a much narrower group of people who are likely to respond to that kind of appeal, compared to those who are drawn to work on global poverty or preventing animal suffering. And that to really transform philanthropy and culture more generally, the effective altruism community needs to focus on smaller donors with more conventional concerns.

Rob is joined in this interview by Arden Koehler, the newest addition to the 80,000 Hours team, both for the interview and a post-episode discussion. They only had an hour with Peter, but also cover:

• What does he think are the most plausible alternatives to consequentialism?
• Is it more humane to eat wild-caught animals than farmed animals?
• The re-release of The Life You Can Save
• His most and least strategic career decisions
• Population ethics, and other arguments for and against prioritising the long-term future
• What led him to change his mind on significant questions in moral philosophy?
• And more.

In the post-episode discussion, Rob and Arden continue talking about:

• The pros and cons of keeping EA as one big movement
• Singer's thoughts on immigration
• And consequentialism with side constraints.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq. Illustration of Singer: Matthias Seifarth.
"…it started when the Soviet Union fell apart and there was a real desire to ensure security of nuclear materials and pathogens, and that scientists with [WMD-related] knowledge could get paid so that they wouldn't go to countries and sell that knowledge." Ambassador Bonnie Jenkins has had an incredible career in diplomacy and global security. Today she’s a nonresident senior fellow at the Brookings Institution and president of Global Connections Empowering Global Change, where she works on global health, infectious disease and defence innovation. In 2017 she founded her own nonprofit, the Women of Color Advancing Peace, Security and Conflict Transformation (WCAPS). But in this interview we focus on her time as Ambassador at the U.S. Department of State under the Obama administration, where she worked for eight years as Coordinator for Threat Reduction Programs in the Bureau of International Security and Nonproliferation. In that role, Bonnie coordinated the Department of State’s work to prevent weapons of mass destruction (WMD) terrorism with programmes funded by other U.S. departments and agencies, and as well as other countries. • Links to learn more, summary and full transcript.• Talks from over 100 other speakers at EA Global.• Having trouble with podcast 'chapters' on this episode? Please report any problems to keiran at 80000hours dot org. What was it like to be an ambassador focusing on an issue, rather than an ambassador of a country? Bonnie says the travel was exhausting. She could find herself in Africa one week, and Indonesia the next. She’d meet with folks going to New York for meetings at the UN one day, then hold her own meetings at the White House the next. Each event would have a distinct purpose. For one, she’d travel to Germany as a US Representative, talking about why the two countries should extend their partnership. For another, she could visit the Food and Agriculture Organization to talk about why they need to think more about biosecurity issues. No day was like the previous one. Bonnie was also a leading U.S. official in the launch and implementation of the Global Health Security Agenda discussed at length in episode 27. Before returning to government in 2009, Bonnie served as program officer for U.S. Foreign and Security Policy at the Ford Foundation. She also served as counsel on the 9/11 Commission. Bonnie was the lead staff member conducting research, interviews, and preparing commission reports on counterterrorism policies in the Office of the Secretary of Defense and on U.S. military plans targeting al-Qaeda before 9/11. And as if that all weren't curious enough four years ago Bonnie decided to go vegan. We talk about her work so far as well as: • How listeners can start a career like hers • Mistakes made by Mr Obama and Mr Trump • Networking, the value of attention, and being a vegan in DC • And 2020 Presidential candidates.Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.The 80,000 Hours Podcast is produced by Keiran Harris.
November 3 2020, 10:32PM: CNN, NBC, and FOX report that Donald Trump has narrowly won Florida, and with it, re-election.

November 3 2020, 11:46PM: The NY Times and Wall Street Journal report that some group has successfully hacked electronic voting systems across the country, including Florida. The malware has spread to tens of thousands of machines and deletes any record of its activity, so the returning officer of Florida concedes they actually have no idea who won the state — and don't see how they can figure it out.

What on Earth happens next?

Today's guest — world-renowned computer security expert Bruce Schneier — thinks this scenario is plausible, and the ensuing chaos would sow so much distrust that half the country would never accept the election result. Unfortunately the US has no recovery system for a situation like this, unlike parliamentary democracies, which can just rerun the election a few weeks later.

• Links to learn more, summary and full transcript.
• Motivating article: Information security careers for global catastrophic risk reduction by Zabel and Muehlhauser

The constitution says the state legislature decides, and they can do so however they like; one tied local election in Texas was settled by playing a hand of poker.

Elections serve two purposes. The first is the obvious one: to pick a winner. The second, but equally important, is to convince the loser to go along with it — which is why hacks often focus on convincing the losing side that the election wasn't fair.

Schneier thinks there's a need to agree on how this situation should be handled before something like it happens, and before America falls into severe infighting as everyone tries to turn the situation to their political advantage. And to fix our voting systems, we urgently need two things: a voter-verifiable paper ballot and risk-limiting audits.

According to Schneier, computer security experts look at current electronic voting machines and can barely believe their eyes. But voting machine designers never understand the security weaknesses of what they're designing, because they have a bureaucrat's rather than a hacker's mindset. The ideal computer security expert walks into a shop and thinks, "You know, here's how I would shoplift." They automatically see where the cameras are, whether there are alarms, and where the security guards aren't watching.

In this episode we discuss this hacker mindset, and how to use a career in security to protect democracy and guard dangerous secrets from people who shouldn't get access to them. We also cover:

• How can we have surveillance of dangerous actors, without falling back into authoritarianism?
• When, if ever, should information about weaknesses in society's security be kept secret?
• How secure are nuclear weapons systems around the world?
• How worried should we be about deepfakes?
• Schneier's critiques of blockchain technology
• Why technologists should be vital in shaping policy
• What are the most consequential computer security problems today?
• Could a career in information security be very useful for reducing global catastrophic risks?
• And more.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the linked transcript.

The 80,000 Hours Podcast is produced by Keiran Harris.
Today's episode is a compilation of interviews I recently recorded for two other shows, Love Your Work and The Neoliberal Podcast.

If you've listened to absolutely everything on this podcast feed, you'll have heard four interviews with me already, but fortunately I don't think these two include much repetition, and I've gotten a decent amount of positive feedback on both.

First up, I speak with David Kadavy on his show, Love Your Work. This is a particularly personal and relaxed interview. We talk about all sorts of things, including nicotine gum, plastic straw bans, whether recycling is important, how many lives a doctor saves, why interviews should go for at least 2 hours, how athletes doping could be good for the world, and many other fun topics.

• Our annual impact survey is about to close — I'd really appreciate it if you could take 3–10 minutes to fill it out now.
• The blog post about this episode.

At some points we even actually discuss effective altruism and 80,000 Hours, but you can easily skip through those bits if they feel too familiar.

The second interview is with Jeremiah Johnson on The Neoliberal Podcast. It starts 2 hours and 15 minutes into this recording.

Neoliberalism in the sense used by this show is not the free market fundamentalism you might associate with the term. Rather it's a centrist or even centre-left view that supports things like social liberalism, multilateral international institutions, trade, high rates of migration, racial justice, inclusive institutions, financial redistribution, prioritising the global poor, market urbanism, and environmental sustainability.

This is the more demanding of the two conversations, as listeners to that show have already heard of effective altruism, so we were able to get the best arguments Jeremiah could offer against focusing on improving the long-term future of the world. Jeremiah is more of a fan of donating to evidence-backed global health charities recommended by GiveWell, and does so himself. I appreciate him having done his homework and forcing me to do my best to explain how well my views can stand up to counterarguments.

It was a challenge for me to paint the whole picture in the half an hour we spent on longtermism, and I expect there are answers in there which will be fresh even for regular listeners.

I hope you enjoy both conversations! Feel free to email me with any feedback.

The 80,000 Hours Podcast is produced by Keiran Harris.
1. Fill out our annual impact survey here.
2. Find a great vacancy on our job board.
3. Learn about our key ideas, and get links to our top articles.
4. Join our newsletter for an email about what's new, every 2 weeks or so.
5. Or follow our pages on Facebook and Twitter.

——

Once a year 80,000 Hours runs a survey to find out whether we've helped our users have a larger social impact with their life and career. We and our donors need to know whether our services, like this podcast, are helping people enough to continue them or scale them up, and it's only by hearing from you that we can make these decisions in a sensible way.

So, if 80,000 Hours' podcast, job board, articles, headhunting, advising or other projects have somehow contributed to your life or career plans, please take 3–10 minutes to let us know how. You can also let us know where we've fallen short, which helps us fix problems with what we're doing.

We've refreshed the survey this year, hopefully making it easier to fill out than in the past. We'll keep this appeal up for about two weeks, but if you fill it out now that means you definitely won't forget!

Thanks so much, and talk to you again in a normal episode soon.

— Rob
Historically, progress in the field of cryptography has had major consequences. It has changed the course of major wars, made it possible to do business on the internet, and enabled private communication between both law-abiding citizens and dangerous criminals. Could it have similarly significant consequences in future?

Today's guest — Vitalik Buterin — is world-famous as the lead developer of Ethereum, a successor to the cryptocurrency Bitcoin that added the capacity for smart contracts and decentralised organisations. Buterin first proposed Ethereum at the age of 20, and by the age of 23 its success had likely made him a billionaire.

At the same time, far from indulging hype about these so-called 'blockchain' technologies, he has been candid about the limited good accomplished by Bitcoin and other currencies developed using cryptographic tools — and the breakthroughs that will be needed before they can have a meaningful social impact. In his own words, *"blockchains as they currently exist are in many ways a joke, right?"*

But Buterin is not just a realist. He's also an idealist, who has been helping to advance big ideas for new social institutions that might help people better coordinate to pursue their shared goals.

• Links to learn more, summary and full transcript.

By combining theories in economics and mechanism design with advances in cryptography, he has been pioneering the new interdisciplinary field of 'cryptoeconomics'. Economist Tyler Cowen has observed that, "at 25, Vitalik appears to repeatedly rediscover important economics results from famous papers, without knowing about the papers at all."

Along with previous guest Glen Weyl, Buterin has helped develop a model for so-called 'quadratic funding', which in principle could transform the provision of 'public goods'. That is, goods that people benefit from whether they help pay for them or not.

Examples of goods that are fully or partially 'public goods' include sound decision-making in government, international peace, scientific advances, disease control, the existence of smart journalism, preventing climate change, deflecting asteroids headed to Earth, and the elimination of suffering. Their underprovision in part reflects the difficulty of getting people to pay for anything when they can instead free-ride on the efforts of others. Anything that could reduce this failure of coordination might transform the world.

But these and other related proposals face major hurdles. They're vulnerable to collusion, might be used to fund scams, and remain untested at a large scale — not to mention that anything with a square root sign in it is going to struggle to achieve societal legitimacy.
Is the prize large enough to justify efforts to overcome these challenges?

In today's extensive three-hour interview, Buterin and I cover:

• What the blockchain has accomplished so far, and what it might achieve in the next decade
• Why many social problems can be viewed as a coordination failure to provide a public good
• Whether any of the ideas for decentralised social systems emerging from the blockchain community could really work
• His view of 'effective altruism' and 'long-termism'
• Why he is optimistic about 'quadratic funding', but pessimistic about replacing existing voting with 'quadratic voting'
• Why humanity might have to abandon living in cities
• And much more.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
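For the curious, the 'square root sign' mentioned above refers to the quadratic funding matching rule, which can be shown in a few lines. The Python sketch below is purely illustrative and not from the episode; it assumes an uncapped matching pool, whereas real deployments cap the subsidy and add anti-collusion measures.

```python
import math

def quadratic_funding(contributions):
    """Quadratic funding: a project's total funding is the square of the sum of
    the square roots of its individual contributions. The matching subsidy tops
    up the raw donations to that total, so a broad base of small donors attracts
    far more matching than one large donor giving the same amount."""
    total = sum(math.sqrt(c) for c in contributions) ** 2
    subsidy = total - sum(contributions)
    return total, subsidy

# 100 people giving $1 each vs. one person giving $100:
print(quadratic_funding([1] * 100))  # (10000.0, 9900.0) -- strongly matched
print(quadratic_funding([100]))      # (100.0, 0.0)      -- no match at all
```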
Imagine that – one day – humanity dies out. At some point, many millions of years later, intelligent life might well evolve again. Is there any message we could leave that would reliably help them out?

In his second appearance on the 80,000 Hours Podcast, machine learning researcher and polymath Paul Christiano suggests we try to answer this question with a related thought experiment: are there any messages we might want to send back to our ancestors in the year 1700 that would have made history likely to go in a better direction than it did? It seems there probably are.

• Links to learn more, summary, and full transcript.
• Paul's first appearance on the show in episode 44.
• An out-take on decision theory.

We could tell them hard-won lessons from history; mention some research questions we wish we'd started addressing earlier; hand over all the social science we have that fosters peace and cooperation; and at the same time steer clear of engineering hints that would speed up the development of dangerous weapons.

But, as Christiano points out, even if we could satisfactorily figure out what we'd like to be able to tell our ancestors, that's just the first challenge. We'd need to leave the message somewhere that they could identify and dig up. While there are some promising options, this turns out to be remarkably hard to do, as anything we put on the Earth's surface quickly gets buried far underground.

But even if we figure out a satisfactory message, and a way to ensure it's found, a civilization this far in the future won't speak any language like our own. And being another species, they presumably won't share as many fundamental concepts with us as humans from 1700 would. If we knew a way to leave them thousands of books and pictures in a material that wouldn't break down, would they be able to decipher what we meant to tell them, or would it simply remain a mystery?

That's just one of many playful questions discussed in today's episode with Christiano — a frequent writer who's willing to brave questions that others find too strange or hard to grapple with.

We also talk about why divesting a little bit from harmful companies might be more useful than I'd been thinking, whether creatine might make us a bit smarter, and whether carbon-dioxide-filled conference rooms make us a lot stupider.

Finally, we get a big update on progress in machine learning and efforts to make sure it's reliably aligned with our goals, which is Paul's main research project. He responds to the views that DeepMind's Pushmeet Kohli espoused in a previous episode, and we discuss whether we'd be better off if AI progress turned out to be most limited by algorithmic insights, or by our ability to manufacture enough computer processors.

Some other issues that come up along the way include:

• Are there any supplements people can take that make them think better?
• What implications do our views on meta-ethics have for aligning AI with our goals?
• Is there much of a risk that the future will contain anything optimised for causing harm?
• An out-take about the implications of decision theory, which we decided was too confusing and confused to stay in the main recording.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.
From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did.

Some think this is the best historical analogy we have for how machine learning could alter life in the 21st century.

In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to communicate quickly with units in the field over great distances.

How might international security be altered if the impact of machine learning reaches a similar scope to that of electricity? Today's guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for such disruptive technical changes that might threaten international peace.

• Links to learn more, summary and full transcript
• Philosophy is one of the hardest grad programs. Is it worth it, if you want to use ideas to change the world? by Arden Koehler and Will MacAskill
• The case for building expertise to work on US AI policy, and how to do it by Niel Bowerman
• AI strategy and governance roles on the job board

Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop 'intuitions' that inform their judgement about future cases. This is something humans do constantly, whether we're playing tennis, reading someone's face, diagnosing a patient, or figuring out which business ideas are likely to succeed.

Sometimes these ML algorithms can seem uncannily insightful, and they're only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth -- all in the first five minutes of our day.

Rapid advances in ML, and the many prospective military applications, have people worrying about an 'AI arms race' between the US and China. Henry Kissinger and former Google CEO Eric Schmidt recently wrote that AI could "destabilize everything from nuclear détente to human friendships." Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands.

But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense?
And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy?

In today's episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen's experience living and studying in China.

We cover:

• Why immigration is the main policy area that should be affected by AI advances today.
• Why talking about an 'arms race' in AI is premature.
• How Bobby Kennedy may have positively affected the Cuban Missile Crisis.
• Whether it's possible to become a China expert and still get a security clearance.
• Can access to ML algorithms be restricted, or is that just not practical?
• Whether AI could help stabilise authoritarian regimes.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
Have you ever been infuriated by a doctor's unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won't tell you the chances you'll win your case?

Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can't assess the likelihood of different outcomes we're in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul's Drag Race.

Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day.

He has spent 40 years as a meticulous social scientist, collecting millions of predictions from tens of thousands of people, in order to figure out how good humans really are at foreseeing the future, and what habits of thought allow us to do better.

Along with other psychologists, he identified that many ordinary people are attracted to a 'folk probability' that draws just three distinctions — 'impossible', 'possible' and 'certain' — and which leads to major systematic mistakes. But with the right mindset and training we can become capable of accurately discriminating between differences as fine as 56% versus 57% likely.

• Links to learn more, summary and full transcript
• The calibration training app
• Sign up for the Civ-5 counterfactual forecasting tournament
• A review of the evidence on good forecasting practices
• Learn more about Effective Altruism Global

In the aftermath of Iraq and WMDs, the US intelligence community hired him to prevent the same thing from ever happening again, and his guide — Superforecasting: The Art and Science of Prediction — became a bestseller back in 2014.

That was five years ago. In today's interview, Tetlock explains how his research agenda continues to advance, today using the game Civilization 5 to see how well we can predict what would have happened in elusive counterfactual worlds we never get to see, and discovering how simple algorithms can complement or substitute for human judgement.

We discuss how his work can be applied to your personal life to answer high-stakes questions, like how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. (To help you get better at figuring those things out, our site now has a training app developed by the Open Philanthropy Project and Clearer Thinking that teaches you to distinguish your '70 percents' from your '80 percents'.)

We also bring up some tough methodological questions raised by the author of a recent review of the forecasting literature. And we find out what jobs people can take if they want to make it their profession to improve the reasonableness of decision-making in the major institutions that shape the world, as Tetlock has over many decades.

We view Tetlock's work as so core to living well that we've brought him back for a second and longer appearance on the show — his first was back in episode 15.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
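As a rough illustration of what 'distinguishing your 70 percents from your 80 percents' means in practice, here is a minimal Python sketch of a calibration check, the kind of analysis Tetlock's research and the linked training app revolve around. It is not code from either project; the bucketing scheme and the toy data are assumptions made purely for illustration.

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Group probabilistic forecasts into buckets (rounded to the nearest 10%)
    and compare the average stated probability in each bucket with the fraction
    of those events that actually happened. A well-calibrated forecaster's
    '70 percents' come true roughly 70% of the time."""
    buckets = defaultdict(list)
    for p, happened in forecasts:              # (probability, 1 if it happened else 0)
        buckets[round(p, 1)].append((p, happened))
    rows = []
    for bucket, items in sorted(buckets.items()):
        avg_p = sum(p for p, _ in items) / len(items)
        hit_rate = sum(h for _, h in items) / len(items)
        rows.append((bucket, avg_p, hit_rate, len(items)))
    return rows

# Toy forecasts: (stated probability, outcome)
toy = [(0.7, 1), (0.7, 0), (0.7, 1), (0.9, 1), (0.9, 1), (0.2, 0), (0.2, 1)]
for bucket, avg_p, hit_rate, n in calibration_table(toy):
    print(f"stated ~{avg_p:.0%} (n={n}): came true {hit_rate:.0%} of the time")
```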
It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition.

The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably.

In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism.

How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks?

Sunstein — co-author of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens.

He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable.

• Links to learn more, summary and full transcript.
• 80,000 Hours Annual Review 2018.
• How to donate to 80,000 Hours.

In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become, before they revealed them, or joined a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions.

According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case.

In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss:

• How much people misrepresent their views in democratic countries.
• Whether the finding that groups with an existing view tend towards a more extreme position would stand up in the replication crisis.
• When is it justified to encourage your own group to polarise?
• Sunstein's difficult experiences as a pioneer of animal rights law.
• Whether activists can do better by spending half their resources on public opinion surveys.
• Should people be more or less outspoken about their true views?
• What might be the next social revolution to take off?
• How can we learn about social movements that failed and disappeared?
• How to find out what people really think.

Get this episode by subscribing to our podcast on the world's most pressing problems: type 80,000 Hours into your podcasting app. Or read the transcript on our site.

The 80,000 Hours Podcast is produced by Keiran Harris.
When you're building a bridge, responsibility for making sure it won't fall over isn't handed over to a few 'bridge not falling down engineers'. Making sure a bridge is safe to use and remains standing in a storm is completely central to the design, and indeed the entire project.

When it comes to artificial intelligence, commentators often distinguish between enhancing the capabilities of machine learning systems and enhancing their safety. But to Pushmeet Kohli, principal scientist and research team leader at DeepMind, research to make AI robust and reliable is no more a side-project in AI design than keeping a bridge standing is a side-project in bridge design.

Far from being an overhead on the 'real' work, it's an essential part of making AI systems work at all. We don't want AI systems to be out of alignment with our intentions, and that consideration must arise throughout their development.

Professor Stuart Russell — co-author of the most popular AI textbook — has gone as far as to suggest that if this view is right, it may be time to retire the term 'AI safety research' altogether.

• Want to be notified about high-impact opportunities to help ensure AI remains safe and beneficial? Tell us a bit about yourself and we'll get in touch if an opportunity matches your background and interests.
• Links to learn more, summary and full transcript.
• And a few added thoughts on non-research roles.

With the goal of designing systems that are reliably consistent with desired specifications, DeepMind have recently published work on important technical challenges for the machine learning community.

For instance, Pushmeet is looking for efficient ways to test whether a system conforms to the desired specifications, even in peculiar situations, by creating an 'adversary' that proactively seeks out the worst failures possible. If the adversary can efficiently identify the worst-case input for a given model, DeepMind can catch rare failure cases before deploying a model in the real world. In the future, single mistakes by autonomous systems may have very large consequences, which will make even small failure probabilities unacceptable.

He's also looking into 'training specification-consistent models' and 'formal verification', while other researchers at DeepMind working on their AI safety agenda are figuring out how to understand agent incentives, avoid side-effects, and model AI rewards.

In today's interview, we focus on the convergence between broader AI research and robustness, as well as:

• DeepMind's work on the protein folding problem
• Parallels between ML problems and past challenges in software development and computer security
• How can you analyse the thinking of a neural network?
• Unique challenges faced by DeepMind's technical AGI safety team
• How do you communicate with a non-human intelligence?
• What are the biggest misunderstandings about AI safety and reliability?
• Are there actually a lot of disagreements within the field?
• The difficulty of forecasting AI development

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.
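To make the 'adversary' idea above a little more concrete, here is a minimal, illustrative Python sketch of searching for a worst-case, specification-violating input. It is not DeepMind's method — their published work uses gradient-based and learned attacks on neural networks — but it captures the basic loop of proactively hunting for the input where a model fails hardest. The toy model, specification and input set are assumptions.

```python
import random

def find_worst_case_input(model, violation, candidate_inputs, n_trials=10_000, seed=0):
    """Toy adversary: sample candidate inputs and keep the one where the model's
    output violates the desired specification most severely. Real adversarial
    testing replaces random sampling with gradient methods or a learned attacker,
    but the goal is the same: surface rare worst-case failures before deployment."""
    rng = random.Random(seed)
    worst_x, worst_score = None, float("-inf")
    for _ in range(n_trials):
        x = rng.choice(candidate_inputs)
        score = violation(x, model(x))     # > 0 means the spec is violated
        if score > worst_score:
            worst_x, worst_score = x, score
    return worst_x, worst_score

# Example spec: the model's output should never exceed 1.0 on inputs in [-2, 2].
model = lambda x: x ** 2 - 2.5
violation = lambda x, y: y - 1.0           # positive = violation, larger = worse
inputs = [i / 100 for i in range(-200, 201)]
print(find_worst_case_input(model, violation, inputs))  # finds x = ±2.0, score 0.5
```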
This is a cross-post of some interviews Rob did recently on two other podcasts — Mission Daily (from 2m) and The Good Life (from 1h13m).

Some of the content will be familiar to regular listeners — but if you're at all interested in Rob's personal thoughts, there should be quite a lot of new material to make listening worthwhile.

The first interview is with Chad Grills. They focused largely on new technologies and existential risks, but also discuss topics like:

• Why Rob is wary of fiction
• Egalitarianism in the evolution of hunter-gatherers
• How to stop social media screwing up politics
• Careers in government versus business

The second interview is with Prof Andrew Leigh - the Shadow Assistant Treasurer in Australia. This one gets into more personal topics than we usually cover on the show, like:

• What advice would Rob give to his teenage self?
• Which person has most shaped Rob's view of living an ethical life?
• Rob's approach to giving to the homeless
• What does Rob do to maximise his own happiness?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
You're 29 years old, and you've just been given a job in the White House. How do you quickly figure out how the US Executive Branch behemoth actually works, so that you can have as much impact as possible - before you quit or get kicked out?

That was the challenge put in front of Tom Kalil in 1993.

He had enough success to last a full 16 years inside the Clinton and Obama administrations, working to foster the development of the internet, then nanotechnology, and then cutting-edge brain modelling, among other things.

But not everyone figures out how to move the needle. In today's interview, Tom shares his experience with how to increase your chances of getting an influential role in government, and how to make the most of the opportunity if you get in.

• Links to learn more, summary and full transcript.
• Interested in US AI policy careers? Apply for one-on-one career advice here.
• Vacancies at the Center for Security and Emerging Technology.
• Our high-impact job board, which features other related opportunities.

He believes that Congressional gridlock leads people to greatly underestimate how much the Executive Branch can and does do on its own every day. Decisions by individuals change how billions of dollars are spent; regulations are enforced, and then suddenly they aren't; and a single sentence in the State of the Union can get civil servants to pay attention to a topic that would otherwise go ignored.

Over years at the White House Office of Science and Technology Policy, 'Team Kalil' built up a white board of principles. For example, 'the schedule is your friend': setting a meeting date with the President can force people to finish something, where they otherwise might procrastinate.

Or 'talk to who owns the paper'. People would wonder how Tom could get so many lines into the President's speeches. The answer was "figure out who's writing the speech, find them with the document, and tell them to add the line." Obvious, but not something most were doing.

Not everything is a precise operation though. Tom also tells us the story of NetDay, a project that was put together at the last minute because the President incorrectly believed it was already organised – and decided he was going to announce it in person.

In today's episode we get down to nuts & bolts, and discuss:

• How did Tom spin work on a primary campaign into a job in the next White House?
• Why does Tom think hiring is the most important work he did, and how did he decide who to bring onto the team?
• How do you get people to do things when you don't have formal power over them?
• What roles in the US government are most likely to help with the long-term future, or reducing existential risks?
• Is it possible, or even desirable, to get the general public interested in abstract, long-term policy ideas?
• What are 'policy entrepreneurs' and why do they matter?
• What is the role for prizes in promoting science and technology? What are other promising policy ideas?
• Why you can get more done by not taking credit.
• What can the White House do if an agency isn't doing what it wants?
• How can the effective altruism community improve the maturity of our policy recommendations?
• How much can talented individuals accomplish during a short-term stay in government?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
Elephants in chains at travelling circuses; pregnant pigs trapped in coffin-sized crates at factory farms; deer living in the wild. We should welcome the last as a pleasant break from the horror, right?

Maybe, but maybe not. While we tend to have a romanticised view of nature, life in the wild includes a range of extremely negative experiences. Many animals are hunted by predators, and constantly have to remain vigilant about the risk of being killed, perhaps ultimately experiencing the horror of being eaten alive. Resource competition often leads to chronic hunger or starvation. Their diseases and injuries are never treated. In winter animals freeze to death; in droughts they die of heat or thirst.

There are fewer than 20 people in the world dedicating their lives to researching these problems. But according to Persis Eskander, researcher at the Open Philanthropy Project, if we sum up the negative experiences of all wild animals, their sheer number could make the scale of the problem larger than most other near-term concerns.

• Links to learn more, summary and full transcript.

Persis urges us to recognise that nature isn't inherently good or bad, but rather the result of an amoral evolutionary process. For those that can't survive the brutal indifference of their environment, life is often a series of bad experiences, followed by an even worse death.

But should we actually intervene? How do we know what animals are sentient? How often do animals feel hunger, cold, fear, happiness, satisfaction, boredom, and intense agony? Are there long-term technologies that could eventually allow us to massively improve wild animal welfare?

For most of these big questions, the answer is: we don't know. And Persis thinks we're far away from knowing enough to start interfering with ecosystems. But that's all the more reason to start looking at these questions.

There are some concrete steps we could take today, like improving the way wild-caught fish are slaughtered. Fish might lack the charisma of a lion or the intelligence of a pig, but if they have the capacity to suffer — and evidence suggests that they do — we should be thinking of ways to kill them painlessly rather than allowing them to suffocate to death over hours.

In today's interview we explore wild animal welfare as a new field of research, and discuss:

• Do we have a moral duty towards wild animals or not?
• How should we measure the number of wild animals?
• What are some key activities that generate a lot of suffering or pleasure for wild animals that people might not fully appreciate?
• Is there a danger in imagining how we as humans would feel if we were put into their situation?
• Should we eliminate parasites and predators?
• How important are insects?
• How strongly should we focus on just avoiding humans going in and making things worse?
• How does this compare to work on farmed animal suffering?
• The most compelling arguments for humanity not dedicating resources to wild animal welfare
• Is there much of a case for the idea that this work could improve the very long-term future of humanity?

Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly discuss:

• The importance of figuring out your values
• Chemistry, psychology, and other different paths towards working on wild animal welfare
• How to break into new fields

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
Governance matters. Policy change quickly took China from famine to fortune; Singapore from swamps to skyscrapers; and Hong Kong from fishing village to financial centre. Unfortunately, many governments are hard to reform and — to put it mildly — it's not easy to found a new country.

This has prompted poverty-fighters and political dreamers to look for creative ways to get new and better 'pseudo-countries' off the ground. The poor could then voluntarily migrate to them in search of security and prosperity. And innovators would be free to experiment with new political and legal systems without having to impose their ideas on existing jurisdictions.

The 'seasteading movement' imagined founding new self-governing cities on the sea, but obvious challenges have kept that one on the drawing board. Nobel Prize winner and World Bank President Paul Romer suggested 'charter cities', where a host country would volunteer for another country with better legal institutions to effectively govern some of its territory. But that idea too ran aground for political, practical and personal reasons.

Now Mark Lutter and Tamara Winter, of The Center for Innovative Governance Research (CIGR), are reviving the idea of 'charter cities', with some modifications. Gone is the idea of transferring sovereignty. Instead these cities would look more like the 'special economic zones' that worked miracles for Taiwan and China, among others. But rather than keep the rest of the country's rules with a few pieces removed, they hope to start from scratch, opting in to the laws they want to keep, in order to leap forward to "best practices in commercial law."

• Links to learn more, summary and full transcript.
• Rob on The Good Life: Andrew Leigh in Conversation — on 'making the most of your 80,000 hours'.

The project has quickly gotten attention, with Mark and Tamara receiving funding from Tyler Cowen's Emergent Ventures (discussed in episode 45) and winning a Pioneer tournament.

Starting afresh with a new city makes it possible to clear away thousands of harmful rules without having to fight each of the thousands of interest groups that will viciously defend their privileges. Initially the city can fund infrastructure and public services by gradually selling off its land, which appreciates as the city flourishes. And with 40 million people relocating to cities every year, there are plenty of prospective migrants.

CIGR is fleshing out how these arrangements would work, advocating for them, and developing supporting services that make it easier for any jurisdiction to implement them. They're currently in the process of influencing a new prospective satellite city in Zambia.

Of course, one can raise many criticisms of this idea: Is it likely to be taken up? Is CIGR really doing the right things to make it happen? Will it really reduce poverty if it is?

We discuss those questions, as well as:

• How did Mark get a new organisation off the ground, with fundraising and other staff?
• What made China's 'special economic zones' so successful?
• What are the biggest challenges in getting new cities off the ground?
• How did Mark find and hire Tamara? How did he know this was a good idea?
• Should people care about this idea if they aren't focussed on tackling poverty?
• Why aren't people already doing this?
• Why does Tamara support more people starting families?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
OpenAI's Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were both developed using the same general-purpose reinforcement learning algorithm.

How is this possible and what does it show?

In today's interview Jack Clark, Policy Director at OpenAI, explains that from a computational perspective using a hand and playing Dota 2 are remarkably similar problems.

A robot hand needs to hold an object, move its fingers, and rotate it to the desired position. In Dota 2 you control a team of several different characters, moving them around a map to attack an enemy.

Your hand has 20 or 30 different joints to move. The number of main actions in Dota 2 is 10 to 20, as you move your characters around a map.

When you're rotating an object in your hand, you sense its friction, but you don't directly perceive the entire shape of the object. In Dota 2, you're unable to see the entire map and perceive what's there by moving around – metaphorically 'touching' the space.

• Read our new in-depth article on becoming an AI policy specialist: The case for building expertise to work on US AI policy, and how to do it
• Links to learn more, summary and full transcript

This is true of many apparently distinct problems in life. Compressing different sensory inputs down to a fundamental computational problem which we know how to solve only requires the right general-purpose software.

The creation of such increasingly 'broad-spectrum' learning algorithms has been a key story of the last few years, and this development is likely to have unpredictable consequences, heightening the huge challenges that already exist in AI policy.

Today's interview is a mega AI policy episode; Jack is joined by his colleagues Amanda Askell and Miles Brundage, on the day they released their fascinating and controversial large general language model GPT-2.

We discuss:

• What are the most significant changes in the AI policy world over the last year or two?
• What capabilities are likely to develop over the next five, 10, 15, 20 years?
• How much should we focus on the next couple of years, versus the next couple of decades?
• How should we approach possible malicious uses of AI?
• What are some of the potential ways OpenAI could make things worse, and how can they be avoided?
• Publication norms for AI research
• Where do we stand in terms of arms races between countries or different AI labs?
• The case for creating newsletters
• Should the AI community have a closer relationship to the military?
• Working at OpenAI vs. working in the US government
• How valuable is Twitter in the AI policy world?

Rob is then joined by two of his colleagues – Niel Bowerman & Michelle Hutchinson – to quickly discuss:

• The reaction to OpenAI's release of GPT-2
• Jack's critique of our US AI policy article
• How valuable are roles in government?
• Where do you start if you want to write content for a specific audience?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

The 80,000 Hours Podcast is produced by Keiran Harris.
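The point about one general-purpose algorithm handling both a robot hand and Dota 2 comes down to the shared agent–environment interface of reinforcement learning. The Python sketch below is purely illustrative: the interface names are assumptions, loosely in the style of Gym-like libraries, and OpenAI's actual systems used large-scale implementations of algorithms such as PPO. It simply shows that nothing in the learning loop mentions hands or video games.

```python
from typing import Any, Protocol, Tuple

class Environment(Protocol):
    """Whether the task is rotating a block with a robot hand or playing Dota 2,
    a reinforcement learning agent only ever sees this interface."""
    def reset(self) -> Any: ...
    def step(self, action: Any) -> Tuple[Any, float, bool]: ...  # observation, reward, done

def run_episode(env: Environment, policy, update) -> float:
    """One generic episode: observe, act, receive reward, update the policy.
    The same loop (and the same learning algorithm inside `update`) can be reused
    across very different tasks; only `env` and the sizes of the observation and
    action spaces change."""
    obs, done, total_reward = env.reset(), False, 0.0
    while not done:
        action = policy(obs)                    # e.g. a neural network's output
        next_obs, reward, done = env.step(action)
        update(obs, action, reward, next_obs)   # task-agnostic learning rule
        obs, total_reward = next_obs, total_reward + reward
    return total_reward
```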
"Politics. Business. Opinion. Science. Sports. Animal welfare. Existential risk." Is this a plausible future lineup for major news outlets?

Funded by the Rockefeller Foundation and given very little editorial direction, Vox's Future Perfect aspires to be more or less that.

Competition in the news business creates pressure to write quick pieces on topical political issues that can drive lots of clicks with just a few hours' work. But according to Kelsey Piper, staff writer for this new section of Vox's website focused on effective altruist themes, Future Perfect's goal is to run in the opposite direction and make room for more substantive coverage that's not tied to the news cycle.

They hope that in the long term talented writers from other outlets across the political spectrum can also be attracted to tackle these topics.

• Links to learn more, summary and full transcript.
• Links to Kelsey's top articles.

Some skeptics of the project have questioned whether this general coverage of global catastrophic risks actually helps reduce them.

Kelsey responds: if you decide to dedicate your life to AI safety research, what's the likely reaction from your family and friends? Do they think of you as someone about to join "that weird Silicon Valley apocalypse thing"? Or do they, having read about the issues widely, simply think "Oh, yeah. That seems important. I'm glad you're working on it."

Kelsey believes that really matters, and is determined by broader coverage of these kinds of topics.

If that's right, is journalism a plausible pathway for doing the most good with your career, or did Kelsey just get particularly lucky? After all, journalism is a shrinking industry without an obvious revenue model to fund many writers looking into the world's most pressing problems. Kelsey points out that one needn't take the risk of committing to journalism at an early age.
Instead listeners can specialise in an important topic, while leaving open the option of switching into specialist journalism later on, should a great opportunity happen to present itself.

In today's episode we discuss that path, as well as:

• What's the day-to-day life of a Vox journalist like?
• How can good journalism get funded?
• Are there meaningful tradeoffs between doing what's in the interest of Vox and doing what's good?
• How concerned should we be about the risk of effective altruism being perceived as partisan?
• How well can short articles effectively communicate complicated ideas?
• Are there alternative business models that could fund high quality journalism on a larger scale?
• How do you approach the case for taking AI seriously to a broader audience?
• How valuable might it be for media outlets to do Tetlock-style forecasting?
• Is it really a good idea to heavily tax billionaires?
• How do you avoid the pressure to get clicks?
• How possible is it to predict which articles are going to be popular?
• How did Kelsey build the skills necessary to work at Vox?
• General lessons for people dealing with very difficult life circumstances

Rob is then joined by two of his colleagues – Keiran Harris & Michelle Hutchinson – to quickly discuss:

• The risk political polarisation poses to long-termist causes
• How should specialists keep journalism available as a career option?
• Should we create a news aggregator that aims to make someone as well informed as possible in big-picture terms?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
This is a cross-post of an interview Rob did with Julia Galef on her podcast Rationally Speaking. Rob and Julia discuss how the career advice 80,000 Hours gives has changed over the years, and the biggest misconceptions about our views.

The topics will be familiar to the most fervent fans of this show — but we think that if you've listened to less than about half of the episodes we've released so far, you'll find something new to enjoy here. Julia may be familiar to you as the guest on episode 7 of the show, way back in September 2017.

The conversation also covers topics like:

• How many people should try to get a job in finance and donate their income?
• The case for working to reduce global catastrophic risks in targeted ways, and historical precedents for this kind of work
• Why reducing risk is a better way to help the future than increasing economic growth
• What percentage of the world should ideally follow 80,000 Hours advice?

• Links to learn more, summary and full transcript.

If you're interested in the cooling and expansion of the universe, which comes up on the show, you should definitely check out our 29th episode with Dr Anders Sandberg.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into any podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
Pro-market economists love to wax rhapsodic about the capacity of markets to pull together the valuable local information spread across all of society about what people want and how to make it.

But when it comes to politics and voting - which also aim to aggregate the preferences and knowledge found in millions of individuals - the enthusiasm for finding clever institutional designs often turns to skepticism.

Today's guest, freewheeling economist Glen Weyl, won't have it, and is on a warpath to reform liberal democratic institutions in order to save them. Just last year he wrote Radical Markets: Uprooting Capitalism and Democracy for a Just Society with Eric Posner, but has already moved on, saying "in the 6 months since the book came out I've made more intellectual progress than in the whole 10 years before that."

Weyl believes we desperately need more efficient, equitable and decentralised ways to organise society, that take advantage of what each person knows, and his research agenda has already been making breakthroughs.

• Links to learn more, summary and full transcript
• Our high impact job board
• Join our newsletter

Despite a history in the best economics departments in the world - Harvard, Princeton, Yale and the University of Chicago - he is too worried for the future to sit in his office writing papers. Instead he has left the academy to try to inspire a social movement, RadicalxChange, with a vision of social reform as expansive as his own. You can sign up for their conference in Detroit in March here.

Economist Alex Tabarrok called his latest proposal, known as 'liberal radicalism', "a quantum leap in public-goods mechanism-design" - we explain how it works in the show. But the proposal, however good in theory, might struggle in the real world because it requires large subsidies, and compensates for people's selfishness so effectively that it might even be an overcorrection.

An earlier mechanism - 'quadratic voting' (QV) - would allow people to express the relative strength of their preferences in the democratic process. No longer would 51 people who support a proposal, but barely care about the issue, outvote 49 incredibly passionate opponents, predictably making society worse in the process. We explain exactly how in the episode.

Weyl points to studies showing that people are more likely to vote strongly not only about issues they *care* more about, but also issues they *know* more about. He expects that allowing people to specialise and indicate when they know what they're talking about will create a democracy that does more to aggregate careful judgement, rather than just passionate ignorance.

But these and indeed all of Weyl's ideas have faced criticism. Some say the risk of unintended consequences is too great, or that they solve the wrong problem. Others see these proposals as unproven, impractical, or just another example of an intellectual engaged in grand social planning. I raise these concerns to see how he responds.

As big a topic as all of that is, this extended conversation also goes into the blockchain, problems with the effective altruism community, and how auctions could replace private property. Don't miss it.

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
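To make the 51-versus-49 example concrete, here is a minimal, illustrative Python sketch of how quadratic voting changes the tally relative to one-person-one-vote. It is not from the episode or Weyl's papers; the credit budget and the particular vote amounts are simplifying assumptions, and real QV designs involve many further details.

```python
def plurality(voters):
    """One person, one vote: intensity of preference is ignored."""
    return sum(1 if v["supports"] else -1 for v in voters)

def quadratic_voting(voters):
    """Quadratic voting: casting n votes on an issue costs n^2 of a voter's
    limited credits, so people who care intensely buy more votes, while
    lukewarm voters rationally buy few and save credits for other issues."""
    tally = 0.0
    for v in voters:
        votes = v["credits_spent"] ** 0.5   # n votes cost n^2 credits
        tally += votes if v["supports"] else -votes
    return tally

# 51 mild supporters spend 1 credit each; 49 passionate opponents spend 100 each
# (out of an assumed 100-credit budget per voter).
mild = [{"supports": True, "credits_spent": 1}] * 51
passionate = [{"supports": False, "credits_spent": 100}] * 49

print(plurality(mild + passionate))         # +2: the barely-interested majority wins
print(quadratic_voting(mild + passionate))  # 51*1 - 49*10 = -439: the passionate minority prevails
```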
Rebroadcast: this episode was originally released in October 2017.

What if you were in a position to give away billions of dollars to improve the world? What would you do with it? This is the problem facing Program Officers at the Open Philanthropy Project - people like Dr Nick Beckstead.

Following a PhD in philosophy, Nick works to figure out where money can do the most good. He's been involved in major grants in a wide range of areas, including ending factory farming through technological innovation, safeguarding the world from advances in biotechnology and artificial intelligence, and spreading rational compassion.

• Links to learn more, episode summary & full transcript
• These are the world's highest impact career paths according to our research
• Why despite global progress, humanity is probably facing its most dangerous time ever

This episode is a tour through some of the toughest questions 'effective altruists' face when figuring out how to best improve the world, including:

* Should we mostly try to help people currently alive, or future generations? Nick studied this question for years in his PhD thesis, On the Overwhelming Importance of Shaping the Far Future. (The first 31 minutes of this episode is a snappier version of my conversation with Toby Ord.)
* Is clean meat (aka *in vitro* meat) technologically feasible any time soon, or should we be looking for plant-based alternatives?
* What are the greatest risks to human civilisation?
* To stop malaria, is it more cost-effective to use technology to eliminate mosquitos than to distribute bed nets?
* Should people who want to improve the future work for changes that will be very useful in a specific scenario, or just generally try to improve how well humanity makes decisions?
* What specific jobs should our listeners take in order for Nick to be able to spend more money in useful ways to improve the world?
* Should we expect the future to be better if the economy grows more quickly - or more slowly?

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.
Comments (1)

Sam McMahon

A thought-provoking, refreshing podcast

Oct 26th