80,000 Hours Podcast with Rob Wiblin

Author: The 80,000 Hours team

Subscribed: 3,424 • Played: 154,621

Description

A show about the world's most pressing problems and how you can use your career to solve them.

Subscribe by searching for '80,000 Hours' wherever you get podcasts.

Hosted by Rob Wiblin, Director of Research at 80,000 Hours.
90 Episodes
Today's bonus episode of the podcast is a quick conversation between me and my fellow 80,000 Hours researcher Arden Koehler about a few topics, including the demandingness of morality, work-life balance, and emotional reactions to injustice. Arden is about to graduate with a philosophy PhD from New York University, so naturally we dive right into some challenging implications of utilitarian philosophy and how it might be applied to real life. So far the feedback on the post-episode chats that we've done has been positive, so we thought we'd go ahead and try out this freestanding one. But fair warning: it's among the more difficult episodes to follow, and probably not the best one to listen to first, as you'll benefit from having more context! If you'd like to listen to more of Arden, you can find her in episode 67, David Chalmers on the nature and ethics of consciousness, or episode 66, Peter Singer on being provocative, EA, and how his moral views have changed. Here's more information on some of the issues we touch on: • Consequentialism on Wikipedia • Appropriate dispositions on the Stanford Encyclopedia of Philosophy • Demandingness objection on Wikipedia • And a paper on epistemic normativity. ——— I mention the call for papers of the Academic Workshop on Global Priorities in the introduction — you can learn more here. And finally, Toby Ord — one of our founding Trustees and a Senior Research Fellow in Philosophy at Oxford University — has his new book The Precipice: Existential Risk and the Future of Humanity coming out next week. I've read it and very much enjoyed it. Find out where you can pre-order it here. We'll have an interview with him coming up soon.
nCoV is alarming governments and citizens around the world. It has killed more than 1,000 people, brought the Chinese economy to a standstill, and continues to show up in more and more places. But bad though it is, it's much closer to a warning shot than a worst-case scenario. The next emerging infectious disease could easily be more contagious, more fatal, or both. Despite improvements in the last few decades, humanity is still not nearly prepared enough to contain new diseases. We identify them too slowly. We can't do enough to reduce their spread. And we lack vaccines or drug treatments for at least a year, if they ever arrive at all. • Links to learn more, summary and full transcript. This is a precarious situation, especially with advances in biotechnology increasing our ability to modify viruses and bacteria as we like. In today's episode, Cassidy Nelson, a medical doctor and research scholar at Oxford University's Future of Humanity Institute, explains 12 things her research group think urgently need to happen if we're to keep the risk at acceptable levels. The ideas are: Science 1. Roll out genetic sequencing tests that let you test someone for all known and unknown pathogens in one go. 2. Fund research into faster ‘platform’ methods for going from pathogen to vaccine, perhaps using innovation prizes. 3. Fund R&D into broad-spectrum drugs, especially antivirals, similar to how we have generic antibiotics against multiple types of bacteria. Response 4. Develop a national plan for responding to a severe pandemic, regardless of the cause. Have a backup plan for when things are so bad the normal processes have stopped working entirely. 5. Rigorously evaluate in what situations travel bans are warranted. (They're more often counterproductive.) 6. Coax countries into more rapidly sharing their medical data, so that during an outbreak the disease can be understood and countermeasures deployed as quickly as possible. 7. Set up genetic surveillance in hospitals, public transport and elsewhere, to detect new pathogens before an outbreak — or even before patients develop symptoms. 8. Run regular tabletop exercises within governments to simulate how a pandemic response would play out. Oversight 9. Mandate disclosure of accidents in the biosafety labs which handle the most dangerous pathogens. 10. Figure out how to govern DNA synthesis businesses, to make it harder to mail-order the DNA of a dangerous pathogen. 11. Require full cost-benefit analysis of 'dual-use' research projects that can generate global risks. 12. And finally, to maintain momentum, it's necessary to clearly assign responsibility for the above to particular individuals and organisations. These advances can be pursued by politicians and public servants, as well as academics, entrepreneurs and doctors, opening the door for many listeners to pitch in to help solve this incredibly pressing problem. In the episode Rob and Cassidy also talk about: • How Cassidy went from clinical medicine to a PhD studying novel pathogens with pandemic potential. • The pros, and significant cons, of travel restrictions. • Whether the same policies work for natural and anthropogenic pandemics. • Ways listeners can pursue a career in biosecurity. • Where we stand with nCoV as of today. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Transcriptions: Zakee Ulhaq.
The State Council of China's 2017 AI plan was the starting point of China’s AI planning; China’s approach to AI is defined by its top-down and monolithic nature; China is winning the AI arms race; and there is little to no discussion of issues of AI ethics and safety in China. How many of these ideas have you heard? In his paper Deciphering China's AI Dream, today's guest, PhD student Jeff Ding, outlines why he believes none of these claims are true. • Links to learn more, summary and full transcript. • What’s the best charity to donate to? He first places China’s new AI strategy in the context of its past science and technology plans, as well as other countries’ AI plans. What is China actually doing in the space of AI development? Jeff emphasises that China's AI strategy did not appear out of nowhere with the 2017 State Council AI development plan, which attracted a lot of overseas attention. Rather, that was just another step forward in a long trajectory of increasing focus on science and technology. It's connected with a plan to develop an 'Internet of Things', and linked to a history of strategic planning for technology in areas like aerospace and biotechnology. And it was not just the central government that was moving in this space; companies were already pushing forward in AI development, and local-level governments already had their own AI plans. You could argue that the central government was following their lead in AI more than the reverse. What are the different levers that China is pulling to try to spur AI development? Here, Jeff wanted to challenge the myth that China's AI development plan is based on a monolithic central plan requiring people to develop AI. In fact, bureaucratic agencies, companies, academic labs, and local governments each set up their own strategies, which sometimes conflict with the central government. Are China's AI capabilities especially impressive? In the paper Jeff develops a new index to measure and compare the US and China's progress in AI. Jeff’s AI Potential Index — which incorporates trends and capabilities in data, hardware, research and talent, and the commercial AI ecosystem — indicates China’s AI capabilities are about half those of America. His measure, though imperfect, dispels the notion that China's AI capabilities have surpassed those of the US or made it the world's leading AI power. Following that 2017 plan, a lot of Western observers thought that to have a good national AI strategy we'd need to figure out how to play catch-up with China. Yet Chinese strategic thinkers and writers at the time actually thought that they were behind — because the Obama administration had issued a series of three white papers in 2016. Finally, Jeff turns to the potential consequences of China’s AI dream for issues of national security, economic development, AI safety and social governance. He claims that, despite the widespread belief to the contrary, substantive discussions about AI safety and ethics are indeed emerging in China. For instance, a new book from Tencent’s Research Institute is proactive in calling for stronger awareness of AI safety issues. In today’s episode, Rob and Jeff go through this widely-discussed report, and also cover: • The best analogies for thinking about the growing influence of AI • How prominent Chinese figures think about AI • Coordination with China • China’s social credit system • Suggestions for people who want to become professional China specialists • And more.
Get this episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq.
Two 80,000 Hours researchers, Robert Wiblin and Howie Lempel, record an experimental bonus episode about the new 2019-nCoV virus. See this list of resources, including many discussed in the episode, to learn more. In the 1h15m conversation we cover: • What is it? • How many people have it? • How contagious is it? • What fraction of people who contract it die? • How likely is it to spread out of control? • What's the range of plausible fatalities worldwide? • How does it compare to other epidemics? • What don't we know and why? • What actions should listeners take, if any? • How should the complexities of the above be communicated by public health professionals? Here's a link to the hygiene advice from Laurie Garrett mentioned in the episode. Recorded 2 Feb 2020. The 80,000 Hours Podcast is produced by Keiran Harris.
You’re given a box with a set of dice in it. If you roll an even number, a person's life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it? A committed consequentialist might say, "Sure! Free money!" But most will think it obvious that you should say no. You've only gotten a tiny benefit, in exchange for moral responsibility over whether other people live or die. And yet, according to today’s return guest, philosophy Prof Will MacAskill, in a real sense we’re shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others. • Links to learn more, summary and full transcript. • Job opportunities at the Global Priorities Institute. To see this, imagine you’re deciding whether to redeem a coupon for a free movie. If you go, you’ll need to drive to the cinema. By affecting traffic throughout the city, you’ll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children. So — if you’ve impacted at least 7,500 days — then, statistically speaking, you've probably influenced the exact timing of a conception event (a rough version of this arithmetic is sketched in the note after this entry). With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you've changed the identity of a future person. That different child will now impact all sorts of things as they go about their life, including future conception events. And then those new people will impact further future conception events, and so on. After 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies. As a result, you’ll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes. Over that century, as the identities of everyone change as a result of your action, many of the 'new' people will cause car crashes that wouldn't have occurred in their absence, including crashes that prematurely kill people alive today. Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise. So, if you go for this drive, you’ll save hundreds of people from premature death, and cause the early death of an equal number of others. But you’ll get to see a free movie, worth $10. Should you do it? This setup forms the basis of ‘the paralysis argument’, explored in one of Will’s recent papers. Because most 'non-consequentialists' endorse an act/omission distinction… post truncated due to character limit, finish reading the full explanation here. So what's the best way to fix this strange conclusion? We discuss a few options, but the most promising might bring people a lot closer to full consequentialism than is immediately apparent. In this episode Will and I also cover: • Are we, or are we not, living in the most influential time in history? • The culture of the effective altruism community • Will's new lower estimate of the risk of human extinction • Why Will is now less focused on AI • The differences between Americans and Brits • Why feeling guilty about characteristics you were born with is crazy • And plenty more. Get this episode by subscribing: type 80,000 Hours into your podcasting app. Producer: Keiran Harris. Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
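Note: here is a rough back-of-the-envelope version of the conception-timing arithmetic above, sketched in Python purely for illustration. The figures (a 30,000-day average life, about two children per person, and the 7,500-day threshold) come straight from the episode description; treating conception timing as a Poisson process is an assumption added here to turn the expected count into a probability, and is not how Will frames it in the paper.

    import math

    # Figures quoted in the episode description (rough averages).
    DAYS_PER_LIFE = 30_000       # an average life is about 30,000 days
    CHILDREN_PER_LIFE = 2        # an average person has about two children

    # On average, one conception event is caused per this many person-days lived.
    DAYS_PER_CONCEPTION = DAYS_PER_LIFE / CHILDREN_PER_LIFE   # 15,000

    def chance_of_shifting_a_conception(person_days_affected: float) -> float:
        # Expected number of conception events falling inside the affected days,
        # converted to a probability under a Poisson assumption (added here).
        expected = person_days_affected / DAYS_PER_CONCEPTION
        return 1 - math.exp(-expected)

    print(chance_of_shifting_a_conception(7_500))    # roughly 0.39
    print(chance_of_shifting_a_conception(20_000))   # roughly 0.74

On this toy model, perturbing a few thousand person-days already gives a substantial chance of shifting at least one conception, and perturbing tens of thousands makes it more likely than not, which is all the paralysis argument needs to get going.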
Rebroadcast: this episode was originally released in October 2018. Paul Christiano is one of the smartest people I know. After our first session produced such great material, we decided to do a second recording, resulting in our longest interview so far. While challenging at times, I can strongly recommend listening — Paul works on AI himself and has an unusually thought-through view of how it will change the world. This is now the top resource I'm going to refer people to if they're interested in positively shaping the development of AI, and want to understand the problem better. Even though I'm familiar with Paul's writing, I felt I was learning a great deal and am now in a better position to make a difference to the world. A few of the topics we cover are: • Why Paul expects AI to transform the world gradually rather than explosively and what that would look like • Several concrete methods OpenAI is trying to develop to ensure AI systems do what we want even if they become more competent than us • Why AI systems will probably be granted legal and property rights • How an advanced AI that doesn't share human goals could still have moral value • Why machine learning might take over science research from humans before it can do most other tasks • In which decade we should expect human labour to become obsolete, and how this should affect your savings plan. • Links to learn more, summary and full transcript. • Rohin Shah's AI alignment newsletter. Here's a situation we all regularly confront: you want to answer a difficult question, but aren't quite smart or informed enough to figure it out for yourself. The good news is you have access to experts who *are* smart enough to figure it out. The bad news is that they disagree. If given plenty of time — and enough arguments, counterarguments and counter-counter-arguments between all the experts — should you eventually be able to figure out which is correct? What if one expert were deliberately trying to mislead you? And should the expert with the correct view just tell the whole truth, or will competition force them to throw in persuasive lies in order to have a chance of winning you over? In other words: does 'debate', in principle, lead to truth? According to Paul Christiano — researcher at the machine learning research lab OpenAI and legendary thinker in the effective altruism and rationality communities — this question is of more than mere philosophical interest. That's because 'debate' is a promising method of keeping artificial intelligence aligned with human goals, even if it becomes much more intelligent and sophisticated than we are. It's a method OpenAI is actively trying to develop, because in the long term it wants to train AI systems to make decisions that are too complex for any human to grasp, but without the risks that arise from a complete loss of human oversight. If AI-1 is free to choose any line of argument in order to attack the ideas of AI-2, and AI-2 always seems to successfully defend them, it suggests that every possible line of argument would have been unsuccessful. But does that mean that the ideas of AI-2 were actually right? It would be nice if the optimal strategy in debate were to be completely honest, provide good arguments, and respond to counterarguments in a valid way. But we don't know that's the case. The 80,000 Hours Podcast is produced by Keiran Harris.
Rebroadcast: this episode was originally released in May 2018. Joseph Stalin had a life-extension program dedicated to making himself immortal. What if he had succeeded? According to Bryan Caplan in episode #32, there’s an 80% chance that Stalin would still be ruling Russia today. Today’s guest disagrees. Like Stalin, he has eyes for his own immortality - including an insurance plan that will cover the cost of cryogenically freezing himself after he dies - and thinks the technology to achieve it might be around the corner. Fortunately for humanity though, that guest is probably one of the nicest people on the planet: Dr Anders Sandberg of Oxford University. Full transcript of the conversation, summary, and links to learn more. The potential availability of technology to delay or even stop ageing means this disagreement matters, so he has been trying to model what would really happen if both the very best and the very worst people in the world could live forever - among many other questions. Anders, who studies low-probability high-stakes risks and the impact of technological change at the Future of Humanity Institute, is the first guest to appear twice on the 80,000 Hours Podcast and might just be the most interesting academic at Oxford. His research interests include more or less everything, and bucking the academic trend towards intense specialization has earned him a devoted fan base. Last time we asked him why we don’t see aliens, and how to most efficiently colonise the universe. In today’s episode we ask about Anders’ other recent papers, including: • Is it worth the money to freeze your body after death in the hope of future revival, like Anders has done? • How much is our perception of the risk of nuclear war biased by the fact that we wouldn’t be alive to think about it had one happened? • If biomedical research lets us slow down ageing, would culture stagnate under the crushing weight of centenarians? • What long-shot drugs can people take in their 70s to stave off death? • Can science extend human (waking) life by cutting our need to sleep? • How bad would it be if a solar flare took down the electricity grid? Could it happen? • If you’re a scientist and you discover something exciting but dangerous, when should you keep it a secret and when should you share it? • Will lifelike robots make us more inclined to dehumanise one another? Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
Rebroadcast: this episode was originally released in January 2018. Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents of universal democracy and international cooperation. He also thought that women have no place in civil society, that it was okay to kill illegitimate children, and that there was a ranking in the moral worth of different races. Throughout history we’ve consistently believed, as common sense, truly horrifying things by today’s standards. According to University of Oxford Professor Will MacAskill, it’s extremely likely that we’re in the same boat today. If we accept that we're probably making major moral errors, how should we proceed? • Full transcript, key points & links to articles discussed in the show. If our morality is tied to common sense intuitions, we’re probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide. Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism (EA) community. In this interview we discuss a wide range of topics: • How would we go about a ‘long reflection’ to fix our moral errors? • Will’s forthcoming book on how to reason and act if you don't know which moral theory is correct. What are the practical implications of so-called ‘moral uncertainty’? • If we basically solve existential risks, what does humanity do next? • What are some of Will’s most unusual philosophical positions? • What are the best arguments for and against utilitarianism? • Given disagreements among philosophers, how much should we believe the findings of philosophy as a field? • What are some of the biases we should be aware of within academia? • What are some of the downsides of becoming a professor? • What are the merits of becoming a philosopher? • How does the media image of EA differ from the actual goals of the community? • What kinds of things would you like to see the EA community do differently? • How much should we explore potentially controversial ideas? • How focused should we be on diversity? • What are the best arguments against effective altruism? Get this episode by subscribing: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
Rebroadcast: this episode was originally released in October 2018. The barista gives you your coffee and change, and you walk away from the busy line. But you suddenly realise she gave you $1 less than she should have. Do you brush your way past the people now waiting, or just accept this as a dollar you’re never getting back? According to philosophy Professor Hilary Greaves - Director of Oxford University's Global Priorities Institute, which is hiring - this simple decision will completely change the long-term future by altering the identities of almost all future generations. How? Because by rushing back to the counter, you slightly change the timing of everything else people in line do during that day - including changing the timing of the interactions they have with everyone else. Eventually these causal links will reach someone who was going to conceive a child. By causing a child to be conceived a few fractions of a second earlier or later, you change the sperm that fertilizes their egg, resulting in a totally different person. So asking for that $1 has now made the difference between all the things that this actual child will do in their life, and all the things that the merely possible child - who didn't exist because of what you did - would have done if you decided not to worry about it. As that child's actions ripple out to everyone else who conceives down the generations, ultimately the entire human population will become different, all for the sake of your dollar. Will your choice cause a future Hitler to be born, or not to be born? Probably both! • Links to learn more, summary and full transcript. Some find this concerning. The actual long-term effects of your decisions are so unpredictable, it looks like you’re totally clueless about what's going to lead to the best outcomes. It might lead to decision paralysis - you won’t be able to take any action at all. Prof Greaves doesn’t share this concern for most real life decisions. If there’s no reasonable way to assign probabilities to far-future outcomes, then the possibility that you might make things better in completely unpredictable ways is more or less canceled out by the equally likely opposite possibility. But, if instead we’re talking about a decision that involves highly structured, systematic reasons for thinking there might be a general tendency of your action to make things better or worse -- for example if we increase economic growth -- Prof Greaves says that we don’t get to just ignore the unforeseeable effects. When there are complex arguments on both sides, it's unclear what probabilities you should assign to this or that claim. Yet, given its importance, whether you should take the action in question actually does depend on figuring out these numbers. So, what do we do? Today’s episode blends philosophy with an exploration of the mission and research agenda of the Global Priorities Institute: to develop the effective altruism movement within academia. We cover: • How controversial is the multiverse interpretation of quantum physics? • Given moral uncertainty, how should population ethics affect our real life decisions? • What are the consequences of cluelessness for those who base their donation advice on GiveWell-style recommendations? • How could reducing extinction risk be a good cause for risk-averse people? Get this episode by subscribing: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
What is it like to be you right now? You're seeing this text on the screen, smelling the coffee next to you, and feeling the warmth of the cup. There’s a lot going on in your head — your conscious experience. Now imagine beings that are identical to humans, but for one thing: they lack this conscious experience. If you spill your coffee on them, they’ll jump like anyone else, but inside they'll feel no pain and have no thoughts: the lights are off. The concept of these so-called 'philosophical zombies' was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic 'trolley problem': "Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?" Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is much reduced or absent entirely. So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different. • Links to learn more, summary and full transcript. • Advice on how to read our advice. • Anonymous answers on: bad habits, risk and failure. Instead of zombies he asks us to consider 'Vulcans', who can see and hear and reflect on the world around them, but are incapable of experiencing pleasure or pain. Now imagine a further trolley problem: suppose you have a normal human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human? Dave firmly believes the answer is no, and if he's right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself. Dave is one of the world's top experts on the philosophy of consciousness. He helped return the question 'what is consciousness?' to the centre stage of philosophy with his 1996 book 'The Conscious Mind', which argued against then-dominant materialist theories of consciousness. This comprehensive interview, at over four hours long, outlines each contemporary theory of consciousness, what they have going for them, and their likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an 'illusion', to panpsychism, according to which it's a fundamental physical property present in all matter. These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious our treatment of them could already be an atrocity. If computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything? Dave Chalmers is probably the best person on the planet to ask these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode, and our personal favourite so far. Get this episode by subscribing to our show on the world’s most pressing problems and how to solve them: search for 80,000 Hours in your podcasting app. 
Producer: Keiran Harris.
In 1989, the professor of moral philosophy Peter Singer was all over the news for his inflammatory opinions about abortion. But the controversy stemmed from Practical Ethics — a book he’d actually released way back in 1979. It took a German translation ten years on for protests to kick off. According to Singer, he honestly didn’t expect this view to be as provocative as it became, and he certainly wasn’t aiming to stir up trouble and get attention. But after the protests and the increasing coverage of his work in German media, the previously flat sales of Practical Ethics shot up. And the negative attention he received ultimately led him to a weekly opinion column in The New York Times. • Singer's book The Life You Can Save has just been re-released as a 10th anniversary edition, available as a free e-book and audiobook, read by a range of celebrities. Get it here. • Links to learn more, summary and full transcript. Singer points out that as a result of this increased attention, many more people also read the rest of the book — which includes chapters with a real ability to do good, covering global poverty, animal ethics, and other important topics. So should people actively try to court controversy with one view, in order to gain attention for another more important one? Perhaps sometimes, but controversy can also just have bad consequences. His critics may view him as someone who says whatever he thinks, hang the consequences, but Singer says that he gives public relations considerations plenty of thought. One example is that Singer opposes efforts to advocate for open borders. Not because he thinks a world with freedom of movement is a bad idea per se, but rather because it may help elect leaders like Mr Trump. Another is the focus of the effective altruism community. Singer certainly respects those who are focused on improving the long-term future of humanity, and thinks this is important work that should continue. But he’s troubled by the possibility of extinction risks becoming the public face of the movement. He suspects there's a much narrower group of people who are likely to respond to that kind of appeal, compared to those who are drawn to work on global poverty or preventing animal suffering. And that to really transform philanthropy and culture more generally, the effective altruism community needs to focus on smaller donors with more conventional concerns. Rob is joined in this interview by Arden Koehler, the newest addition to the 80,000 Hours team, both for the interview and a post-episode discussion. They only had an hour with Peter, but also cover: • What does he think are the most plausible alternatives to consequentialism? • Is it more humane to eat wild-caught animals than farmed animals? • The re-release of The Life You Can Save • His most and least strategic career decisions • Population ethics, and other arguments for and against prioritising the long-term future • What led him to change his mind on significant questions in moral philosophy? • And more. In the post-episode discussion, Rob and Arden continue talking about: • The pros and cons of keeping EA as one big movement • Singer’s thoughts on immigration • And consequentialism with side constraints. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the linked transcript. Producer: Keiran Harris. Audio mastering: Ben Cordell. Transcriptions: Zakee Ulhaq. Illustration of Singer: Matthias Seifarth.
"…it started when the Soviet Union fell apart and there was a real desire to ensure security of nuclear materials and pathogens, and that scientists with [WMD-related] knowledge could get paid so that they wouldn't go to countries and sell that knowledge." Ambassador Bonnie Jenkins has had an incredible career in diplomacy and global security. Today she’s a nonresident senior fellow at the Brookings Institution and president of Global Connections Empowering Global Change, where she works on global health, infectious disease and defence innovation. In 2017 she founded her own nonprofit, the Women of Color Advancing Peace, Security and Conflict Transformation (WCAPS). But in this interview we focus on her time as Ambassador at the U.S. Department of State under the Obama administration, where she worked for eight years as Coordinator for Threat Reduction Programs in the Bureau of International Security and Nonproliferation. In that role, Bonnie coordinated the Department of State’s work to prevent weapons of mass destruction (WMD) terrorism with programmes funded by other U.S. departments and agencies, and as well as other countries. • Links to learn more, summary and full transcript. • Talks from over 100 other speakers at EA Global. • Having trouble with podcast 'chapters' on this episode? Please report any problems to keiran at 80000hours dot org. What was it like to be an ambassador focusing on an issue, rather than an ambassador of a country? Bonnie says the travel was exhausting. She could find herself in Africa one week, and Indonesia the next. She’d meet with folks going to New York for meetings at the UN one day, then hold her own meetings at the White House the next. Each event would have a distinct purpose. For one, she’d travel to Germany as a US Representative, talking about why the two countries should extend their partnership. For another, she could visit the Food and Agriculture Organization to talk about why they need to think more about biosecurity issues. No day was like the previous one. Bonnie was also a leading U.S. official in the launch and implementation of the Global Health Security Agenda discussed at length in episode 27. Before returning to government in 2009, Bonnie served as program officer for U.S. Foreign and Security Policy at the Ford Foundation. She also served as counsel on the 9/11 Commission. Bonnie was the lead staff member conducting research, interviews, and preparing commission reports on counterterrorism policies in the Office of the Secretary of Defense and on U.S. military plans targeting al-Qaeda before 9/11. And as if that all weren't curious enough four years ago Bonnie decided to go vegan. We talk about her work so far as well as: • How listeners can start a career like hers • Mistakes made by Mr Obama and Mr Trump • Networking, the value of attention, and being a vegan in DC • And 2020 Presidential candidates. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
November 3 2020, 10:32PM: CNN, NBC, and FOX report that Donald Trump has narrowly won Florida, and with it, re-election. November 3 2020, 11:46PM: The NY Times and Wall Street Journal report that some group has successfully hacked electronic voting systems across the country, including Florida. The malware has spread to tens of thousands of machines and deletes any record of its activity, so the returning officer of Florida concedes they actually have no idea who won the state — and don't see how they can figure it out. What on Earth happens next? Today’s guest — world-renowned computer security expert Bruce Schneier — thinks this scenario is plausible, and the ensuing chaos would sow so much distrust that half the country would never accept the election result. Unfortunately the US has no recovery system for a situation like this, unlike parliamentary democracies, which can just rerun the election a few weeks later. • Links to learn more, summary and full transcript. • Motivating article: Information security careers for global catastrophic risk reduction by Zabel and Muehlhauser The Constitution says the state legislature decides, and they can do so however they like; one tied local election in Texas was settled by playing a hand of poker. Elections serve two purposes. The first is the obvious one: to pick a winner. The second, but equally important, is to convince the loser to go along with it — which is why hacks often focus on convincing the losing side that the election wasn't fair. Schneier thinks there's a need to agree on how this situation should be handled before something like it happens, and America falls into severe infighting as everyone tries to turn the situation to their political advantage. And to fix our voting systems, we urgently need two things: a voter-verifiable paper ballot and risk-limiting audits. According to Schneier, computer security experts look at current electronic voting machines and can barely believe their eyes. But voting machine designers never understand the security weaknesses of what they're designing, because they have a bureaucrat's rather than a hacker's mindset. The ideal computer security expert walks into a shop and thinks, "You know, here's how I would shoplift." They automatically see where the cameras are, whether there are alarms, and where the security guards aren't watching. In this episode we discuss this hacker mindset, and how to use a career in security to protect democracy and guard dangerous secrets from people who shouldn't get access to them. We also cover: • How can we have surveillance of dangerous actors, without falling back into authoritarianism? • When if ever should information about weaknesses in society's security be kept secret? • How secure are nuclear weapons systems around the world? • How worried should we be about deep-fakes? • Schneier’s critiques of blockchain technology • How technologists can be vital in shaping policy • What are the most consequential computer security problems today? • Could a career in information security be very useful for reducing global catastrophic risks? • And more. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the linked transcript. The 80,000 Hours Podcast is produced by Keiran Harris.
Today's episode is a compilation of interviews I recently recorded for two other shows, Love Your Work and The Neoliberal Podcast. If you've listened to absolutely everything on this podcast feed, you'll have heard four interviews with me already, but fortunately I don't think these two include much repetition, and I've gotten a decent amount of positive feedback on both. First up, I speak with David Kadavy on his show, Love Your Work. This is a particularly personal and relaxed interview. We talk about all sorts of things, including nicotine gum, plastic straw bans, whether recycling is important, how many lives a doctor saves, why interviews should go for at least 2 hours, how athletes doping could be good for the world, and many other fun topics. • Our annual impact survey is about to close — I'd really appreciate it if you could take 3–10 minutes to fill it out now. • The blog post about this episode. At some points we even actually discuss effective altruism and 80,000 Hours, but you can easily skip through those bits if they feel too familiar. The second interview is with Jeremiah Johnson on the Neoliberal Podcast. It starts 2 hours and 15 minutes into this recording. Neoliberalism in the sense used by this show is not the free market fundamentalism you might associate with the term. Rather it's a centrist or even centre-left view that supports things like social liberalism, multilateral international institutions, trade, high rates of migration, racial justice, inclusive institutions, financial redistribution, prioritising the global poor, market urbanism, and environmental sustainability. This is the more demanding of the two conversations, as listeners to that show have already heard of effective altruism, so we were able to get the best arguments Jeremiah could offer against focusing on improving the long-term future of the world. Jeremiah is more of a fan of donating to evidence-backed global health charities recommended by GiveWell, and does so himself. I appreciate him having done his homework and forcing me to do my best to explain how well my views can stand up to counterarguments. It was a challenge for me to paint the whole picture in the half hour we spent on longtermism, and I expect there are answers in there which will be fresh even for regular listeners. I hope you enjoy both conversations! Feel free to email me with any feedback. The 80,000 Hours Podcast is produced by Keiran Harris.
1. Fill out our annual impact survey here. 2. Find a great vacancy on our job board. 3. Learn about our key ideas, and get links to our top articles. 4. Join our newsletter for an email about what's new, every 2 weeks or so. 5. Or follow our pages on Facebook and Twitter. —— Once a year 80,000 Hours runs a survey to find out whether we've helped our users have a larger social impact with their life and career. We and our donors need to know whether our services, like this podcast, are helping people enough to continue them or scale them up, and it's only by hearing from you that we can make these decisions in a sensible way. So, if 80,000 Hours' podcast, job board, articles, headhunting, advising or other projects have somehow contributed to your life or career plans, please take 3–10 minutes to let us know how. You can also let us know where we've fallen short, which helps us fix problems with what we're doing. We've refreshed the survey this year, hopefully making it easier to fill out than in the past. We'll keep this appeal up for about two weeks, but if you fill it out now that means you definitely won't forget! Thanks so much, and talk to you again in a normal episode soon. — Rob
Historically, progress in the field of cryptography has had major consequences. It has changed the course of major wars, made it possible to do business on the internet, and enabled private communication between both law-abiding citizens and dangerous criminals. Could it have similarly significant consequences in future? Today's guest — Vitalik Buterin — is world-famous as the lead developer of Ethereum, a successor to the cryptographic-currency Bitcoin, which added the capacity for smart contracts and decentralised organisations. Buterin first proposed Ethereum at the age of 20, and by the age of 23 its success had likely made him a billionaire. At the same time, far from indulging hype about these so-called 'blockchain' technologies, he has been candid about the limited good accomplished by Bitcoin and other currencies developed using cryptographic tools — and the breakthroughs that will be needed before they can have a meaningful social impact. In his own words, *"blockchains as they currently exist are in many ways a joke, right?"* But Buterin is not just a realist. He's also an idealist, who has been helping to advance big ideas for new social institutions that might help people better coordinate to pursue their shared goals. Links to learn more, summary and full transcript. By combining theories in economics and mechanism design with advances in cryptography, he has been pioneering the new interdisciplinary field of 'cryptoeconomics'. Economist Tyler Cowen has observed that, "at 25, Vitalik appears to repeatedly rediscover important economics results from famous papers, without knowing about the papers at all." Along with previous guest Glen Weyl, Buterin has helped develop a model for so-called 'quadratic funding', which in principle could transform the provision of 'public goods'. That is, goods that people benefit from whether they help pay for them or not. Examples of goods that are fully or partially 'public goods' include sound decision-making in government, international peace, scientific advances, disease control, the existence of smart journalism, preventing climate change, deflecting asteroids headed to Earth, and the elimination of suffering. Their underprovision in part reflects the difficulty of getting people to pay for anything when they can instead free-ride on the efforts of others. Anything that could reduce this failure of coordination might transform the world. But these and other related proposals face major hurdles. They're vulnerable to collusion, might be used to fund scams, and remain untested at scale — not to mention that anything with a square root sign in it is going to struggle to achieve societal legitimacy. (A minimal sketch of the quadratic funding formula follows this entry.) Is the prize large enough to justify efforts to overcome these challenges? In today's extensive three-hour interview, Buterin and I cover: • What the blockchain has accomplished so far, and what it might achieve in the next decade; • Why many social problems can be viewed as a coordination failure to provide a public good; • Whether any of the ideas for decentralised social systems emerging from the blockchain community could really work; • His view of 'effective altruism' and 'long-termism'; • Why he is optimistic about 'quadratic funding', but pessimistic about replacing existing voting with 'quadratic voting'; • Why humanity might have to abandon living in cities; • And much more. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.
The 80,000 Hours Podcast is produced by Keiran Harris.
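Note: since the 'square root sign' remark above refers to a concrete rule, here is a minimal sketch of quadratic funding, written in Python purely for illustration. The formula (a project's total funding is the square of the sum of the square roots of its individual contributions, topped up from a matching pool) is from the 'Liberal Radicalism' proposal Buterin helped develop with Glen Weyl; the donor numbers and dollar amounts below are hypothetical.

    from math import sqrt

    def quadratic_funding(contributions):
        # Total funding = (sum of square roots of individual contributions)^2.
        # The matching pool covers the gap between that total and what was given directly.
        total = sum(sqrt(c) for c in contributions) ** 2
        subsidy = total - sum(contributions)
        return total, subsidy

    # Hypothetical example: broad support versus a single large donor.
    many_small_donors = [1.0] * 100    # 100 people give $1 each
    one_large_donor = [100.0]          # 1 person gives $100

    print(quadratic_funding(many_small_donors))  # (10000.0, 9900.0): heavily matched
    print(quadratic_funding(one_large_donor))    # (100.0, 0.0): no match at all

The same $100 attracts a large match when it reflects many people's preferences and none when it reflects one person's, which is why the mechanism looks promising for funding public goods, and the square root is exactly the feature Buterin worries may struggle to win societal legitimacy.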
Imagine that – one day – humanity dies out. At some point, many millions of years later, intelligent life might well evolve again. Is there any message we could leave that would reliably help them out? In his second appearance on the 80,000 Hours Podcast, machine learning researcher and polymath Paul Christiano suggests we try to answer this question with a related thought experiment: are there any messages we might want to send back to our ancestors in the year 1700 that would have made history likely to go in a better direction than it did? It seems there probably are. • Links to learn more, summary, and full transcript. • Paul's first appearance on the show in episode 44. • An out-take on decision theory. We could tell them hard-won lessons from history; mention some research questions we wish we'd started addressing earlier; hand over all the social science we have that fosters peace and cooperation; and at the same time steer clear of engineering hints that would speed up the development of dangerous weapons. But, as Christiano points out, even if we could satisfactorily figure out what we'd like to be able to tell our ancestors, that's just the first challenge. We'd need to leave the message somewhere that they could identify and dig up. While there are some promising options, this turns out to be remarkably hard to do, as anything we put on the Earth's surface quickly gets buried far underground. But even if we figure out a satisfactory message, and a way to ensure it's found, a civilization this far in the future won't speak any language like our own. And being another species, they presumably won't share as many fundamental concepts with us as humans from 1700. If we knew a way to leave them thousands of books and pictures in a material that wouldn't break down, would they be able to decipher what we meant to tell them, or would it simply remain a mystery? That's just one of many playful questions discussed in today's episode with Christiano — a frequent writer who's willing to brave questions that others find too strange or hard to grapple with. We also talk about why divesting a little bit from harmful companies might be more useful than I'd been thinking, whether creatine might make us a bit smarter, and whether carbon dioxide-filled conference rooms make us a lot stupider. Finally, we get a big update on progress in machine learning and efforts to make sure it's reliably aligned with our goals, which is Paul's main research project. He responds to the views that DeepMind's Pushmeet Kohli espoused in a previous episode, and we discuss whether we'd be better off if AI progress turned out to be most limited by algorithmic insights, or by our ability to manufacture enough computer processors. Some other issues that come up along the way include: • Are there any supplements people can take that make them think better? • What implications do our views on meta-ethics have for aligning AI with our goals? • Is there much of a risk that the future will contain anything optimised for causing harm? • An out-take about the implications of decision theory, which we decided was too confusing and confused to stay in the main recording. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.
From 1870 to 1950, the introduction of electricity transformed life in the US and UK, as people gained access to lighting, radio and a wide range of household appliances for the first time. Electricity turned out to be a general purpose technology that could help with almost everything people did. Some think this is the best historical analogy we have for how machine learning could alter life in the 21st century. In addition to massively changing everyday life, past general purpose technologies have also changed the nature of war. For example, when electricity was introduced to the battlefield, commanders gained the ability to communicate quickly with units in the field over great distances. How might international security be altered if the impact of machine learning reaches a similar scope to that of electricity? Today's guest — Helen Toner — recently helped found the Center for Security and Emerging Technology at Georgetown University to help policymakers prepare for disruptive technical changes that might threaten international peace. • Links to learn more, summary and full transcript • Philosophy is one of the hardest grad programs. Is it worth it, if you want to use ideas to change the world? by Arden Koehler and Will MacAskill • The case for building expertise to work on US AI policy, and how to do it by Niel Bowerman • AI strategy and governance roles on the job board Their first focus is machine learning (ML), a technology which allows computers to recognise patterns, learn from them, and develop 'intuitions' that inform their judgement about future cases. This is something humans do constantly, whether we're playing tennis, reading someone's face, diagnosing a patient, or figuring out which business ideas are likely to succeed. Sometimes these ML algorithms can seem uncannily insightful, and they're only getting better over time. Ultimately a wide range of different ML algorithms could end up helping us with all kinds of decisions, just as electricity wakes us up, makes us coffee, and brushes our teeth -- all in the first five minutes of our day. Rapid advances in ML, and the many prospective military applications, have people worrying about an 'AI arms race' between the US and China. Henry Kissinger and former Google CEO Eric Schmidt recently wrote that AI could "destabilize everything from nuclear détente to human friendships." Some politicians talk of classifying and restricting access to ML algorithms, lest they fall into the wrong hands. But if electricity is the best analogy, you could reasonably ask — was there an arms race in electricity in the 19th century? Would that have made any sense? And could someone have changed the course of history by changing who first got electricity and how they used it, or is that a fantasy? In today's episode we discuss the research frontier in the emerging field of AI policy and governance, how to have a career shaping US government policy, and Helen's experience living and studying in China. We cover: • Why immigration is the main policy area that should be affected by AI advances today. • Why talking about an 'arms race' in AI is premature. • How Bobby Kennedy may have positively affected the Cuban Missile Crisis. • Whether it's possible to become a China expert and still get a security clearance. • Can access to ML algorithms be restricted, or is that just not practical? • Whether AI could help stabilise authoritarian regimes.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
Have you ever been infuriated by a doctor's unwillingness to give you an honest, probabilistic estimate about what to expect? Or a lawyer who won't tell you the chances you'll win your case? Their behaviour is so frustrating because accurately predicting the future is central to every action we take. If we can't assess the likelihood of different outcomes we're in a complete bind, whether the decision concerns war and peace, work and study, or Black Mirror and RuPaul's Drag Race. Which is why the research of Professor Philip Tetlock is relevant for all of us each and every day. He has spent 40 years as a meticulous social scientist, collecting millions of predictions from tens of thousands of people, in order to figure out how good humans really are at foreseeing the future, and what habits of thought allow us to do better. Along with other psychologists, he identified that many ordinary people are attracted to a 'folk probability' that draws just three distinctions — 'impossible', 'possible' and 'certain' — and which leads to major systemic mistakes. But with the right mindset and training we can become capable of accurately discriminating between differences as fine as 56% as against 57% likely. • Links to learn more, summary and full transcript • The calibration training app • Sign up for the Civ-5 counterfactual forecasting tournament • A review of the evidence on good forecasting practices • Learn more about Effective Altruism Global In the aftermath of Iraq and WMDs, the US intelligence community hired him to prevent the same ever happening again, and his guide — Superforecasting: The Art and Science of Prediction — became a bestseller back in 2014. That was five years ago. In today's interview, Tetlock explains how his research agenda continues to advance, today using the game Civilization 5 to see how well we can predict what would have happened in elusive counterfactual worlds we never get to see, and discovering how simple algorithms can complement or substitute for human judgement. We discuss how his work can be applied to your personal life to answer high-stakes questions, like how likely you are to thrive in a given career path, or whether your business idea will be a billion-dollar unicorn — or fall apart catastrophically. (To help you get better at figuring those things out, our site now has a training app developed by the Open Philanthropy Project and Clearer Thinking that teaches you to distinguish your '70 percents' from your '80 percents'.) We also bring up some tough methodological questions raised by the author of a recent review of the forecasting literature. And we find out what jobs people can take to make a profession of improving the reasonableness of decision-making in the major institutions that shape the world, as Tetlock has done over many decades. We view Tetlock's work as so core to living well that we've brought him back for a second and longer appearance on the show — his first was back in episode 15. Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.
It can often feel hopeless to be an activist seeking social change on an obscure issue where most people seem opposed or at best indifferent to you. But according to a new book by Professor Cass Sunstein, they shouldn't despair. Large social changes are often abrupt and unexpected, arising in an environment of seeming public opposition. The Communist Revolution in Russia spread so swiftly it confounded even Lenin. Seventy years later the Soviet Union collapsed just as quickly and unpredictably. In the modern era we have gay marriage, #metoo and the Arab Spring, as well as nativism, Euroskepticism and Hindu nationalism. How can a society that so recently seemed to support the status quo bring about change in years, months, or even weeks? Sunstein — co-author of Nudge, Obama White House official, and by far the most cited legal scholar of the late 2000s — aims to unravel the mystery and figure out the implications in his new book How Change Happens. He pulls together three phenomena which social scientists have studied in recent decades: preference falsification, variable thresholds for action, and group polarisation. If Sunstein is to be believed, together these are a cocktail for social shifts that are chaotic and fundamentally unpredictable. • Links to learn more, summary and full transcript. • 80,000 Hours Annual Review 2018. • How to donate to 80,000 Hours. In brief, people constantly misrepresent their true views, even to close friends and family. They themselves aren't quite sure how socially acceptable their feelings would have to become, before they revealed them, or joined a campaign for social change. And a chance meeting between a few strangers can be the spark that radicalises a handful of people, who then find a message that can spread their views to millions. According to Sunstein, it's "much, much easier" to create social change when large numbers of people secretly or latently agree with you. But 'preference falsification' is so pervasive that it's no simple matter to figure out when that's the case. In today's interview, we debate with Sunstein whether this model of cultural change is accurate, and if so, what lessons it has for those who would like to shift the world in a more humane direction. We discuss: • How much people misrepresent their views in democratic countries. • Whether the finding that groups with an existing view tend towards a more extreme position would stand up in the replication crisis. • When is it justified to encourage your own group to polarise? • Sunstein's difficult experiences as a pioneer of animal rights law. • Whether activists can do better by spending half their resources on public opinion surveys. • Should people be more or less outspoken about their true views? • What might be the next social revolution to take off? • How can we learn about social movements that failed and disappeared? • How to find out what people really think. Get this episode by subscribing to our podcast on the world’s most pressing problems: type 80,000 Hours into your podcasting app. Or read the transcript on our site. The 80,000 Hours Podcast is produced by Keiran Harris.
Comments (1)

Sam McMahon

A thought-provoking, refreshing podcast

Oct 26th