Towards Data Science

Author: The TDS team

Subscribed: 859 · Played: 15,408

Description

A Medium publication sharing concepts, ideas, and codes.
90 Episodes
Few would disagree that AI is set to become one of the most important economic and social forces in human history. But along with its transformative potential has come concern about a strange new risk that AI might pose to human beings. As AI systems become ever more capable of achieving their goals, some worry that even a slight misalignment between those goals and our own could be disastrous. These concerns are shared by many of the most knowledgeable and experienced AI specialists at leading labs and universities, including OpenAI, DeepMind, CHAI Berkeley and Oxford. But they're not universal: I recently had Melanie Mitchell — computer science professor and author who famously debated Stuart Russell on the topic of AI risk — on the podcast to discuss her objections to the AI catastrophe argument. And on this episode, we'll continue our exploration of the case for AI catastrophic risk skepticism with an interview with Oren Etzioni, CEO of the Allen Institute for AI, a world-leading AI research lab that's developed many well-known projects, including the popular AllenNLP library and Semantic Scholar. Oren has a unique perspective on AI risk, and the conversation was lots of fun!
How can you know that a super-intelligent AI is trying to do what you asked it to do? The answer, it turns out, is: not easily. And unfortunately, an increasing number of AI safety researchers are warning that this is a problem we're going to have to solve sooner rather than later if we want to avoid bad outcomes — which may include a species-level catastrophe. The failure mode in which AIs optimize for things other than what we asked them to is known in AI safety as an inner alignment failure. It's distinct from outer alignment failure, which is what happens when you ask your AI to do something that turns out to be dangerous, and inner alignment was only recognized by AI safety researchers as its own category of risk in 2019. The researcher who led that effort is my guest for this episode of the podcast, Evan Hubinger. Evan is an AI safety veteran who's done research at leading AI labs like OpenAI, and whose experience also includes stints at Google, Ripple and Yelp. He currently works at the Machine Intelligence Research Institute (MIRI) as a Research Fellow, and joined me to talk about his views on AI safety, the alignment problem, and whether humanity is likely to survive the advent of superintelligent AI.
When OpenAI announced the release of their GPT-3 API last year, the tech world was shocked. Here was a language model, trained only to perform a simple autocomplete task, which turned out to be capable of language translation, coding, essay writing, question answering and many other tasks that previously would each have required purpose-built systems. What accounted for GPT-3's ability to solve these problems? How did it beat state-of-the-art AIs that were purpose-built to solve tasks it was never explicitly trained for? Was it a brilliant new algorithm? Something deeper than deep learning? Well… no. As algorithms go, GPT-3 was relatively simple, and was built using a by-then fairly standard transformer architecture. Instead of a fancy algorithm, the real difference between GPT-3 and everything that came before was size: GPT-3 is a simple-but-massive, 175B-parameter model, about 10X bigger than the next largest AI system. GPT-3 is only the latest in a long line of results showing that scaling up simple AI techniques can give rise to new behavior and far greater capabilities. Together, these results have motivated a push toward AI scaling: the pursuit of ever larger AIs, trained with more compute on bigger datasets. But scaling is expensive: by some estimates, GPT-3 cost as much as $5M to train. As a result, only well-resourced companies like Google, OpenAI and Microsoft have been able to experiment with scaled models. That's a problem for independent AI safety researchers, who want to better understand how advanced AI systems work, and what their most dangerous behaviors might be, but who can't afford a $5M compute budget. That's why a recent paper by Andy Jones, an independent researcher specializing in AI scaling, is so promising: Andy's paper shows that, at least in some contexts, the capabilities of large AI systems can be predicted from those of smaller ones. If the result generalizes, it could give independent researchers the ability to run cheap experiments on small systems that nonetheless generalize to expensive, scaled AIs like GPT-3. Andy was kind enough to join me for this episode of the podcast.
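To make the extrapolation idea concrete, here's a minimal sketch of the kind of fit it relies on: train a family of small models, fit a simple curve to their performance, and read off a prediction for a much larger model. The numbers below are invented for illustration, and real scaling-law studies typically use richer functional forms (for example, a power law plus an irreducible-loss term) rather than the bare power law fitted here.

```python
import numpy as np

# Invented (parameter count, validation loss) pairs for a family of small models.
params = np.array([1e6, 3e6, 1e7, 3e7, 1e8])
loss = np.array([4.2, 3.8, 3.4, 3.1, 2.8])

# Fit log(loss) = slope * log(params) + intercept, i.e. loss ≈ exp(intercept) * params**slope.
slope, intercept = np.polyfit(np.log(params), np.log(loss), deg=1)

# Extrapolate two orders of magnitude beyond the largest model we could afford to train.
big = 1e10
predicted_loss = np.exp(intercept) * big ** slope
print(f"Predicted loss at {big:.0e} parameters: {predicted_loss:.2f}")
```

Whether an extrapolation like this actually holds up is exactly the kind of question Andy's cheap experiments on small systems are designed to probe.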
In 2016, OpenAI published a blog post describing the results of one of their AI safety experiments. In it, they describe how an AI that was trained to maximize its score in a boat racing game ended up discovering a strange hack: rather than completing the race circuit as fast as it could, the AI learned that it could rack up an essentially unlimited number of bonus points by looping around a series of targets, in a process that required it to ram into obstacles, and even travel in the wrong direction through parts of the circuit. This is a great example of the alignment problem: if we're not extremely careful, we risk training AIs that find dangerously creative ways to optimize whatever we tell them to optimize for. So building safe AIs — AIs that are aligned with our values — involves finding ways to very clearly and correctly quantify what we want our AIs to do. That may sound like a simple task, but it isn't: humans have struggled for centuries to define "good" metrics for things like economic health or human flourishing, with very little success. Today's episode of the podcast features Brian Christian — the bestselling author of several books on the connection between humanity and computer science & AI. His most recent book, The Alignment Problem, explores the history of alignment research, and the technical and philosophical questions that we'll have to answer if we're ever going to safely outsource our reasoning to machines. Brian's perspective on the alignment problem links together many of the themes we've explored on the podcast so far, from AI bias and ethics to existential risk from AI.
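To get a feel for how this kind of misspecification plays out, here's a toy sketch (an invented illustration, not OpenAI's actual environment or agent): the intended objective is progress toward the finish line, but the reward the agent actually optimizes is bonus points from respawning targets, so a greedy reward-maximizer racks up points while making no progress at all.

```python
from dataclasses import dataclass

@dataclass
class BoatState:
    progress: int = 0  # distance along the race course: the objective we intended
    score: int = 0     # bonus points from targets: the reward we actually specified

def step(state: BoatState, action: str) -> BoatState:
    if action == "advance":       # head toward the finish line
        return BoatState(state.progress + 1, state.score)
    if action == "loop_targets":  # circle back through respawning point targets
        return BoatState(state.progress, state.score + 10)
    return state

state = BoatState()
for _ in range(100):
    # A greedy policy picks whichever action yields more of the *specified* reward.
    best = max(("advance", "loop_targets"), key=lambda a: step(state, a).score)
    state = step(state, best)

print(f"score={state.score}, progress={state.progress}")  # large score, zero progress
```

The gap between `progress` (what we wanted) and `score` (what we measured) is the whole problem in miniature.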
We all value privacy, but most of us would struggle to define it. And there's a good reason for that: the way we think about privacy is shaped by the technology we use. As new technologies emerge that allow us to trade data for services, or to pay for privacy in different forms, our expectations shift and privacy standards evolve. That shifting landscape makes privacy a moving target. The challenge of understanding and enforcing privacy standards isn't novel, but it has taken on new importance given the rapid progress of AI in recent years. Data that would have been useless just a decade ago — unstructured text and many types of images come to mind — is now a treasure trove of value. Should companies have the right to keep using data they collected back when its value was limited, now that it no longer is? Do companies have an obligation to provide maximum privacy without charging their customers directly for it? Privacy in AI is as much a philosophical question as a technical one, and to discuss it, I was joined by Eliano Marques, Executive VP of Data and AI at Protegrity, a company that specializes in privacy and data protection for large companies. Eliano has worked in data privacy for the last decade.
When OpenAI developed its GPT-2 language model in early 2019, they initially chose not to publish the algorithm, owing to concerns over its potential for malicious use, as well as the need for the AI industry to experiment with new, more responsible publication practices that reflect the increasing power of modern AI systems. This decision was controversial, and remains that way to some extent even today: AI researchers have historically enjoyed a culture of open publication and have defaulted to sharing their results and algorithms. But whatever your position may be on algorithms like GPT-2, it’s clear that at some point, if AI becomes arbitrarily flexible and powerful, there will be contexts in which limits on publication will be important for public safety. The issue of publication norms in AI is complex, which is why it’s a topic worth exploring with people who have experience both as researchers, and as policy specialists — people like today’s Towards Data Science podcast guest, Rosie Campbell. Rosie is the Head of Safety Critical AI at Partnership on AI (PAI), a nonprofit that brings together startups, governments, and big tech companies like Google, Facebook, Microsoft and Amazon, to shape best practices, research, and public dialogue about AI’s benefits for people and society. Along with colleagues at PAI, Rosie recently finished putting together a white paper exploring the current hot debate over publication norms in AI research, and making recommendations for researchers, journals and institutions involved in AI research.
Automated weapons mean fewer casualties, faster reaction times, and more precise strikes. They're a clear win for any country that deploys them, and you can see the appeal. But they also set up a classic prisoner's dilemma: once many nations have deployed them, humans no longer have to be persuaded to march into combat, and the barrier to starting a conflict drops significantly. The real risks that come from automated weapons systems like drones aren't always the obvious ones. Many of them take the form of second-order effects — the knock-on consequences that come from setting up a world where multiple countries have large automated forces. But what can we do about them? That's the question we'll be taking on during this episode of the podcast with Jakob Foerster, an early pioneer in multi-agent reinforcement learning and incoming faculty member at the University of Toronto. Jakob has been involved in the debate over weaponized drone automation for some time, and recently wrote an open letter to German politicians urging them to consider the risks associated with the deployment of this technology.
In December 1938, a frustrated nuclear physicist named Leo Szilard wrote a letter to the British Admiralty telling them that he had given up on his greatest invention — the nuclear chain reaction. "The idea of a nuclear chain reaction won't work. There's no need to keep this patent secret, and indeed there's no need to keep this patent too. It won't work." — Leo Szilard What Szilard didn't know when he licked the envelope was that, on that very same day, a research team in Berlin had just split the uranium atom for the very first time. Within a year, the research effort that would become the Manhattan Project was underway, and by 1945, the first atomic bomb was dropped on the Japanese city of Hiroshima. It was only four years later — barely a decade after Szilard had written off the idea as impossible — that the Soviet Union successfully tested its first atomic weapon, kicking off a global nuclear arms race that continues in various forms to this day. It's a surprisingly short jump from cutting-edge technology to global-scale risk. But although the nuclear story is a high-profile example of this kind of leap, it's far from the only one. Today, many see artificial intelligence as a class of technology whose development will lead to global risks — and as a result, as a technology that needs to be managed globally. In much the same way that international treaties have allowed us to reduce the risk of nuclear war, we may need global coordination around AI to mitigate its potential negative impacts. One of the world's leading experts on AI's global coordination problem is Nicolas Miailhe. Nicolas is the co-founder of The Future Society, a global nonprofit whose primary focus is encouraging responsible adoption of AI, and ensuring that countries around the world come to a common understanding of the risks associated with it. Nicolas is a veteran of the prestigious Harvard Kennedy School of Government, an appointed expert to the Global Partnership on AI, and an advisor to cities, governments and international organizations on AI policy.
We've recorded quite a few podcasts recently about the problems AI creates, or may create, now and in the future. We've talked about AI safety, alignment, bias and fairness. These are important topics, and we'll continue to discuss them, but I also think it's important not to lose sight of the value that AI and tools like it bring to the world in the here and now. So for this episode of the podcast, I spoke with Dr. Yan Li, a professor who studies data management and analytics, and the co-founder of Techies Without Borders, a nonprofit dedicated to using tech for humanitarian good. Yan has firsthand experience developing and deploying technical solutions in low-resource regions around the world, from Tibet to Haiti.
AI safety researchers are increasingly focused on understanding what AI systems want. That may sound like an odd thing to care about: after all, aren't we just programming AIs to want certain things by providing them with a loss function, or a number to optimize? Well, not necessarily. It turns out that AI systems can have incentives that aren't obvious from their initial programming. Twitter, for example, runs a recommender system whose job is nominally to figure out what tweets you're most likely to engage with. And while that might make you think it should be optimizing for matching tweets to people, another way Twitter can achieve its goal is by matching people to tweets — that is, making people easier to predict by nudging them towards simplistic and partisan views of the world. Some have argued that this is a key reason social media has had such a divisive impact on online political discourse. So the incentives of many current AIs already deviate from those of their programmers in significant ways — ways that are literally shaping society. But there's a bigger reason incentives matter: as AI systems develop more capabilities, inconsistencies between their incentives and our own will become more and more important. That's why my guest for this episode, Ryan Carey, has focused much of his research on identifying and controlling the incentives of AIs. Ryan is a former medical doctor, now pursuing a PhD in machine learning and doing research on AI safety at Oxford University's Future of Humanity Institute.
As AI systems have become more powerful, an increasing number of people have been raising the alarm about their potential long-term risks. As we've covered on the podcast before, many now argue that those risks could even extend to the annihilation of our species by superhuman AI systems that are slightly misaligned with human values. There's no shortage of authors, researchers and technologists who take this risk seriously — and they include prominent figures like Eliezer Yudkowsky, Elon Musk, Bill Gates, Stuart Russell and Nick Bostrom. And while I think the arguments for existential risk from AI are sound, and aren't widely enough understood, I also think that it's important to explore more skeptical perspectives. Melanie Mitchell is a prominent and important voice on the skeptical side of this argument, and she was kind enough to join me for this episode of the podcast. Melanie is the Davis Professor of Complexity at the Santa Fe Institute, a professor of computer science at Portland State University, and the author of Artificial Intelligence: A Guide for Thinking Humans — a book in which she explores arguments for AI existential risk through a critical lens. She's an active player in the existential risk conversation, and recently participated in a high-profile debate with Stuart Russell, arguing against his AI risk position.
Powered by Moore's Law and a cluster of related trends, technology has been improving at an exponential pace across many sectors. AI capabilities in particular have been growing at a dizzying pace, and it seems like every year brings us new breakthroughs that would have been unimaginable just a decade ago. GPT-3, AlphaFold and DALL-E were developed in the last 12 months — and all of this in a context where the leading machine learning model has been increasing in size tenfold every year for the last decade. To many, there's a sharp contrast between the breakneck pace of these advances and the rate at which the laws that govern technologies like AI evolve. Our legal systems are chock-full of outdated laws, and politicians and regulators often seem almost comically behind the technological curve. But while there's no question that regulators face an uphill battle in trying to keep up with a rapidly changing tech landscape, my guest today thinks they have a good shot at doing so — as long as they start to think about the law a bit differently. His name is Josh Fairfield, and he's a law and technology scholar and former director of R&D at pioneering edtech company Rosetta Stone. Josh has consulted with U.S. government agencies, including the White House Office of Technology and the Homeland Security Privacy Office, and literally wrote a book about the strategies policymakers can use to keep up with tech like AI.
Paradoxically, it may be easier to predict the far future of humanity than to predict our near future. The next fad, the next Netflix special, the next President — all are nearly impossible to anticipate. That's because they depend on so many trivial factors: the next fad could be triggered by a viral video someone filmed on a whim, and the same could be true of the next Netflix special or President, for that matter. But when it comes to predicting the far future of humanity, we might oddly be on more solid ground. That's not to say predictions can be made with confidence, but at least they can be made based on economic analysis and first-principles reasoning. And most of that analysis and reasoning points to one of two scenarios: either we attain heights we've never imagined as a species, or everything we care about gets wiped out in a cosmic-scale catastrophe. Few people have spent more time thinking about the possible endgame of human civilization than my guest for this episode of the podcast, Stuart Armstrong. Stuart is a Research Fellow at Oxford University's Future of Humanity Institute, where he studies the various existential risks that face our species, focusing most of his work specifically on risks from AI. Stuart is a fascinating and well-rounded thinker with a fresh perspective to share on just about everything you could imagine, and I highly recommend giving the episode a listen.
For the past decade, progress in AI has mostly been driven by deep learning — a field of research that draws inspiration directly from the structure and function of the human brain. By drawing an analogy between brains and computers, we've been able to build computer vision, natural language and other predictive systems that would have been inconceivable just ten years ago. But analogies work two ways. Now that we have self-driving cars and AI systems that regularly outperform humans at increasingly complex tasks, some are wondering whether reversing the usual approach — and drawing inspiration from AI to inform our approach to neuroscience — might be a promising strategy. This more mathematical approach to neuroscience is exactly what today's guest, Georg Northoff, is working on. Georg is a professor of neuroscience, psychiatry, and philosophy at the University of Ottawa, and as part of his work developing a more mathematical foundation for neuroscience, he's explored a unique and intriguing theory of consciousness that he thinks might serve as a useful framework for developing more advanced AI systems that will benefit human beings.
Most AI researchers are confident that we will one day create superintelligent systems — machines that can significantly outperform humans across a wide variety of tasks. If this ends up happening, it will pose some potentially serious problems. Specifically: if a system is superintelligent, how can we maintain control over it? That's the core of the AI alignment problem — the problem of aligning advanced AI systems with human values. A full solution to the alignment problem will have to involve at least two things. First, we'll have to know exactly what we want superintelligent systems to do, and make sure they don't misinterpret us when we ask them to do it (the "outer alignment" problem). But second, we'll have to make sure that those systems are genuinely trying to optimize for what we've asked them to do, and that they aren't trying to deceive us (the "inner alignment" problem). Achieving inner alignment and achieving superintelligence might seem like separate problems — and many think that they are. But in the last few years, AI researchers have been exploring a new family of strategies that some hope will allow us to achieve both superintelligence and inner alignment at the same time. Today's guest, Ethan Perez, is using these approaches to build language models that he hopes will form an important part of the superintelligent systems of the future. Ethan has done frontier research at Google, Facebook, and MILA, and is now working full-time on developing learning systems with generalization abilities that could one day exceed those of human beings.
There’s a minor mystery in economics that may suggest that things are about to get really, really weird for humanity. And that mystery is this: many economic models predict that, at some point, human economic output will become infinite. Now, infinities really don’t tend to happen in the real world. But when they’re predicted by otherwise sound theories, they tend to indicate a point at which the assumptions of these theories break down in some fundamental way. Often, that’s because of things like phase transitions: when gases condense or liquids evaporate, some of their thermodynamic parameters go to infinity — not because anything “infinite” is really happening, but because the equations that define a gas cease to apply when those gases become liquids and vice-versa. So how should we think of economic models that tell us that human economic output will one day reach infinity? Is it reasonable to interpret them as predicting a phase transition in the human economy — and if so, what might that transition look like? These are hard questions to answer, but they’re questions that my guest David Roodman, a Senior Advisor at Open Philanthropy, has thought about a lot. David has centered his investigations on what he considers to be a plausible culprit for a potential economic phase transition: the rise of transformative AI technology. His work explores a powerful way to think about how, and even when, transformative AI may change how the economy works in a fundamental way.
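To see where the "infinite output" prediction comes from, here's a minimal textbook-style calculation (a sketch of the general mechanism, not a reproduction of David's model). Suppose output grows with increasing returns, so that the growth rate rises faster than proportionally with output:

\[
\frac{dY}{dt} = k\,Y^{1+\varepsilon}, \qquad \varepsilon > 0
\quad\Longrightarrow\quad
Y(t) = \bigl(Y_0^{-\varepsilon} - \varepsilon k t\bigr)^{-1/\varepsilon},
\]

which diverges at the finite time \(t^{*} = Y_0^{-\varepsilon} / (\varepsilon k)\). Nothing literally becomes infinite, of course; the divergence is the model's way of saying its assumptions must stop holding before \(t^{*}\), which is exactly the kind of phase transition discussed above.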
As AI systems have become more ubiquitous, people have begun to pay more attention to their ethical implications. Those implications are potentially enormous: Google's search algorithm and Twitter's recommendation system each have the ability to meaningfully sway public opinion on just about any issue. As a result, Google's and Twitter's choices have an outsized impact — not only on their immediate user bases, but on society in general. That kind of power comes with the risk of intentional misuse (for example, Twitter might choose to boost tweets that express views aligned with its preferred policies). But while intentional misuse is an important issue, an equally challenging problem is avoiding unintentionally bad outputs from AI systems. Unintended behavior can take the form of biases that make algorithms perform better for some people than for others, or, more generally, of systems that optimize for things we actually don't want in the long run. For example, platforms like Twitter and YouTube have played an important role in the increasing polarization of their US (and worldwide) user bases. They never intended to do this, of course, but their effect on social cohesion is arguably the result of internal cultures based on narrow metric optimization: when you optimize for short-term engagement, you often sacrifice long-term user well-being. The unintended consequences of AI systems are hard to predict, almost by definition. But their potential impact makes them very much worth thinking and talking about — which is why I sat down with Margot Gerritsen, a Stanford professor, co-director of the Women in Data Science (WiDS) initiative, and host of the WiDS podcast, for this episode.
As we continue to develop more and more sophisticated AI systems, an increasing number of economists, technologists and futurists have been trying to predict the likely end point of all this progress. Will human beings be irrelevant? Will we offload all of our decisions — from what we want to do with our spare time to how we govern societies — to machines? And what does the emergence of highly capable and highly general AI systems mean for the future of democracy and governance? These questions are impossible to answer completely and directly, but it may be possible to get some hints by taking a long-term view of the history of human technological development. That's the strategy my guest, Ben Garfinkel, is applying in his research on the future of AI. Ben is a physicist and mathematician who now does research on forecasting risks from emerging technologies at Oxford's Future of Humanity Institute. Apart from his research on forecasting the future impact of technologies like AI, Ben has also spent time exploring some classic arguments for AI risk, many of which he disagrees with. Since we've had a number of guests on the podcast who do take these risks seriously, I thought it would be worth speaking to Ben about his views as well, and I'm very glad I did.
There's no question that AI ethics has received a lot of well-deserved attention lately. But ask the average person what AI ethics means, and you're as likely as not to get a blank stare. I think that's largely because every data science or machine learning problem comes with a unique ethical context, so it can be hard to pin down ethics principles that generalize to a wide class of AI problems. Fortunately, there are researchers who focus on just this issue — and my guest today, Sarah Williams, is one of them. Sarah is an associate professor of urban planning and the director of the Civic Data Design Lab at MIT's School of Architecture and Planning. Her job is to study applications of data science to urban planning, and to work with policymakers on applying AI in an ethical way. Through that process, she's distilled several generalizable AI ethics principles that have practical and actionable implications. This episode was a wide-ranging discussion about everything from the way our ideologies can colour our data analysis to the challenges governments face when trying to regulate AI.
The apparent absence of alien life in our universe has been a source of speculation and controversy in scientific circles for decades. If we assume that there's even a tiny chance that intelligent life might evolve on a given planet, it seems almost impossible to imagine that the cosmos isn't brimming with alien civilizations. So where are they? That's what Anders Sandberg calls the "Fermi Question": given the unfathomable size of the universe, why have we seen no signs of alien life? Anders is a researcher at the University of Oxford's Future of Humanity Institute, where he tries to anticipate the ethical, philosophical and practical questions that human beings are going to have to face as we approach what could be a technologically unbounded future. That work focuses to a great extent on superintelligent AI and the existential risks it might create. As part of that work, he's studied the Fermi Question in great detail, including what it implies about the scarcity of life and the value of the human species.