How to secure AI systems
Updated: 2023-02-09
Description
With so many artificial systems claiming "intelligence" available to the public, making sure they do what they're designed to do is of the utmost importance. Dr. Bruce Draper, Program Manager of the Information Innovation Office at DARPA, joins us on this bonus episode of Deep Dive: AI to unpack his work in the field and his current role. We have a fascinating chat with Draper about the risks and opportunities in this exciting field, and why growing bigger and more involved Open Source communities is better for everyone. Draper introduces us to the Guaranteeing AI Robustness Against Deception (GARD) Project and its main short-term goals, and explains how these aim to mitigate exposure to danger while we explore the possibilities that machine learning offers. We also spend time discussing the agency's Open Source philosophy and foundation, the AI boom in recent years, why policy making is so critical, the split between academic and corporate contributions, and much more. For Draper, community involvement is critical to spotting potential issues and threats. Tune in to hear it all from this exceptional guest! Read the full transcript.
Special thanks to volunteer producer, Nicole Martinelli. Music by Jason Shaw, Audionautix.
This podcast is sponsored by GitHub, DataStax and Google.
No sponsor had any right or opportunity to approve or disapprove the content of this podcast.
Key points from this episode:
- The objectives of the GARD project and DARPA's broader mission.
- How the Open Source model plays into the research strategy at DARPA.
- Differences between machine learning and more traditional IT systems.
- Draper talks about his ideas for ideal communities and the role of stakeholders.
- Key factors behind the 'extended summer of AI' we have been experiencing.
- Getting involved in the GARD Project and how the community makes the systems more secure.
- The main impetus for the AI community to address these security concerns.
- Draper explains the complications of safety-critical AI systems.
- Deployment opportunities and concurrent development for optimum safety.
- Thoughts on the scope and role of policy makers in the AI security field.
- The need for a deeper theoretical understanding of possible and present threats.
- Draper talks about the broader goal of a self-sustaining Open Source community.
- Plotting the future role and involvement of DARPA in the community.
- The partners that DARPA works with: academic and corporate.
- The story of how Draper got involved with the GARD Project and adversarial AI.
- Looking at the near future for Draper and DARPA.
- Reflections on the last few years in AI and how much of this could have been predicted.
Links mentioned in this episode:
- Dr. Bruce Draper
- DARPA
- Moderna
- ChatGPT
- DALL-E
- Adversarial Robustness Toolbox
- GARD Project
- Carnegie Mellon University
- Embedded Intelligence
- IBM
- Intel Federal LLC
- Johns Hopkins University
- MIT
- Toyota Technological Institute at Chicago
- Two Six Technologies
- University of Central Florida
- University of Maryland
- University of Wisconsin
- USC Information Sciences Institute
- Google Research
- MITRE