The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao - #668

Update: 2024-01-22
Description

Today we’re joined by Ben Zhao, Neubauer Professor of Computer Science at the University of Chicago. In our conversation, we explore his research at the intersection of security and generative AI. We focus on Ben’s recent Fawkes, Glaze, and Nightshade projects, which use “poisoning” approaches to protect users against AI encroachment. The first tool we discuss, Fawkes, imperceptibly “cloaks” images so that models perceive them as highly distorted, effectively shielding individuals from facial recognition models. We then dig into Glaze, a tool that uses machine learning algorithms to compute subtle alterations that are indiscernible to human eyes but trick models into perceiving a significant shift in art style, giving artists a unique defense against style mimicry. Lastly, we cover Nightshade, a “poison pill” defense that lets artists apply imperceptible changes to their images that effectively “break” generative AI models trained on them.
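The common thread in these tools is a small, bounded perturbation chosen by optimization so that a human sees essentially the same image while a model's feature representation shifts dramatically. As a loose illustration only (not the actual Fawkes, Glaze, or Nightshade algorithms, which attack deep feature extractors), the sketch below uses a toy linear "feature extractor" and projected gradient descent to nudge an image toward a decoy target within an imperceptibility budget; all names and the linear model are illustrative assumptions.

```python
def cloak(image, weights, target, eps=0.05, steps=50, lr=0.01):
    """Toy sketch of the 'cloaking' idea: add a bounded perturbation
    so a model's features drift toward a decoy target.

    image   -- list of pixel values
    weights -- rows of a toy linear feature extractor f(x) = W x
              (a stand-in for the deep extractors real tools attack)
    target  -- decoy feature vector the perturbed image should mimic
    eps     -- per-pixel perturbation budget ("imperceptibility")
    """
    n = len(image)
    m = len(weights)
    delta = [0.0] * n
    for _ in range(steps):
        x = [image[j] + delta[j] for j in range(n)]
        feats = [sum(w[j] * x[j] for j in range(n)) for w in weights]
        err = [feats[i] - target[i] for i in range(m)]
        # gradient of 0.5 * ||f(x + delta) - target||^2 w.r.t. delta
        grad = [sum(weights[i][j] * err[i] for i in range(m))
                for j in range(n)]
        for j in range(n):
            delta[j] -= lr * grad[j]                  # step toward decoy
            delta[j] = max(-eps, min(eps, delta[j]))  # stay within budget
    # (a real image pipeline would also clip pixels to a valid range)
    return [image[j] + delta[j] for j in range(n)]
```

The `eps` box constraint is what keeps the change invisible to people: each pixel moves by at most a tiny amount, yet the accumulated shift in feature space can be large enough to fool the downstream model.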


The complete show notes for this episode can be found at twimlai.com/go/668.


Sam Charrington