The technical, societal, and cultural challenges that come with the rise of fake media
Description
In this episode of the Data Show, I spoke with Siwei Lyu, associate professor of computer science at the University at Albany, State University of New York. Lyu is a leading expert in digital media forensics, a field of research into tools and techniques for analyzing the authenticity of media files. Over the past year, many stories have been written about the rise of tools for creating fake media (mainly images, video, and audio files). Researchers in digital image forensics haven’t exactly been standing still, though. As Lyu notes, advances in machine learning and deep learning have also found a receptive audience among the forensics community.
We had a great conversation spanning many topics, including:
- The many indicators used by forensic experts and forgery detection systems (a toy example follows this list)
- Balancing “open” research with the risks that come with it, including “tipping off” adversaries
- State-of-the-art detection tools today, and what the research community and funding agencies are working on over the next few years
- Technical, societal, and cultural challenges that come with the rise of fake media
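To make the first topic concrete, here is a minimal sketch of one classic forensic indicator, error level analysis (ELA); it is not a technique discussed in the episode, just an illustration. A JPEG is recompressed at a known quality level, and the residual against the original highlights regions whose compression history differs from the rest of the image, a common sign of splicing. The sketch assumes Pillow is installed, and the input filename is hypothetical.

```python
import io

from PIL import Image, ImageChops


def error_level_analysis(path, quality=90):
    """Return the residual between a JPEG and a recompressed copy of itself.

    Edited or spliced regions often recompress differently from the rest
    of the image, so they stand out in the residual.
    """
    original = Image.open(path).convert("RGB")
    # Recompress an in-memory copy at a known quality level.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")
    # Per-pixel difference; bright areas suggest inconsistent compression.
    return ImageChops.difference(original, recompressed)


if __name__ == "__main__":
    # "suspect.jpg" is a hypothetical input; any JPEG file works.
    residual = error_level_analysis("suspect.jpg")
    residual.save("ela_residual.png")
```

Real detection systems combine many such signals rather than relying on any single one.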
Here are some highlights from our conversation:
Imbalance between digital forensics researchers and forgers
In theory, it looks difficult to synthesize media. This is true, but on the other hand, there are factors working in the forgers’ favor. The first is the fact that most people working in forensics, like myself, usually just write a paper and publish it, so the details of our detection algorithms become available immediately. People making fake media, on the other hand, are usually secretive; they don’t publish the details of their algorithms. So, there’s an imbalance between the information on the forensic side and the forgery side.
The other issue is user habits. Even if some of the fakes are very low quality, a typical user looks at one for just a second; sees something interesting, exciting, or sensational; and helps distribute it without actually checking its authenticity. This helps fake media spread very, very fast. Even though we have algorithms to detect fake media, these tools are probably not fast enough to actually stop the spread.
… Then there are the actual incentives for this kind of work. For forensics, even if we have the tools and the time to catch a piece of fake media, we don’t get anything. But for people actually making fake media, there are financial and other incentives to do so.
Related resources:
- Supasorn Suwajanakorn on “Building artificial people: Endless possibilities and the dark side”
- Alyosha Efros on “Using computer vision to understand big visual data”
- “Overcoming barriers to AI adoption”
- “What is neural architecture search?”
- Alon Kaufman on “Machine learning on encrypted data”
- Sharad Goel and Sam Corbett-Davies on “Why it’s hard to design fair machine learning models”