Chats #6: Adam Dziedzic: Difficulty of Defending Self-Supervised Learning Against Model Extraction

Update: 2023-05-23

Description

Adam Dziedzic is a Postdoctoral Fellow at the University of Toronto and the Vector Institute, advised by Prof. Nicolas Papernot. His research focuses on secure and trustworthy machine learning, especially model stealing attacks and defenses, as well as collaborative machine learning. Adam completed his Ph.D. at the University of Chicago, advised by Prof. Sanjay Krishnan, where he worked on input and model compression for adaptive and robust neural networks. He obtained his Bachelor's and Master's degrees from the Warsaw University of Technology and also studied at the Technical University of Denmark and EPFL. He has worked at CERN, Barclays Investment Bank, Microsoft Research, and Google.

Connect with Adam: https://www.linkedin.com/in/adziedzic/.

Find all episodes of the TrainCheck Podcast at traincheck.ai.


Matt Faltyn