The Information Bottleneck
EP12: Adversarial attacks and compression with Jack Morris

Updated: 2025-11-03

Description

In this episode of the Information Bottleneck Podcast, we host Jack Morris, a PhD student at Cornell, to discuss adversarial examples (Jack created TextAttack, one of the first software frameworks for adversarial attacks on NLP models), the Platonic representation hypothesis, the implications of inversion techniques, and the role of compression in language models.

Links:

Jack's Website - https://jxmo.io/

TextAttack - https://arxiv.org/abs/2005.05909

How much do language models memorize? - https://arxiv.org/abs/2505.24832

DeepSeek-OCR - https://www.arxiv.org/abs/2510.18234

Chapters:

00:00 Introduction and AI News Highlights

04:53 The Importance of Fine-Tuning Models

10:01 Challenges in Open Source AI Models

14:34 The Future of Model Scaling and Sparsity

19:39 Exploring Model Routing and User Experience

24:34 Jack's Research: TextAttack and Adversarial Examples

29:33 The Platonic Representation Hypothesis

34:23 Implications of Inversion and Security in AI

39:20 The Role of Compression in Language Models

44:10 Future Directions in AI Research and Personalization


Hosts: Ravid Shwartz-Ziv & Allen Roush