The Quanta Podcast

When ChatGPT Broke an Entire Field

Updated: 2025-07-29

Description

The study of natural language processing, or NLP, dates back to the 1940s. It gave Stephen Hawking a voice, Siri a brain and social media companies another way to target us with ads. In less than five years, large language models broke NLP and made it anew.

In 2019, Quanta reported on a then-groundbreaking NLP system called BERT without once using the phrase “large language model.” A mere five and a half years later, LLMs are everywhere, igniting discovery, disruption and debate in whatever scientific community they touch. But the one they touched first — for better, worse and everything in between — was natural language processing. What did that impact feel like to the people experiencing it firsthand?

Recently, John Pavlus interviewed 19 current and former NLP researchers to tell that story. In this episode, Pavlus speaks with host and Quanta editor in chief Samir Patel about this oral history of “When ChatGPT Broke an Entire Field.”

Each week on The Quanta Podcast, Quanta Magazine editor in chief Samir Patel speaks with the people behind the award-winning publication to navigate through some of the most important and mind-expanding questions in science and math.




Quanta Magazine