The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez - #378

Update: 2020-05-25
Description

Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. 

Our main focus in the conversation is Joseph's paper "Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers," which explores compute-efficient training strategies based on model size.

We discuss the two main problems being solved: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, does that really improve efficiency? We also discuss the parallels between computer vision and NLP tasks, and how he characterizes both "larger" and "faster" in the paper.

Check out the complete show notes for this episode at twimlai.com/talk/378.

Sam Charrington