The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Mixture-of-Experts and Trends in Large-Scale Language Modeling with Irwan Bello - #569

Update: 2022-04-25
Description

Today we’re joined by Irwan Bello, formerly a research scientist at Google Brain and now on the founding team at a stealth AI startup. We begin our conversation with an exploration of Irwan’s recent paper, Designing Effective Sparse Expert Models, which acts as a design guide for building sparse large language model architectures. We discuss mixture-of-experts as a technique, the scalability of this method, its applicability beyond NLP tasks, and the datasets this work was benchmarked against. We also explore Irwan’s interest in the research areas of alignment and retrieval, talking through interesting lines of work in each area, including instruction tuning and direct alignment.
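For listeners unfamiliar with the mixture-of-experts technique discussed in the episode, here is a minimal sketch of sparse top-1 routing (Switch-style gating). All names, shapes, and parameters below are illustrative assumptions for exposition, not taken from the episode or the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, n_tokens = 8, 4, 6

# Illustrative parameters (random here; a real model learns these).
router_w = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """Top-1 sparse MoE: each token is sent to a single expert,
    scaled by the router's softmax probability for that expert."""
    logits = x @ router_w                                   # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)              # softmax over experts
    chosen = probs.argmax(axis=-1)                          # one expert index per token
    out = np.empty_like(x)
    for i, e in enumerate(chosen):
        # Only one expert's weights are applied per token -- this sparsity
        # is what lets total parameter count grow without growing per-token FLOPs.
        out[i] = probs[i, e] * (x[i] @ experts[e])
    return out, chosen

x = rng.normal(size=(n_tokens, d_model))
y, chosen = moe_layer(x)
print(y.shape)   # (6, 8): same shape as the input, each token handled by one expert
```

The sparsity is the point: adding experts increases model capacity while the compute per token stays roughly that of a single expert.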

The complete show notes for this episode can be found at twimlai.com/go/569

Hosted by Sam Charrington