Scaling Multi-Modal Generative AI with Luke Zettlemoyer - #650
Update: 2023-10-09

Description

Today we’re joined by Luke Zettlemoyer, professor at the University of Washington and a research manager at Meta. In our conversation with Luke, we cover multimodal generative AI, the effect of data on models, and the significance of open source and open science. We explore the grounding problem, the need for visual grounding and embodiment in text-based models, the advantages of discrete tokenization in image generation, and his paper Scaling Laws for Generative Mixed-Modal Language Models, which focuses on simultaneously training LLMs on various modalities. Additionally, we cover his papers Self-Alignment with Instruction Backtranslation and LIMA: Less Is More for Alignment.


The complete show notes for this episode can be found at twimlai.com/go/650.


Sam Charrington