Approximately Correct: An AI Podcast from Amii

The Future of LLMs: Smaller, Faster, Smarter

Updated: 2025-02-18

Description

Discover the secret to training AI with less data! 

On this episode of Approximately Correct, we talk with Amii Fellow and Canada CIFAR AI Chair Lili Mou about the challenges of training large language models and how his research on Flora addresses their memory footprint.




Amii - Alberta Machine Intelligence Institute