
LLM Inference Speed (Tech Deep Dive)

Updated: 2023-10-06

Description

In this tech talk, we dive deep into the technical specifics around LLM inference.

The big questions: Why are LLMs slow? How can they be made faster? And might slow inference affect UX in the next generation of AI-powered software?


We jump into:

  • Is fast model inference the real moat for LLM companies?
  • What are the implications of slow model inference on the future of decentralized and edge model inference?
  • As demand rises, what will the latency/throughput tradeoff look like?
  • What innovations on the horizon might massively speed up model inference?

Daniel Reid Cahn