Lost in the Middle: How Language Models Use Long Contexts

Updated: 2023-07-26
Description

Deep Papers is a podcast series featuring deep dives on today's seminal AI papers and research. Each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning. This episode, led by Sally-Ann DeLucia and Amber Roberts, discusses the paper "Lost in the Middle: How Language Models Use Long Contexts."

The paper examines how well language models use long input contexts, focusing on two tasks: multi-document question answering and key-value retrieval. The researchers find that performance is highest when the relevant information sits at the beginning or end of the context, and degrades significantly when the model must access information in the middle of a long context. Even models explicitly designed for long contexts show this drop as context length grows. The analysis sheds light on how language models use their input context and proposes new evaluation protocols for future long-context models.
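The key-value retrieval task the episode covers can be illustrated with a small synthetic benchmark: the model is shown a JSON object of random UUID pairs and asked to return the value for one "gold" key, whose position in the context is varied. The sketch below is a minimal, hypothetical reconstruction of that setup; the function name `make_kv_prompt` and its parameters are illustrative, not taken from the paper's code.

```python
import json
import random
import uuid

def make_kv_prompt(num_pairs: int, gold_position: int, seed: int = 0):
    """Build a synthetic key-value retrieval prompt: a JSON object of
    random UUID key-value pairs, with the queried ("gold") key placed
    at a chosen position in the context. Hypothetical sketch of the
    paper's described setup, not its actual harness."""
    rng = random.Random(seed)
    pairs = [
        (str(uuid.UUID(int=rng.getrandbits(128))),
         str(uuid.UUID(int=rng.getrandbits(128))))
        for _ in range(num_pairs)
    ]
    # Pick one pair as the gold pair, then re-insert it at the
    # position we want to probe (start, middle, or end).
    gold_key, gold_value = pairs.pop(rng.randrange(num_pairs))
    pairs.insert(gold_position, (gold_key, gold_value))
    context = json.dumps(dict(pairs), indent=1)
    prompt = (
        "Extract the value for the key below from the JSON object.\n"
        f"{context}\n"
        f'Key: "{gold_key}"\n'
        "Corresponding value:"
    )
    return prompt, gold_value

# Sweep the gold key from the start to the end of the context;
# the paper reports accuracy dipping when it sits in the middle.
for pos in (0, 37, 74):
    prompt, answer = make_kv_prompt(num_pairs=75, gold_position=pos)
```

Sending each prompt to a model and checking whether the completion matches `answer` yields an accuracy-versus-position curve, which is how the U-shaped "lost in the middle" pattern is measured.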

Full transcript and more here: https://arize.com/blog/lost-in-the-middle-how-language-models-use-long-contexts-paper-reading/

Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.

Arize AI