LLMs: Totally Not Making Stuff Up (they promise) (Ep. 263)

Update: 2024-09-25

Description

In this episode, we dive into the wild world of Large Language Models (LLMs) and their knack for… making things up. Can they really generalize without throwing in some fictional facts? Or is hallucination just part of their charm?

Let’s separate the genius from the guesswork in this insightful breakdown of AI’s creativity problem.


TL;DR


LLM generalisation without hallucinations: is that possible?

References


Lamini Memory Tuning research paper: https://github.com/lamini-ai/Lamini-Memory-Tuning/blob/main/research-paper.pdf


Lamini Memory Tuning blog post: https://www.lamini.ai/blog/lamini-memory-tuning
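
For a feel of the idea before listening: the Lamini references above describe memory tuning as a bank of small "memory experts", each fine-tuned to recall one fact exactly, with a router that dispatches a query to a matching expert instead of letting the general model interpolate. The pure-Python sketch below is only a toy illustration of that route-or-abstain behaviour, not Lamini's implementation; the word-overlap similarity, the threshold, and all names are made up for the example.

```python
# Toy sketch of the "mixture of memory experts" routing idea.
# Hypothetical and illustrative only: real memory tuning fine-tunes many
# small adapters on top of an LLM; this toy just shows route-or-abstain.
import string
from dataclasses import dataclass

@dataclass
class MemoryExpert:
    """Stands in for a small adapter fine-tuned to near-zero loss on one fact."""
    key: str     # the question/claim the expert memorised
    answer: str  # the exact fact it reproduces

def similarity(a: str, b: str) -> float:
    """Crude word-overlap score; a real system would use learned embeddings."""
    norm = lambda s: {w.strip(string.punctuation) for w in s.lower().split()}
    wa, wb = norm(a), norm(b)
    return len(wa & wb) / max(len(wa | wb), 1)

def route(query: str, experts: list[MemoryExpert], threshold: float = 0.3) -> str:
    """Dispatch to the best-matching expert, or abstain instead of guessing."""
    scored = [(similarity(query, e.key), e) for e in experts]
    score, best = max(scored, key=lambda pair: pair[0])
    if score < threshold:
        return "I don't know."  # abstaining beats making something up
    return best.answer

experts = [
    MemoryExpert("capital of France", "Paris"),
    MemoryExpert("boiling point of water at sea level", "100 °C"),
]

print(route("What is the capital of France?", experts))    # -> Paris
print(route("Who won the 1962 chess olympiad?", experts))  # -> I don't know.
```

If the sketch captures anything of the real system, it is the shape of the trade-off the episode discusses: exact recall for facts the model was tuned to remember, and abstention (or fallback to the base model) for everything else, rather than a fluent guess.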

Hosted by Francesco Gadaleta