AXRP - the AI X-risk Research Podcast

38.0 - Zhijing Jin on LLMs, Causality, and Multi-Agent Systems

Published: 2024-11-14

Description

Do language models understand the causal structure of the world, or do they merely note correlations? And what happens when you build a big AI society out of them? In this brief episode, recorded at the Bay Area Alignment Workshop, I chat with Zhijing Jin about her research on these questions.

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

The transcript: https://axrp.net/episode/2024/11/14/episode-38_0-zhijing-jin-llms-causality-multi-agent-systems.html

FAR.AI: https://far.ai/

FAR.AI on X (aka Twitter): https://x.com/farairesearch

FAR.AI on YouTube: https://www.youtube.com/@FARAIResearch

The Alignment Workshop: https://www.alignment-workshop.com/

 

Topics we discuss, and timestamps:

00:35 - How the Alignment Workshop is going

00:47 - How Zhijing got interested in causality and natural language processing

03:14 - Causality and alignment

06:21 - Causality without randomness

10:07 - Causal abstraction

11:42 - Why LLM causal reasoning?

13:20 - Understanding LLM causal reasoning

16:33 - Multi-agent systems

 

Links:

Zhijing's website: https://zhijing-jin.com/fantasy/

Zhijing on X (aka Twitter): https://x.com/zhijingjin

Can Large Language Models Infer Causation from Correlation?: https://arxiv.org/abs/2306.05836

Cooperate or Collapse: Emergence of Sustainable Cooperation in a Society of LLM Agents: https://arxiv.org/abs/2404.16698

 

Episode art by Hamish Doodles: hamishdoodles.com
