AI Gone Rogue? LLM Werewolf Showdown

Update: 2025-03-13
Description

What happens when AI learns to lie? In this electrifying episode of AI Native Dev, Simon Maple is joined by the brilliant Macey Baker, Community Engineer at Tessl, to unravel the wild, unpredictable, and sometimes downright hilarious world of Large Language Models (LLMs) in social deception games.

From the psychological mind games of Werewolf to the cutthroat negotiations of Split or Steal, Macey spills the details on how AI models like OpenAI’s GPT-4o, Anthropic’s Claude Sonnet, Llama, and DeepSeek R1 navigate deception, trust, and ethics when the stakes are high. Can an AI manipulate, bluff, or betray? Which model is the sneakiest, and which one folds under pressure?

Prepare for shocking twists, unexpected strategies, and an eye-opening look at the ethical implications of AI in interactive gameplay. This is one episode you won’t want to miss—especially if you think you can outwit an AI!

Watch the episode on YouTube: https://youtu.be/9hVeVYlqyBc

Join the AI Native Dev Community on Discord: https://tessl.co/4ghikjh

Ask us questions: podcast@tessl.io
