AI Gone Rogue? LLM Werewolf Showdown
Description
What happens when AI learns to lie? In this electrifying episode of AI Native Dev, Simon Maple is joined by the brilliant Macey Baker, Community Engineer at Tessl, to unravel the wild, unpredictable, and sometimes downright hilarious world of Large Language Models (LLMs) in social deception games.
From the psychological mind games of Werewolf to the cutthroat negotiations of Split or Steal, Macey spills the details on how AI models like OpenAI’s GPT-4o, Anthropic’s Claude Sonnet, Llama, and DeepSeek R1 navigate deception, trust, and ethics when the stakes are high. Can an AI manipulate, bluff, or betray? Which model is the sneakiest, and which one folds under pressure?
Prepare for shocking twists, unexpected strategies, and an eye-opening look at the ethical implications of AI in interactive gameplay. This is one episode you won’t want to miss—especially if you think you can outwit an AI!
Watch the episode on YouTube: https://youtu.be/9hVeVYlqyBc
Join the AI Native Dev Community on Discord: https://tessl.co/4ghikjh
Ask us questions: podcast@tessl.io