“A Case for Model Persona Research” by nielsrolf, Maxime Riché, Daniel Tan

Description

Context: At the Center on Long-Term Risk (CLR), our empirical research agenda focuses on studying (malicious) personas, their relation to generalization, and how to prevent misgeneralization, especially given weak overseers (e.g., undetected reward hacking) or underspecified training signals. This has motivated our past research on Emergent Misalignment and Inoculation Prompting, and in this sequence we want to share our thinking on the broader strategy and our upcoming plans.

TLDR:

  • Ensuring that AIs behave as intended out-of-distribution is a key open challenge in AI safety and alignment.
  • Studying personas seems like an especially tractable way to steer such generalization.
  • Preventing the emergence of malicious personas likely reduces both x-risk and s-risk.

Why was Bing Chat, for a short time, prone to threatening its users, professing jealousy of a user's spouse, or picking fights about the current date? What makes Claude 3 Opus special, even though it's not the smartest model by today's standards? And why do models sometimes turn evil when finetuned on unpopular aesthetic preferences, or when they learn to reward hack? We think these phenomena are related to how personas are represented in LLMs, and to how personas shape generalization.

Influencing generalization towards desired outcomes.

Many technical AI safety [...]

---

Outline:

(01:32) Influencing generalization towards desired outcomes.

(02:43) Personas as a useful abstraction for influencing generalization

(03:54) Persona interventions might work where direct approaches fail

(04:49) Alignment is not a binary question

(05:47) Limitations

(07:57) Appendix

(08:01) What is a persona, really?

(09:17) How Personas Drive Generalization

The original text contained 4 footnotes, which were omitted from this narration.

---


First published:

December 15th, 2025



Source:

https://www.lesswrong.com/posts/kCtyhHfpCcWuQkebz/a-case-for-model-persona-research


---


Narrated by TYPE III AUDIO.


---

Images from the article:

Flowchart showing five stages of training Claude 3.5 Haiku using reinforcement learning from human feedback.

Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts or another podcast app.
