“Beren’s Essay on Obedience and Alignment” by StanislavKrym

Update: 2025-11-20

Description

Like Daniel Kokotajlo's coverage of Vitalik's response to AI-2027, I've copied the author's text. This time the essay is actually good, but it has a few minor flaws. I also express some disagreements with the current discourse around the post-AGI utopia.

One question which I have occasionally pondered is: assuming that we actually succeed at some kind of robust alignment of AGI, what is the alignment target we should focus on? In general, this question splits into two basic camps. The first is obedience and corrigibility: the AI system should execute the instructions given to it by humans and not do anything else. It should not refuse orders or try to circumvent what the human wants. The second is value-based alignment: the AI system embodies some set of ethical values and principles. Generally these values include helpfulness, so the AI is happy to help humans, but only insofar as doing so conforms to its ethical principles; otherwise the AI will refuse.

S.K.'s comment: Suppose that mankind instilled a value system into an AI, then realized that this value system was far from optimal and decided to change it. If mankind fails to do so after the AI becomes transformative, then the AI [...]

The original text contained 2 footnotes which were omitted from this narration.

---


First published: November 18th, 2025

Source: https://www.lesswrong.com/posts/QHwuS5ECphbuiskgg/beren-s-essay-on-obedience-and-alignment


---


Narrated by TYPE III AUDIO.
