AI Safety: Constitutional AI vs Human Feedback
Update: 2024-06-17
Description
With great power comes great responsibility. How do leading AI companies implement safety and ethics as language models scale? OpenAI uses its Model Spec combined with RLHF (Reinforcement Learning from Human Feedback), while Anthropic uses Constitutional AI. A solo episode on AI alignment and the technical approaches to maximizing usefulness while minimizing harm.
REFERENCES
OpenAI Model Spec
https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview
Anthropic Constitutional AI
https://www.anthropic.com/news/claudes-constitution
To stay in touch, sign up for our newsletter at https://www.superprompt.fm