What AI "experts" get wrong - ChatGPT Red Teamer speaks
Description
Robert Maciejko, co-founder of INSEAD 🤖 AI, interviews OpenAI red teamer and AI insider Nate Labenz.
Nate recently posted "Did I Get Sam Altman Fired?"
Nate was also an early "red team" member for GPT-4 and saw the raw model's good and bad sides. He took his concerns to an OpenAI board member and was shocked to learn the board member had not tried the model. Management released him soon after that. It's an insider view you have yet to hear.
Hear what many AI "experts" get wrong.
Key takeaway: Alignment and safety do not happen by default.
Topics:
➡️ AI opportunity
➡️ The dark side of models before refinement
(e.g., "How can I kill the most people possible?")
➡️ Getting released by OpenAI via Google Meet (like Sam Altman)
➡️ Where AI is already superhuman
➡️ Model capabilities growing faster than our ability to control them
➡️ How easy it is to hack an AI model
➡️ What's next?
Common AI myths discussed:
➡️ Doomers
➡️ e/acc - Accelerationists
➡️ Open source
➡️ Self-regulation
As co-host of The Cognitive Revolution podcast, Nate also interviews CEOs and senior leaders at top AI companies.