Knowledge Distillation with Helen Byrne
Inside OpenAI's trust and safety operation - with Rosie Campbell

Update: 2024-03-07
Description

No organisation in the AI world is under more intense scrutiny than OpenAI. The maker of DALL·E, GPT-4, ChatGPT and Sora is constantly pushing the boundaries of artificial intelligence and has supercharged the enthusiasm of the general public for AI technologies.
With that elevated position come questions about how OpenAI can ensure its models are not used for malign purposes.
In this interview we talk to Rosie Campbell from OpenAI’s policy research team about the many processes and safeguards in place to prevent abuse. Rosie also talks about the forward-looking work of the policy research team, anticipating longer-term risks that might emerge with more advanced AI systems.
Helen and Rosie discuss the challenges associated with agentic systems (AI that can interface with the wider world via APIs and other technologies), red-teaming new models, and whether advanced AIs should have ‘rights’ in the same way that humans or animals do.

You can read the paper referenced in this episode, ‘Practices for Governing Agentic AI Systems’, co-written by Rosie and her colleagues: https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf

Watch the video of the interview here: https://www.youtube.com/watch?v=81LNrlEqgcM 
