The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
How Microsoft Scales Testing and Safety for Generative AI with Sarah Bird - #691

Update: 2024-07-01

Description

Today, we're joined by Sarah Bird, chief product officer of responsible AI at Microsoft. We discuss the testing and evaluation techniques Microsoft applies to ensure safe deployment and use of generative AI, large language models, and image generation. In our conversation, we explore the unique risks and challenges presented by generative AI, the balance between fairness and security concerns, the application of adaptive and layered defense strategies for rapid response to unforeseen AI behaviors, the importance of automated AI safety testing and evaluation alongside human judgment, and the implementation of red teaming and governance. Sarah also shares learnings from Microsoft's ‘Tay’ and ‘Bing Chat’ incidents along with her thoughts on the rapidly evolving GenAI landscape.


The complete show notes for this episode can be found at https://twimlai.com/go/691.

Host: Sam Charrington