The MLSecOps Podcast

Evaluating Real-World Adversarial ML Attack Risks and Effective Management: Robustness vs Non-ML Mitigations

Updated: 2023-11-28

Description
In this episode, co-hosts Badar Ahmed and Daryan Dehghanpisheh are joined by Drew Farris (Principal, Booz Allen Hamilton) and Edward Raff (Chief Scientist, Booz Allen Hamilton) to discuss themes from their paper, "You Don't Need Robust Machine Learning to Manage Adversarial Attack Risks," co-authored with Michael Benaroch.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard Open Source Security Toolkit for LLM Interactions

Huntr - The World's First AI/Machine Learning Bug Bounty Platform

MLSecOps.com