The MLSecOps Podcast
AI Security: Vulnerability Detection and Hidden Model File Risks

Updated: 2024-12-09

Description

In this episode of the MLSecOps Podcast, the team dives into the transformative potential of Vulnhuntr: zero-shot vulnerability discovery using LLMs. Madison Vorbrich hosts Dan McInerney and Marcello Salvati to discuss Vulnhuntr's ability to autonomously identify vulnerabilities, including zero-days, using large language models (LLMs) like Claude. They explore the evolution of AI tools for security, the gap between traditional and AI-based static code analysis, and how Vulnhuntr enables both developers and security teams to proactively safeguard their projects. The conversation also highlights Protect AI's bug bounty platform, huntr.com, and its expansion into model file vulnerabilities (MFVs), emphasizing the critical need to secure AI supply chains and systems.

Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.

Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models

Recon: Automated Red Teaming for GenAI

Protect AI’s ML Security-Focused Open Source Tools

LLM Guard: Open Source Security Toolkit for LLM Interactions

Huntr: The World's First AI/Machine Learning Bug Bounty Platform


MLSecOps.com