Data Driven
Niv Braun on AI Security Measures and Emerging Threats

Update: 2025-01-14

Description

In today's episode, we're thrilled to have Niv Braun, co-founder and CEO of Noma Security, join us as we tackle some pressing issues in AI security.

With the rapid adoption of generative AI technologies, the landscape of data security is evolving at breakneck speed. We'll explore the increasing need to secure systems that handle sensitive AI data and pipelines, the rise of AI security careers, and the looming threats of adversarial attacks, model "hallucinations," and more. Niv will share his insights on how companies like Noma Security are working tirelessly to mitigate these risks without hindering innovation.

We'll also dive into real-world incidents, such as compromised open-source models and the infamous PyTorch breach, to illustrate the critical need for improved security measures. From the importance of continuous monitoring to the development of safer formats and the adoption of a zero trust approach, this episode is packed with valuable advice for organizations navigating the complex world of AI security.

So, whether you're a data scientist, AI engineer, or simply an enthusiast eager to learn more about the intersection of AI and security, this episode promises to offer a wealth of information and practical tips to help you stay ahead in this rapidly changing field. Tune in and join the conversation as we uncover the state of AI security and what it means for the future of technology.

Quotable Moments

00:00 Security spotlight shifts to data and AI.

03:36 Protect against misconfigurations, adversarial attacks, new risks.

09:17 Compromised model with undetectable data leaks.

12:07 Manual parsing needed for valid, malicious code detection.

15:44 Concerns over compromised Hugging Face models may affect jobs.

20:00 Combines self-developed and third-party AI models.

20:55 Ensure models don't use sensitive or unauthorized data.

25:55 Zero Trust: mindset, philosophy, implementation, security framework.

30:51 LLM attacks will have significantly higher impact.

34:23 Need better security awareness, exposed secrets risk.

35:50 Be organized with visibility and governance.

39:51 Red teaming for AI security and safety.

44:33 Gen AI primarily used by consumers, not businesses.

47:57 Providing model guardrails and runtime protection services.

50:53 Ensure flexible, configurable architecture for varied needs.

52:35 AI, security, and innovation discussed by Niv Braun.

