DiscoverResearch SaturdayPrompts gone rogue.
Prompts gone rogue.

Prompts gone rogue.

Update: 2024-08-10
Share

Description

Shachar Menashe, Senior Director of Security Research at JFrog, is talking about "When Prompts Go Rogue: Analyzing a Prompt Injection Code Execution in Vanna.AI." A security vulnerability in the Vanna.AI tool, called CVE-2024-5565, allows hackers to exploit large language models (LLMs) by manipulating user input to execute malicious code, a method known as prompt injection.

This poses a significant risk when LLMs are connected to critical functions, highlighting the need for stronger security measures.

The research can be found here:

Learn more about your ad choices. Visit megaphone.fm/adchoices

Comments 
00:00
00:00
x

0.5x

0.8x

1.0x

1.25x

1.5x

2.0x

3.0x

Sleep Timer

Off

End of Episode

5 Minutes

10 Minutes

15 Minutes

30 Minutes

45 Minutes

60 Minutes

120 Minutes

Prompts gone rogue.

Prompts gone rogue.

N2K Networks