Mind the Machine

Join Florencio Cano Gabarda in Mind the Machine, where we dive into the critical intersection of AI security and safety. Explore how to protect AI systems from cyber threats, use AI to enhance IT security, and tackle the ethical challenges of AI safety—covering issues like ethics, bias, and trustworthiness. Tune in to navigate the complexities of building secure and safe AI.

Agentic AI Security

In this episode of Mind the Machine, host Florencio Cano talks about the concept of agentic AI, exploring what makes AI systems capable of autonomously performing tasks and the unique security challenges they present. While agentic AI can revolutionize industries, robust security measures are essential to manage the security risks. Two of the risks mentioned in the podcast are the risk of AI agents that interact with the operating systems and those that generate code. References mentioned in this episode: Security Runners article about RCE on Anthropic's Computer Use: https://www.securityrunners.io/post/beyond-rce-autonomous-code-execution-in-agentic-ai Anthropic's Computer Use: https://docs.anthropic.com/en/docs/build-with-claude/computer-use Sandboxing Agentic AI Workflows with WebAssembly: https://developer.nvidia.com/blog/sandboxing-agentic-ai-workflows-with-webassembly Episode about Prompt Injection https://open.spotify.com/episode/0ZH9Q2PQXojnpb8UI2jhuS?si=bfx-QIlnT8eDUrl2a_zM-w

12-23
15:01

AI Pentesting

In this episode we talk about AI Pentesting. We talk about the difference with traditional cybersecurity pentesting. We also talk about benefits and drawbacks of manual and AI automatic pentesting. In the case of AI automatic pentesting, we mention some open source tools to perform it. These are some URLs related to topic mentioned in the episode: Breaking Down Adversarial Machine Learning Attacks Through Red Team Challenges https://boschko.ca/adversarial-ml/ Dreadnode’s Crucible CTF platform https://crucible.dreadnode.io/ PyRIT https://github.com/Azure/PyRIT Garak https://github.com/NVIDIA/garak Project Moonshot https://github.com/aiverify-foundation/moonshot

12-16
23:46

Top 10 Security Architecture Patterns for LLM applications

In this episode, we talk about ten very important security architecture patterns to protect LLM applications. Open source guardrails software mentioned during the episode: TrustyAI Llama Guard Nemo Guardrails Open source model evaluation frameworks mentioned: lm-evaluation-harness Project Moonshot Giskard

12-09
19:51

Prompt injection

In today's podcast, we will talk about what is prompt injection. We will talk about techniques to exploit it and security controls to reduce the risk of it happening.

12-02
19:17

Presentation

In this first episode of Mind the Machine I introduce the podcast and myself, Florencio Cano. The podcast will be about AI security and safety. We will talk about security for AI and also about AI for security. I hope you enjoy it! Please, don't hesitate on contacting me directly by sending me an email to florencio.cano@gmail.com or by contacting me at LinkedIn or Mastodon.

11-04
21:40

Recommend Channels