DiscoverMind the Machine
Mind the Machine
Claim Ownership

Mind the Machine

Author: Florencio Cano Gabarda

Subscribed: 0Played: 0
Share

Description

Join Florencio Cano Gabarda in Mind the Machine, where we dive into the critical intersection of AI security and safety. Explore how to protect AI systems from cyber threats, use AI to enhance IT security, and tackle the ethical challenges of AI safety—covering issues like ethics, bias, and trustworthiness. Tune in to navigate the complexities of building secure and safe AI.
5 Episodes
Reverse
Agentic AI Security

Agentic AI Security

2024-12-2315:01

In this episode of Mind the Machine, host Florencio Cano talks about the concept of agentic AI, exploring what makes AI systems capable of autonomously performing tasks and the unique security challenges they present. While agentic AI can revolutionize industries, robust security measures are essential to manage the security risks. Two of the risks mentioned in the podcast are the risk of AI agents that interact with the operating systems and those that generate code. References mentioned in this episode: Security Runners article about RCE on Anthropic's Computer Use: https://www.securityrunners.io/post/beyond-rce-autonomous-code-execution-in-agentic-ai Anthropic's Computer Use: https://docs.anthropic.com/en/docs/build-with-claude/computer-use Sandboxing Agentic AI Workflows with WebAssembly: https://developer.nvidia.com/blog/sandboxing-agentic-ai-workflows-with-webassembly Episode about Prompt Injection https://open.spotify.com/episode/0ZH9Q2PQXojnpb8UI2jhuS?si=bfx-QIlnT8eDUrl2a_zM-w
AI Pentesting

AI Pentesting

2024-12-1623:46

In this episode we talk about AI Pentesting. We talk about the difference with traditional cybersecurity pentesting. We also talk about benefits and drawbacks of manual and AI automatic pentesting. In the case of AI automatic pentesting, we mention some open source tools to perform it. These are some URLs related to topic mentioned in the episode: Breaking Down Adversarial Machine Learning Attacks Through Red Team Challenges https://boschko.ca/adversarial-ml/ Dreadnode’s Crucible CTF platform https://crucible.dreadnode.io/ PyRIT https://github.com/Azure/PyRIT Garak https://github.com/NVIDIA/garak Project Moonshot https://github.com/aiverify-foundation/moonshot
In this episode, we talk about ten very important security architecture patterns to protect LLM applications. Open source guardrails software mentioned during the episode: TrustyAI Llama Guard Nemo Guardrails Open source model evaluation frameworks mentioned: lm-evaluation-harness Project Moonshot Giskard
Prompt injection

Prompt injection

2024-12-0219:17

In today's podcast, we will talk about what is prompt injection. We will talk about techniques to exploit it and security controls to reduce the risk of it happening.
Presentation

Presentation

2024-11-0421:40

In this first episode of Mind the Machine I introduce the podcast and myself, Florencio Cano. The podcast will be about AI security and safety. We will talk about security for AI and also about AI for security. I hope you enjoy it! Please, don't hesitate on contacting me directly by sending me an email to florencio.cano@gmail.com or by contacting me at LinkedIn or Mastodon.