Discover
AI Security Ops
AI Security Ops
Author: Black Hills Information Security
Subscribed: 0Played: 0Subscribe
Share
© 2025 Black Hills Information Security
Description
Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).
35 Episodes
Reverse
Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security. https://discord.gg/bhisAI Security Ops | Episode 34 – Why Did We Create This Podcast?In this episode, the BHIS team explains the purpose behind AI Security Ops, what you can expect from future episodes, and why this show matters for anyone at the intersection of AI and cybersecurity.Chapters(00:00) - Intro & Welcome
(00:13) - Why We Started AI Security Ops
(00:41) - Our Mission: Stay Informed & Ahead
(00:56) - What We Cover: AI News & Insights
(01:23) - Community Q&A & Real-World Scenarios
(02:18) - Special Guests & Industry Leaders
(02:41) - Demos, How-Tos & Practical Tips
(03:07) - Who Should Listen & Why Subscribe
(03:34) - Join the Conversation & Closing
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comBrought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com
Community Q&A on AI Security | Episode 34In this episode of BHIS Presents: AI Security Ops, our panel tackles real questions from the community about AI, hallucinations, privacy, and practical use cases. From limiting model hallucinations to understanding memory features and explaining AI to non-technical audiences, we dive into the nuances of large language models and their role in cybersecurity.We break down:Why LLMs sometimes “make stuff up” and how to reduce hallucinationsThe role of prompts, temperature, and RAG databases in accuracyPrompting best practices and reasoning modes for better resultsLegal liability: Can you sue ChatGPT for bad advice?Memory features, data retention, and privacy trade-offsSecurity paranoia: AI apps, trust, and enterprise vs free accountsPractical examples like customizing AI for writing styleHow to explain AI to your mom (or any non-technical audience)Why AI isn’t magic—just math and advanced auto-completeWhether you’re deploying AI tools or just curious about the hype, this episode will help you understand the realities of AI in security and how to use it responsibly.Chapters(00:00) - Welcome & Sponsor Shoutouts
(00:50) - Episode Overview: Community Q&A
(01:19) - Q1: Will ChatGPT Make Stuff Up?
(07:50) - Q2: Can Lawyers Sue ChatGPT for False Cases?
(11:15) - Q3: How Can AI Improve Without Ingesting Everything?
(22:04) - Q4: How Do You Explain AI to Non-Technical People?
(28:00) - Closing Remarks & Training Plug
Brought to you by:Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/Active Countermeasureshttps://www.activecountermeasures.comWild West Hackin Festhttps://wildwesthackinfest.com🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comAI News | Episode 33In this episode of BHIS Presents: AI Security Ops, the panel dives into the latest developments shaping the AI security landscape. From the first documented AI-orchestrated cyber-espionage campaign to polymorphic malware powered by Gemini, we explore how agentic AI, insecure infrastructure, and old-school mistakes are creating a fragile new attack surface.We break down:AI-driven cyber espionage: Anthropic disrupts a state-sponsored campaign using autonomous Black-hat LLMs: KawaiiGPT democratizes offensive capabilities for script kiddies.Critical RCEs in AI stacks: ShadowMQ vulnerabilities hit Meta, NVIDIA, Microsoft, and more.Amazon’s private AI bug bounty: Nova models under the microscope.Google Antigravity IDE popped in 24 hours: Persistent code execution flaw.PROMPTFLUX malware: Polymorphic VBScript leveraging Gemini for hourly rewrites.Whether you’re defending enterprise AI deployments or building secure agentic tools, this episode will help you understand the emerging risks and what you can do to stay ahead.⏱️ Chapters(00:00) - Intro & Sponsor Shoutouts
(01:27) - AI-Orchestrated Cyber Espionage (Anthropic)
(08:10) - ShadowMQ: Critical RCE in AI Inference Engines
(09:54) - KawaiiGPT: Free Black-Hat LLM
(22:45) - Amazon Nova: Private AI Bug Bounty
(26:38) - Google Antigravity IDE Hacked in 24 Hours
(31:36) - PROMPTFLUX: Malware Using Gemini for Polymorphism
🔗 LinksAI-Orchestrated Cyber Espionage (Anthropic)ShadowMQ: Critical RCE in AI Inference EnginesKawaiiGPT: Free Black-Hat LLMAmazon Nova: Private AI Bug BountyGoogle Antigravity IDE Hacked in 24 HoursPROMPTFLUX: Malware Using Gemini for Polymorphism#AISecurity #Cybersecurity #BHIS #LLMSecurity #AIThreats #AgenticAI #BugBounty #malwareBrought to you by Black Hills Information Security https://www.blackhillsinfosec.comAntisyphon Traininghttps://www.antisyphontraining.com/----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comModel Evasion Attacks | Episode 32In this episode of BHIS Presents: AI Security Ops, the panel explores the stealthy world of model evasion attacks, where adversaries manipulate inputs to trick AI classifiers into misclassifying malicious activity as benign. From image classifiers to malware detection and even LLM-based systems, learn how attackers exploit decision boundaries and why this matters for cybersecurity.We break down:- What model evasion attacks are and how they differ from data poisoning- How attackers tweak features to bypass classifiers (images, phishing, malware)- Real-world tactics like model extraction and trial-and-error evasion- Why non-determinism in AI models makes evasion harder to predict- Advanced threats: model theft, ablation, and adversarial AI- Defensive strategies: adversarial training, API throttling, and realistic expectations- Future outlook: regulatory trends, transparency, and the ongoing arms raceWhether you’re deploying EDR solutions or fine-tuning AI models, this episode will help you understand why evasion is an enduring challenge, and what you can do to defend against it.#AISecurity #ModelEvasion #Cybersecurity #BHIS #LLMSecurity #aithreatsBrought to you by Black Hills Information Security https://www.blackhillsinfosec.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
(00:00) - Intro & Sponsor Shoutouts
(01:19) - What Are Model Evasion Attacks?
(03:58) - Image Classifiers & Pixel Tweaks
(07:01) - Malware Classification & Decision Boundaries
(10:02) - Model Theft & Extraction Attacks
(13:16) - Non-Determinism & Myth Busting
(16:07) - AI in Offensive Capabilities
(17:36) - Defensive Strategies & Adversarial Training
(20:54) - Vendor Questions & Transparency
(23:22) - Future Outlook & Regulatory Trends
(25:54) - Panel Takeaways & Closing Thoughts
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comData Poisoning Attacks | Episode 31In this episode of BHIS Presents: AI Security Ops, the panel dives into the hidden danger of data poisoning – where attackers corrupt the data that trains your AI models, leading to unpredictable and often harmful behavior. From classifiers to LLMs, discover why poisoned data can undermine security, accuracy, and trust in AI systems.We break down:What data poisoning is and why it mattersHow attackers inject malicious samples or flip labels in training setsThe role of open-source repositories like Hugging Face in supply chain riskNew twists for LLMs: poisoning via reinforcement feedback and RAGReal-world concerns like bias in ChatGPT and malicious model uploadsDefensive strategies: governance, provenance, versioning, and security assessmentsWhether you’re building classifiers or fine-tuning LLMs, this episode will help you understand how poisoned data sneaks in, and what you can do to prevent it. Treat your AI like a “drunk intern”: verify everything.#aisecurity #DataPoisoning #Cybersecurity #BHIS #llmsecurity #aithreatsBrought to you by Black Hills Information Security https://www.blackhillsinfosec.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
(00:00) - Intro & Sponsor Shoutouts
(01:19) - What Is Data Poisoning?
(03:58) - Poisoning Classifier Models
(08:10) - Risks in Open-Source Data Sets
(12:30) - LLM-Specific Poisoning Vectors
(17:04) - RAG and Context Injection
(21:25) - Realistic Threats & Examples
(25:48) - Defensive Strategies & Governance
(28:27) - Panel Takeaways & Closing Thoughts
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comAI News Stories | Episode 30In this episode of BHIS Presents: AI Security Ops, we break down the top AI cybersecurity news and trends from November 2025. Our panel covers rising public awareness of AI, the security risks of local LLMs, emerging AI-driven threats, and what these developments mean for security teams. Whether you work in cybersecurity, AI security, or incident response, this episode helps you stay ahead of evolving AI-powered attacks and defenses.Topics Covered:Only 5% of Americans are unaware of AI?What Pew Research reveals about AI’s penetration into everyday life and workplace usage.AI’s Shift to the Intimacy Economy – Project Libertyhttps://email.projectliberty.io/ais-shift-to-the-intimacy-economy-1 Amazon to Cut Jobs and Invest in AI Infrastructure14,000 corporate roles eliminated—are layoffs really about efficiency or something else?Amazon to Cut Jobs & Invest in AI – DWhttps://www.dw.com/en/amazon-to-cut-14000-corporate-jobs-amid-ai-investment/a-74524365Local Models Less Secure than Cloud Providers?Why quantization and lack of guardrails make local LLMs more vulnerable to prompt injection and insecure code.Local LLMs Security Paradox – Quesmahttps://quesma.com/blog/local-llms-security-paradox Whether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.Brought to you by Black Hills Information Security https://www.blackhillsinfosec.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
(00:00) - Intro & Sponsor Shoutouts
(01:07) - AI’s Shift to the Intimacy Economy (Pew Research)
(19:40) - Amazon Layoffs & AI Investment
(27:00) - Local LLM Security Paradox
(36:32) - Wrap-Up & Key Takeaways
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comA Conversation with Dr. Colin Shea-Blymyer | Episode 29In this episode of BHIS Presents: AI Security Ops, the panel welcomes Dr. Colin Shea-Blymyer for a deep dive into the intersection of AI governance, cybersecurity, and red teaming. From the historical roots of neural networks to today’s regulatory patchwork, we explore how policy, security, and innovation collide in the age of AI. Expect candid insights on emerging risks, open models, and why defining your risk appetite matters more than ever.Topics Covered:AI governance vs. innovation: U.S. vs. EU regulatory approachesThe evolution of neural networks and lessons from AI historyAI red teaming: definitions, methodologies, and data-sharing challengesSafety vs. security: where they overlap and divergeEmerging risks: supply chain vulnerabilities, prompt injection, and poisoned dataOpen weights vs. closed models: implications for research and securityPractical takeaways for organizations navigating AI uncertaintyAbout the Panel:Joff Thyer, Dr. Brian Fehrman, Derek BanksGuest Panelist: Dr. Colin Shea-Blymyerhttps://cset.georgetown.edu/staff/colin-shea-blymyer/#aisecurity #aigovernance #cyberrisk #AIredteam #OpenModels #aipolicy #BHIS #AIthreats #aiincybersecurity #llmsecurityBrought to you by Black Hills Information Security https://www.blackhillsinfosec.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
(00:00) - Intro & Guest Welcome
(02:14) - Colin’s Journey: From CS to AI Governance
(06:33) - Lessons from AI History & Neural Network Origins
(10:28) - AI Red Teaming: Definitions & Methodologies
(15:11) - Safety vs. Security: Where They Intersect
(22:47) - Regulatory Landscape: U.S. Patchwork vs. EU AI Act
(33:42) - Open Models Debate: Risks & Research Benefits
(38:19) - Emerging Threats & Supply Chain Risks
(44:06) - Practical Takeaways & Closing Thoughts
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comAI News Stories | Episode 28 – Questions from the CommunityIn this episode of BHIS Presents: AI Security Ops, the panel tackles real questions from the community, diving deep into the practical, ethical, and technical challenges of AI in cybersecurity. From red teaming tools to prompt privacy, this Q&A session delivers candid insights and actionable advice for professionals navigating the AI-infused threat landscape.🧠 Topics Covered:Open-source tools for LLM red teamingThreat modeling AI systems (STRIDE methodology)Hallucination rates in frontier vs. local modelsPrompt privacy: what’s stored, what’s sharedShould red teamers disclose AI usage?Human-in-the-loop: AI-generated deliverablesWhether you're a pentester, SOC analyst, or just curious about how AI is reshaping offensive security, this episode is packed with expert perspectives and practical takeaways.About the Panel:Brian Fehrman, Derek Banks, Joff ThyerBrought to you by Black Hills Information Security https://www.blackhillsinfosec.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
(00:00) - Intro & Sponsor Shoutouts
(01:14) - Recommended Tools for LLM Red Teaming
(06:12) - Threat Modeling AI Systems
(09:58) - Which Models Hallucinate Most?
(17:13) - Prompt Privacy: What You Should Know
(22:54) - Should Red Teamers Disclose AI Usage?
(27:01) - Final Thoughts & Wrap-Up
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comAzure AI Foundry Guardrails | Episode 27In this episode of BHIS Presents: AI Security Ops, we explore how to configure content filters for AI models using the Azure AI Fooundry guardrails and controls interface. Whether you're building secure demos or deploying models in production, this walkthrough shows how to block unwanted content, enforce policy, and maintain compliance.Topics Covered: Changing default filters for demo compliance Setting up a system prompt and understanding its role Adding regex terms to block specific content Creating and configuring a custom filter: “tech demo guardrails” Input-side filtering: inspecting user text before model access Safety vs. security categories in filtering Enabling prompt shields for indirect jailbreak detectionThis video is ideal for developers, security engineers, and anyone working with AI systems who needs to implement layered defenses and ensure responsible model behavior.Why This MattersBy implementing layered security—block lists, input and output filters—you protect sensitive data, comply with policy, and maintain a safe user experience.#AIsecurity #GuardrailsAndControls #ContentFiltering #PromptSecurity #RegexFiltering #BHIS #AIModelSafety #SystemPromptSecurityBrought to you by Black Hills Information Security https://www.blackhillsinfosec.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
(00:00) - Introduction & Overview
(01:17) - Changing the Default Content Filter for Demo Compliance
(02:00) - Setting Up a System Prompt and Its Purpose
(04:26) - Adding a New Term (“dogs”) to the Content Filter (Regex Example)
(05:04) - Creating and Configuring a Content Filter Named “Tech Demo Guardrails”
(05:35) - How Input-Side Filters Inspect and Block Unwanted Content
(06:01) - Overview of Safety Categories vs. Security Categories
(07:15) - Enabling Prompt Shields for Indirect Jailbreak Detection (Not Used in Demo)
(08:30) - Summary & Next Steps
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comQuestions from the Community | Episode 26In this community-driven episode of BHIS Presents: AI Security Ops, the panel answers real questions from viewers about AI security, privacy, and risk. Featuring Brian Fehrman, Bronwen Aker, Jack Verrier, and Joff Thyer, the team dives into everything from guardrails and hallucinations to GDPR, agentic AI, and how to stay safe in an AI-saturated world.💬 Topics include:Are guardrails enough to protect sensitive prompts?What’s the difference between hallucination and confabulation?How does AI intersect with GDPR and the right to be forgotten?What does it mean to “stay safe” when using AI?How is securing AI different from traditional software?Whether you're a red teamer, SOC analyst, or just trying to navigate the AI landscape, this episode offers practical insights and thoughtful perspectives from seasoned security professionals.Panelists:🔹 Brian Fehrman🔹 Bronwen Aker🔹 Jack Verrier🔹 Joff Thyer#AIsecurity #Cybersecurity #PromptInjection #LLMs #BHIS #AIprivacy #AgenticAI #AIandGDPRBrought to you by Black Hills Information Security https://www.blackhillsinfosec.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
(00:00) - Intro & Panel Welcome
(01:22) - Are Guardrails Enough to Protect System Prompts?
(09:54) - Explaining Hallucination vs. Confabulation
(20:09) - AI and GDPR: The Right to Be Forgotten?
(23:49) - How Do We Stay Safe Using AI?
(32:26) - Securing AI vs. Traditional Software
(37:18) - Final Thoughts & Wrap-Up
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comAI News Stories | Episode 25In this episode of BHIS Presents: AI Security Ops, the panel dives into the biggest AI cybersecurity headlines from late September 2025. From government regulation to zero-click exploits, we unpack the risks, trends, and implications for security professionals navigating the AI-powered future.🧠 Topics Covered:Government oversight of advanced AI systemsAccenture’s massive layoffs amid AI pivotShadowLeak: zero-click vulnerability in ChatGPT agentsMalicious MCP server stealing emailsAI in the SOC: benefits and risksAttackers using AI to scale ransomware and social engineeringWhether you're a red teamer, SOC analyst, or just trying to stay ahead of AI threats, this episode delivers sharp insights and practical takeaways.Brought to you by Black Hills Information Security https://www.blackhillsinfosec.com----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
(00:00) - Intro & Sponsor Shoutouts
(00:45) - Senators Introduce AI Risk Evaluation Act
(09:48) - Accenture Layoffs & AI Restructuring
(16:17) - ShadowLeak: Zero-Click Vulnerability in ChatGPT
(20:07) - Malicious MCP Server & Supply Chain Risks
(26:27) - AI in the SOC: Alert Triage & Analyst Burnout
(30:10) - Final Thoughts: AI’s Role in Security Operations
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comModel Extraction Attacks | Episode 24In this solo episode of BHIS Presents: AI Security Ops, Brian Fehrman explores the stealthy world of Model Extraction Attacks—where hackers clone your AI model without ever touching your code. Learn how adversaries can reverse-engineer your multimillion-dollar model simply by querying its API, and why this threat is more than just academic.We break down:- What model extraction is and how it works- Real-world examples like DeepSeek’s alleged distillation of OpenAI models- The risks to intellectual property, security, and sensitive data- Defensive strategies including API throttling, output limiting, watermarking, and honeypots- Legal and ethical questions around benchmarking vs. theftWhether you're deploying LLMs or classification models, this episode will help you understand how attackers replicate model behavior—and what you can do to stop them.If your AI is accessible, someone’s probably trying to copy it.#AIsecurity #ModelExtractionAttacks #Cybersecurity #BHIS #LLMsecurity #AIthreats----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
(00:00) - Intro & Sponsor Shoutouts
(01:19) - What Is a Model Extraction Attack?
(02:45) - Why Training a Model Is So Expensive
(05:42) - How Model Extraction Works
(07:11) - Why It Matters: IP, Security & Data Risks
(10:25) - What Makes Extraction Easier or Harder
(12:54) - Defenses: Monitoring, Watermarking & Privacy
(16:04) - What to Do If You Suspect an Attack
(16:29) - Legal & Ethical Questions Around Model Theft
(19:30) - Final Thoughts & Takeaways
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comIn this episode of AI Security Ops, Brian Fehrman and Joff Thyer dive into the latest AI news of the month, exploring how rapidly evolving technologies are reshaping cybersecurity.Topics covered include: - How AI is changing cybersecurity monitoring - Expanding from email to Slack, Teams, and other chat platforms - Addressing insider threats and phishing campaigns in new channels - The rapid pace of AI innovation and industry trends - Why organizations should prioritize AI security assessments - Real-world risks and opportunities in the AI landscapeStay ahead in the AI race with Black Hills Information Security as we cover real-world risks, opportunities, and the latest developments in the AI landscape.///News Stories This Episode:1. AI-Powered Villager Pen Testing Tool Hits 11,000 PyPI Downloads Amid Abuse Concernshttps://thehackernews.com/2025/09/ai-powered-villager-pen-testing-tool.html2. CrowdStrike and Meta Just Made Evaluating AI Security Tools Easierhttps://www.zdnet.com/article/crowdstrike-and-meta-just-made-evaluating-ai-security-tools-easier/3. Check Point Acquires Lakera to Deliver End-to-End AI Security for Enterpriseshttps://www.checkpoint.com/press-releases/check-point-acquires-lakera-to-deliver-end-to-end-ai-security-for-enterprises/4. Proofpoint Offers AI Agents to Monitor Human-Based Communicationshttps://www.msspalert.com/news/proofpoint-offers-ai-agents-to-monitor-human-based-communications5. EvilAI Malware Campaign Exploits AI-Generated Code to Breach Global Critical Sectorshttps://industrialcyber.co/ransomware/evilai-malware-campaign-exploits-ai-generated-code-to-breach-global-critical-sectors/----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comInsider Threat 2.0 - Prompt Leaks & Shadow AI | Episode 22In this episode of BHIS Presents AI Security Ops, we dive into Insider Threat 2.0: Prompt Leaks & Shadow AI. The panel explores the hidden risks of employees pasting sensitive data into public AI tools, the rise of unauthorized “Shadow AI” in organizations, and how policies—or lack thereof—can expose critical information. Learn why free AI services often make you the product, how prompt history creates data leakage risks, and why companies must establish clear AI usage guidelines. We also cover practical defenses, from enterprise AI accounts to cultural awareness training, and draw parallels to past IT challenges like Shadow IT and rogue wireless.If you’re concerned about AI security, data leakage, or safe adoption of large language models, this discussion will help you navigate the risks and protect your organization.#AIsecurity #PromptInjection #ShadowAI #Cybersecurity #BHIS----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comEpisode 21 - Deepfakes And Fraudulent Interviews In Remote HiringIn this episode of AI Security Ops by Black Hills Information Security, the crew explores the alarming rise of deepfakes and fraudulent interviews in remote hiring. As virtual work expands, cybercriminals are using AI-driven impersonation tactics to pose as job candidates, deceive recruiters, and gain unauthorized access to organizations. Joff, Bronwen Aker, Brian Fehrman, and Derek Banks break down real-world cases, explain the challenges of spotting deepfake job scams, and share actionable strategies to secure hiring processes. Discover the red flags to watch for in virtual interviews, how attackers exploit trust, and why companies must adapt their security awareness in the age of AI.----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comEpisode 20 - The Hallucination ProblemIn this episode of AI Security Ops, Joff Thyer and Brian Fehrman from Black Hills Information Security dive into the hallucination problem in AI large language models and generative AI. They explain what hallucinations are, why they happen, and the risks they create in real-world AI deployments. The discussion covers security implications, practical examples, and strategies organizations can use to mitigate these issues through stronger design, monitoring, and testing. A must-watch for cybersecurity professionals, AI researchers, and anyone curious about the limitations and challenges of modern AI systems.----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comAI News of the Month | Episode 19In Episode 19,Brianand Derek cover a zero-click indirect prompt injection attack against ChatGPT connectors and seemingly innocent Google Calendar events that hijack smart homes via Gemini, with possible consequences for the power grid.They'll discuss the impact of Microsoft patching a critical Azure OpenAI SSRF vulnerability and go over new NIST AI security standards, IBM’s study on shadow AI and breach costs, OpenAI’s response to chat indexing leaks, and a malicious VS Code extension that stole $500K in cryptocurrency. #AI #CyberSecurity #PromptInjection #Malware #InfoSec #AIThreats #Hacking #GenerativeAI #Deepfakes #LLM #ShadowAI“Poisoned doc” exfiltrates data via ChatGPT Connectors (AgentFlayer) — Aug 6, 2025Primary: https://www.wired.com/story/poisoned-document-could-leak-secret-data-chatgpt/Tech write-up: https://labs.zenity.io/p/agentflayer-chatgpt-connectors-0click-attack-5b41Poisoned Google Calendar invite hijacks Gemini to control a smart home — Aug 6–10, 2025Primary: https://www.wired.com/story/google-gemini-calendar-invite-hijack-smart-home/Bug/patch coverage: https://www.bleepingcomputer.com/news/security/google-calendar-invites-let-researchers-hijack-gemini-to-leak-user-data/Microsoft August Patch Tuesday adds AI-surface fixes; critical Azure OpenAI vuln (CVE-2025-53767) — Aug 12–13, 2025Release coverage: https://www.techradar.com/pro/security/microsofts-latest-major-patch-fixes-a-serious-zero-day-flaw-and-a-host-of-other-issues-so-update-nowCVE entry: https://nvd.nist.gov/vuln/detail/CVE-2025-53767 (NVD)Overview: https://www.tenable.com/blog/microsofts-august-2025-patch-tuesday-addresses-107-cves-cve-2025-53779 (Tenable®)NIST proposes SP 800-53 “Control Overlays for Securing AI Systems” — Aug 14, 2025Announcement: https://www.nist.gov/news-events/news/2025/08/nist-releases-control-overlays-securing-ai-systems-concept-paperConcept paper (PDF): https://csrc.nist.gov/csrc/media/Projects/cosais/documents/NIST-Overlays-SecuringAI-concept-paper.pdfIBM 2025 “Cost of a Data Breach”: AI is both breach vector and defender — Jul 30, 2025Press release: https://newsroom.ibm.com/2025-07-30-ibm-report-13-of-organizations-reported-breaches-of-ai-models-or-applications%2C-97-of-which-reported-lacking-proper-ai-access-controlsReport: https://www.ibm.com/reports/data-breachAnalysis: https://venturebeat.com/security/ibm-shadow-ai-breaches-cost-670k-more-97-of-firms-lack-controls/ (VentureBeat)OpenAI considers encrypting Temporary Chats; privacy clean-ups after search-indexing scare — Aug 18, 2025Interview: https://www.axios.com/2025/08/18/altman-openai-chatgpt-encrypted-chatsContext: https://arstechnica.com/tech-policy/2025/08/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results/Help center (retention): https://help.openai.com/en/articles/8914046-temporary-chat-faqFake VS Code extension for Cursor leads to $500K crypto theft — July 11, 2025Primary: https://www.scworld.com/news/fake-visual-studio-code-extension-for-cursor-led-to-500k-theft SC MediaResearch write-up: https://securelist.com/open-source-package-for-cursor-ai-turned-into-a-crypto-heist/116908/SecurelistCoverage: https://www.bleepingcomputer.com/news/security/malicious-vscode-extension-in-cursor-ide-led-to-500k-crypto-theft/----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
(00:00) - Intro
(00:31) - “Poisoned doc” exfiltrates data via ChatGPT Connectors (AgentFlayer)
(01:15) - A zero-click prompt injection
(02:12) - url_safe bypassed using URLs from Microsoft’s Azure Blob cloud storage
(07:08) - Poisoned Google Calendar invite hijacks Gemini to control a smart home
(08:35) - The intersection of AI and IOT
(09:53) - Be careful what you hook AI up to
(10:23) - Derek warns of threat to power grid
(11:54) - Mitigations - restrict permissions, sanitize calendar content
(13:56) - Patch Tuesday - AI-surface fixes; critical Azure OpenAI vuln
(15:49) - NIST proposes SP 800-53 “Control Overlays for Securing AI Systems”
(18:43) - IBM “Cost of a Data Breach”: AI is both breach vector and defender
(19:16) - Shadow AI
(21:49) - “The AI adoption curve is outpacing controls”
(23:02) - OpenAI considers encrypting Temporary Chats
(26:39) - Data storage and logging LLM interactions
(29:59) - Fake VS Code extension for Cursor leads to $500K crypto theft
(30:37) - Danger of using pip install as root on a server
🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comMalware in the Age of AI | Episode 18In Episode 18, hosts Joff Thyer, Derek Banks and Brian Fehrman discuss the rise of AI-powered malware. From polymorphic keyloggers like Black Mamba to the use of ChatGPT, WormGPT, and fine-tuned LLMs for cyberattacks, the team will explain how generative AI is reshaping the security landscape.They'll break down the real risks vs. hype, including prompt injection, jailbreaking, deepfakes, and AI-driven fraud, while also sharing strategies defenders can use to fight back.The discussion highlights both the ethical implications and the critical need for defense-in-depth as threat actors use AI to accelerate their attacks.#AI #Cybersecurity #Malware #AIThreats #Deepfakes #LLM #InfoSec #AIinSecurity #GenerativeAI #Hacking----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
(00:00) - Intro
(01:15) - Black Mamba polymorphic AI keylogger
(02:47) - Can Chat GPT5 generate malware for us?
(03:42) - Guardrail circumvention technique #1
(04:16) - Guardrail circumvention technique #2
(05:30) - Guardrail circumvention technique #3
(05:59) - Guardrail circumvention technique #4
(06:30) - Using an Abliterated Model
(08:32) - AI models have democratized software creation
(11:20) - Polymorphic keyloggers are not new
(12:03) - AI makes it faster to iterate polymorphic malware
(12:33) - AI is able to analyze source code and find more vulnerabilities
(15:16) - How scared should we be? (hype vs reality)
(16:10) - Knowing enough to ask the right questions is important
(17:41) - Significant risks of AI fraud and social engineering
(19:32) - Business email compromise
(21:10) - How defenders can use AI
(24:28) - Audio deepfakes have become easier to create
(25:06) - Ethical concerns for pentesters using AI
(29:26) - In one sentence, how will AI change malware production in the near future?
Register for FREE Infosec Webcasts, Anti-casts & Summits – https://poweredbybhis.comCommunity Q&A | Episode 17In episode 17 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, Brian Fehrman and Bronwen Aker answer viewer-submitted questions about system prompts, prompt injection risks, AI hallucinations, deep fakes, and when (and when not) to use AI in cybersecurity. They'll discuss the difference between system and user prompts, how temperature settings impact LLM outputs, and the biggest mistakes companies make when deploying AI models. They'll also explain how to reduce hallucinations, and approach AI responsibly in security workflows. Derek explains his method for detecting audio deep fakes.----------------------------------------------------------------------------------------------Joff Thyer - https://blackhillsinfosec.com/team/joff-thyer/Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/
(00:00) - Intro
(01:10) - What is a system prompt? How is it different from a user prompt?
(03:35) - What are some common system prompt mistakes?
(06:54) - Does repeating a prompt give different responses? (non-deterministic)
(07:56) - The temperature knob effect
(12:18) - When should I use AI? When should I not?
(16:47) - What are best practices to reduce hallucinations?
(20:29) - End-user temperature knob work-around
(22:55) - AI bots that rewrite their code to avoid shutdown commands
(26:53) - NCSL.org - Updates on legislation affecting AI
(29:44) - How do we detect AI deep fakes?
(30:00) - Derek’s DeepFake demo video
(30:38) - DISCLAIMER - Do Not use AI deep fakes to break the law!
(31:29) - F5-tts.org - Deep fake website
(35:02) - Derek pranks his family using AI
A Conversation with Daniel MiesslerIn Episode 16, Joff and the team welcome human-centric AI innovator Daniel Miessler, creator of Fabric, an AI framework for solving real-world problems from a human perspective.The conversation covers AI’s role in cybersecurity, the importance of clarity in “intent engineering” over prompt tricks, and the risks and opportunities of deploying large language models. They explore the shift from “vibe coding” to “spec coding,” the rise of AI scaffolding over raw model improvements, and what AI advancements including GPT-5 mean for the future of knowledge work."Introducing Fabric — A Human AI Augmentation Framework"https://www.youtube.com/watch?v=wPEyyigh10gDaniel's GitHub repository:https://github.com/danielmiessler/Fabric#AI #CyberSecurity #AgenticAI #SecurityOps #PromptEngineering























