Deploy Securely
I recently spoke with Bezawit Sumner, CISO of CRISP Shared Services, about:
- How to address stakeholder concerns when rolling out AI in sensitive spaces like healthcare
- Issues related to scoping and definitions being non-trivial for AI governance
- The evolving AI regulatory landscape and what companies can do to adapt
I was excited to host Rick Doten, a powerhouse in cybersecurity, to discuss:
- Key insights from his time as CISO at healthcare giant Centene
- The ethical nuances of AI governance in the space
- His experiences advising venture capital firms and cybersecurity startups
NVIDIA blog on killswitches: https://blogs.nvidia.com/blog/no-backdoors-no-kill-switches-no-spyware/
Colorado Legislative AI Task Force Report: https://leg.colorado.gov/sites/default/files/images/report_and_recommendations-accessible_1_0.pdf
SB-205 opposition: https://gazette.com/government/colorado-mayors-oppose-ai-regulation-law/article_0abe652f-a60a-583e-a138-e73fa45e9a03.html
AI Stethoscope: https://www.imperial.ac.uk/news/249316/ai-stethoscope-rolled-100-gp-clinics/
ChatGPT Con...
Federal AI action plan: https://www.ai.gov/action-plan
Tool-squatting attack paper: https://arxiv.org/pdf/2504.19951
Burning Glass Institute report: https://static1.squarespace.com/static/6197797102be715f55c0e0a1/t/6889055d25352c5b3f28c202/1753810269213/No+Country+for+Young+Grads+V_Final7.29.25+%281%29.pdf
AIUC: https://aiuc.com
Walter kicks off a recurring series with Steve Dufour, talking about:
- Trump's "Big Beautiful Bill" moving through the Senate and how a key AI-related provision was just removed
- Some key court decisions related to generative AI training on copyrighted material
- ISO/IEC 42005:2025, which gives guidance on AI impact assessments
- Ways to (avoid) automating yourself out of a job
The basics of healthcare can often be a nightmare:
- Finding the right doctor
- Setting up an appointment
- Getting simple questions answered
While these things might seem like an inconvenience, on the grand scale they cost a lot: money and, unfortunately, lives. That’s why the Embold Virtual Assistant (EVA) is such a breakthrough. A generative AI-powered chatbot with access to up-to-date doctor listings and performance ratings, it’s literally a lifesaver. StackAware was honored to ...
On this episode of the Deploy Securely podcast, I spoke with Kenny Scott, Founder and CEO of Paramify. Paramify gets companies ready for the U.S. government's Federal Risk and Authorization Management Program (FedRAMP). And in this conversation, we talked about:
- Paramify "walking the walk" by getting FedRAMP High authorized
- How AI is impacting FedRAMP authorizations
- The future of AI regulation
I was thrilled to have a leading voice on AI governance and assurance on the Deploy Securely podcast: Patrick Sullivan. Patrick is the Vice President of Strategy and Innovation at A-LIGN, a cybersecurity assurance firm. He’s an expert on the intersection of AI and compliance, regularly sharing insights about ISO 42001, the EU AI Act, and their interplay with existing regulations and best practices. We chatted about what he's seen from his customer base when it comes to AI-related:
- ...
I spoke with Matt Adams, Head of Security Enablement at Citi, about:
- The EU AI Act and other laws and regulations impacting AI governance and security
- What financial services organizations can do to secure their AI deployments
- Some of the biggest myths and misconceptions when it comes to AI governance
While using AI securely is a key concern (especially for companies like StackAware), on the flip side, AI has been supercharging security and compliance teams. Especially when tackling mundane tasks like security questionnaires, AI can accelerate sales and build trust. I chatted with Chas Ballew, CEO of Conveyor, about:
- How AI can help with customer security reviews
- What sort of controls Conveyor has in place
- What Chas thinks the future will look like
- The regulatory landscape for AI...
Drive sales, improve customer trust, and avoid regulatory penalties with the NIST AI RMF, EU AI Act, and ISO 42001. Check out the full post on the Deploy Securely blog: https://blog.stackaware.com/p/eu-ai-act-nist-rmf-iso-42001-picking-frameworks
No sector is more in need of effective, well-governed AI than healthcare. The United States spends vastly more per person than any other nation, yet is in the middle of the pack when it comes to life expectancy. That’s why I was so excited to work with Embold Health to measure and manage their AI-related cybersecurity, compliance, and privacy risk. Recently I had the pleasure of speaking with their Chief Security and Privacy Officer, Steve Dufour, and Vice President of Engineering, Mark Black...
1) Early-stage AI startups often grapple with customer security reviews, making certifications like SOC 2 or ISO 27001 essential. However, ISO 42001 might be more suitable for AI-focused companies due to its comprehensive coverage.
2) Larger corporations using AI to manage sensitive data face scrutiny and criticism. These companies can validate their AI practices through ISO 42001, offering a certified risk management system that reassures stakeholders.
3) In heavily-regulated sectors like h...
Here are the top 3 things I'm seeing:
1️⃣ Auditors don’t (yet) have strong opinions on how to deploy AI securely
2️⃣ Enforcement is here, just not evenly distributed
3️⃣ Integrating AI-specific requirements with existing security, privacy, and compliance ones isn’t going to be easy
Want the full post? Check out the Deploy Securely blog: https://blog.stackaware.com/p/ai-governance-compliance-auditors-enforcement
Someone asked me what the unintended training and data retention risk with Meta's Code Llama is. My answer: the same as every other model you host and operate on your own. And, all other things being equal, it's lower than that of anything operating -as-a-Service (-aaS) like ChatGPT or Claude. Check out this video for a deeper dive, or read the full post on Deploy Securely: https://blog.stackaware.com/p/code-llama-self-hosted-model-unintended-training Want more AI security resources? Chec...
So you have your AI policy in place and are carefully controlling access to new apps as they launch, but then... ...you realize your already-approved tools are themselves starting to leverage 4th party AI vendors. Welcome to the modern digital economy. Things are complex and getting even more so. That's why you need to incorporate 4th party risk into your security policies, procedures, and overall AI governance program. Check out the full post with the Asana and Databricks examples I men...
I’m worried about data leakage from LLMs, but probably not why you think. While unintended training is a real risk that can’t be ignored, something else is going to be a much more serious problem: sensitive data generation (SDG). A recent paper (https://arxiv.org/pdf/2310.07298v1.pdf) shows how LLMs can infer huge amounts of personal information from seemingly innocuous comments on Reddit. And this phenomenon will have huge impacts for:
- Material nonpublic information
- Executive moves
- ...
What does "security" even mean with AI? You'll need to define things like:
BUSINESS REQUIREMENTS
- What type of output is expected?
- What format should it be?
- What is the use case?
SECURITY REQUIREMENTS
- Who is allowed to see which outputs?
- Under which conditions?
Having these things spelled out is a hard requirement before you can start talking about the risk of a given AI model. Continuing the build-out of the Artificial Intelligence Risk Scoring System (AIRSS), I tackle these ...
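To make this concrete, here's a minimal sketch of how those business and security requirements could be captured as data before any risk scoring happens. Every name and field here is a hypothetical illustration, not part of AIRSS itself:

```python
# Hypothetical sketch: capturing the business and security requirements
# above as structured data, so they can be checked before risk scoring.
# All class and field names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class BusinessRequirements:
    use_case: str       # what the model is for
    output_type: str    # e.g. "free text", "classification label"
    output_format: str  # e.g. "JSON", "plain text"


@dataclass
class SecurityRequirements:
    # Maps an output category to the roles allowed to see it...
    allowed_viewers: dict[str, list[str]] = field(default_factory=dict)
    # ...and the conditions under which they may see it
    # (e.g. "only from the corporate network").
    conditions: list[str] = field(default_factory=list)


@dataclass
class AIModelProfile:
    name: str
    business: BusinessRequirements
    security: SecurityRequirements

    def is_scoreable(self) -> bool:
        """Risk only makes sense once requirements are spelled out."""
        return bool(self.business.use_case and self.security.allowed_viewers)
```

With something like this in place, a governance team can refuse to score (or approve) any model whose profile comes back with is_scoreable() returning False.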
AI cyber risk management needs a new paradigm. Logging CVEs and using CVSS just does not make sense for AI models, and won't cut it going forward. That's why I launched the Artificial Intelligence Risk Scoring System (AIRSS): a quantitative approach to measuring cybersecurity risk from artificial intelligence systems. I am building it in public to help refine and improve the approach. Check out the first post in a series where I lay out my methodology: https://blog.stackaware.com/p/artifi...
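For readers new to quantitative risk, here's a minimal sketch of the kind of calculation this approach enables, using the generic annualized-loss formulation (probability times impact). The numbers and function name are illustrative assumptions, not the actual AIRSS formula:

```python
# Minimal sketch of quantitative cyber risk: annualized loss expectancy
# (ALE) as the product of event probability and expected impact.
# A generic illustration, not the AIRSS methodology itself.

def annualized_loss_expectancy(
    annual_event_probability: float,  # chance of at least one incident per year
    expected_loss_per_event: float,   # dollar impact if the incident occurs
) -> float:
    """Return the expected annual loss in dollars for one risk scenario."""
    return annual_event_probability * expected_loss_per_event


# Example: a 10% yearly chance of an AI model leaking sensitive data,
# with an estimated $500,000 impact per incident.
risk = annualized_loss_expectancy(0.10, 500_000)
print(f"Expected annual loss: ${risk:,.0f}")  # Expected annual loss: $50,000
```

Unlike a CVSS severity label, a dollar figure like this can be compared directly against the cost of a mitigation.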



