Human-Centered Security
Author: Voice+Code
© 2020 Voice+Code
Description
Cybersecurity is complex. Its user experience doesn’t have to be. Heidi Trost interviews information security experts about how we can make it easier for people—and their organizations—to stay secure.
59 Episodes
In this episode, Mike Kosak explains what threat intelligence really is (Mike’s former boss said you have to “rub some thinking on it”), how to define priority intelligence requirements (PIRs), how to threat model, where to find threat intel, and how to keep it actionable with tight feedback loops—not panic.

Key takeaways:
- Threat intel ≠ data. It’s analyzed information focused “walls-out” (what’s outside your org), then shared clearly so people can act.
- Start with PIRs. Ask: What are we protecting? What is most valuable to our company? What might threat actors want? How do they operate? What do we need to know to defend? Do this with a broad set of stakeholders, not just the security team. (See the sketch after these notes.)
- Communicate clearly and with context. Intelligence is only valuable if it’s shared in a way others can understand and act on. Avoid overwhelming people with raw data or inducing panic—provide actionable insights that are right-sized for the audience. Mike’s advice: “As a threat intelligence analyst, if you’re doing your job right, when somebody hears from you they know they need to act on it. You don’t want to be the chicken little where you make everybody freak out about everything.”
- Start small and iterate. Even if you’re a one-person team, you can make a big impact. Use free resources (like MITRE ATT&CK, open-source feeds, or even vendor reports), summarize what’s relevant, and push that out. Then refine based on feedback—treat it as a continuous cycle, not a one-and-done project. Mike admits, “I always say it’s like painting the Golden Gate Bridge. As soon as you get done, you gotta start back at the other end. That’s basically what it is.”

Mike Kosak is the Senior Principal Intelligence Analyst at LastPass. Mike references a series of articles he wrote, including “Setting Up a Threat Intelligence Program From Scratch”: https://blog.lastpass.com/posts/setting-up-a-threat-intelligence-program-from-scratch-in-plain-language
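To make PIRs concrete, here is a minimal sketch (my illustration, not code from Mike’s articles) of how a small team might record its requirements as structured data; the field names and example values are hypothetical.

import secrets  # not needed here; see the password sketch later in these notes
from dataclasses import dataclass, field

@dataclass
class PIR:
    """One priority intelligence requirement, built from the questions above."""
    question: str                 # what do we need to know to defend?
    protects: list[str]           # what are we protecting, and what is most valuable?
    stakeholders: list[str]       # who needs the answer, beyond the security team?
    sources: list[str] = field(default_factory=list)  # e.g., MITRE ATT&CK, vendor reports
    review_cadence_days: int = 90  # revisit on a cycle -- "painting the Golden Gate Bridge"

pirs = [
    PIR(
        question="Which threat actors target our sector, and how do they operate?",
        protects=["customer credentials", "source code"],
        stakeholders=["SOC", "IT operations", "leadership"],
        sources=["MITRE ATT&CK", "open-source feeds"],
    ),
]

Even a flat list like this keeps the feedback loop honest: each review cycle, ask whether the answers are still actionable, and retire or rewrite PIRs that aren’t.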
You click on a link in an email—as one does. Suddenly you see a message from your organization: “You’ve been phished! Now you need some training!” What do you do next? If you’re like most busy humans, you skip it and move on.

Researcher Ariana Mirian (and co-authors Grant Ho, Elisa Luo, Khang Tong, Euyhyun Lee, Lin Liu, Christopher A. Longhurst, Christian Dameff, Stefan Savage, and Geoffrey M. Voelker) uncovered similar results in their study “Understanding the Efficacy of Phishing Training in Practice.” The solution? Ariana suggests focusing on a more effective fix: designing safer systems.

In the episode we talk about:
- Annual cybersecurity awareness training doesn’t reduce the likelihood of clicking on phishing links, even if completed recently. The study notes, “Employees who recently completed such training, which has significant focus on social engineering and phishing defenses, have similar phishing failure rates compared to other employees who completed awareness training many months ago.”
- Phishing simulations combined with training (where companies send fake phishing emails to employees and route those who click into training) had little impact on whether participants would click phishing links in the future. Ariana was hopeful about interactive training but found that too few participants engaged with it to draw meaningful conclusions.
- The type of phishing lure (e.g., password reset vs. vacation policy change) influenced whether users clicked. Ariana warned that certain lures could artificially lower click rates.
- Ultimately, Ariana suggests focusing on designing safer systems, where the burden is taken off end users. She recommends two-factor authentication, phishing-resistant hardware keys (like YubiKeys), and blocking phishing emails before they reach users.

This quote from the study stood out to me: “Our results suggest that organizations like ours should not expect training, as commonly deployed today, to substantially protect against phishing attacks—the magnitude of protection afforded is simply too small and employees remain susceptible even after repeated training.”

This highlights the need for safer system design, especially for critical services like email, which—and this is important—inherently relies on users clicking links.

Ariana Mirian is a senior security researcher at Censys. She completed her PhD at UC San Diego and co-authored the paper “Understanding the Efficacy of Phishing Training in Practice.”

G. Ho et al., “Understanding the Efficacy of Phishing Training in Practice,” in 2025 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, 2025, pp. 37-54, doi: 10.1109/SP61157.2025.00076.
In this episode, I speak with three guests from diverse backgrounds who share a common goal: building trust in human-AI partnerships in security. We originally came together for a panel at the Institute of Electrical and Electronics Engineers (IEEE) Conference on AI in May 2025, and this episode recaps that discussion.

Key takeaways:
- Security practitioners tend to be natural-born skeptics (can you blame them?!). They struggle to trust and adopt AI-powered security products, especially in higher-risk scenarios with overly simplified decision-making processes.
- AI can be a tool for threat actors and a threat vector itself, and its non-deterministic nature makes it unpredictable and vulnerable to manipulation.
- All AI models are biased, but not all bias is negative. Recognized and carefully managed bias can provide actionable insights. Purposefully biased (opinionated) models should be transparent.
- Clearer standards and expectations are needed for “human-in-the-loop” and human oversight. What does the human actually do, are they qualified, and do they have the right experience and information?
- What happens when today’s graduates are tomorrow’s security practitioners? On one end of the spectrum we have a lot of skepticism, on the other end not enough. We talk about over-reliance on AI, de-skilling, and loss of situational awareness.

Dr. Margaret Cunningham is the Technical Director, Security & AI Strategy at Darktrace. Margaret was formerly Principal Product Manager at Forcepoint and Senior Staff Behavioral Engineer at Robinhood.

Dr. Divya Ramjee is an Assistant Professor at Rochester Institute of Technology (RIT). She also leads RIT’s Technology and Policy Lab, analyzing security, AI policy, and privacy challenges. She previously held senior roles across various US government agencies.

Dr. Matthew Canham is the Executive Director of the Cognitive Security Institute. He is a former FBI Supervisory Special Agent with over twenty years of research in cognitive security.
You’re a founder with a great cybersecurity product—but no one knows or cares. Or you’re a marketer drowning in jargon (hey, customers hate acronyms, too), trying to figure out what works and what doesn’t. Gianna Whitver, co-founder of the Cybersecurity Marketing Society, breaks down what the cybersecurity industry is getting wrong—and right—about marketing.

In this episode, we talk about:
- Cyber marketing is hard (but you knew that already). It requires deep product knowledge, empathy for stressed buyers, and clear, no-FUD messaging.
- Building authentic, value-driven communities leads to stronger cybersecurity marketing impact.
- Don’t copy the marketing strategies of big enterprises. Instead, focus on clarity, founder stories, and product-market fit.
- Founder-led marketing works. Early-stage founders can break through noise by sharing personal stories.
- Think twice before listening to the advice of “influencer” marketers. This advice is often overly generic. Or you’re following the advice of marketers marketing to marketers (try saying that ten times fast). In other words, their advice is probably not going to apply to cybersecurity.

Gianna Whitver is the co-founder and CEO of the Cybersecurity Marketing Society, a community for marketers in cybersecurity to connect and share insights. She is also co-host of the Breaking Through in Cybersecurity Marketing podcast and founder of LeaseHoney, a place for beekeepers to find land.
Users, threat actors, and the system design all influence—and are influenced by—one another. To design safer systems, we first need to understand the players who operate within those systems. Kelly Shortridge and Josiah Dykstra exemplify this human-centered approach in their work.

In this episode we talk about:
- The vital role of human factors in cyber-resilience—how Josiah and Kelly apply a behavioral-economics mindset every day to design safer, more adaptable systems.
- Key cognitive biases that undermine incident response (like action bias and opportunity costs) and simple heuristics to counter them.
- The “sludge” strategy: deliberately introducing friction into attacker workflows to increase time, effort, and financial costs—as Kelly says, “disrupt their economics.” (A minimal sketch follows these notes.)
- Why moving from a security culture of shame and blame to one of open learning and continuous improvement is essential for true cybersecurity resilience.

Kelly Shortridge is VP, Security Products at Fastly, formerly VP of Product Management and Product Strategy at Capsule8. She is the author of Security Chaos Engineering: Sustaining Resilience in Software and Systems.

Josiah Dykstra is the owner of Designer Security, a human-centered security advocate, cybersecurity researcher, and former Director of Strategic Initiatives at Trail of Bits. He also worked at the NSA as Technical Director, Critical Networks and Systems. Josiah is the author of Cybersecurity Myths and Misconceptions: Avoiding the Hazards and Pitfalls that Derail Us.

During this episode, we reference:
- Josiah Dykstra, Kelly Shortridge, Jamie Met, and Douglas Hough, “Sludge for Good: Slowing and Imposing Costs on Cyber Attackers,” arXiv preprint arXiv:2211.16626 (2022).
- Josiah Dykstra, Kelly Shortridge, Jamie Met, and Douglas Hough, “Opportunity Cost of Action Bias in Cybersecurity Incident Response,” Proceedings of the Human Factors and Ergonomics Society Annual Meeting 66, no. 1 (2022): 1116-1120.
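One way to picture the “sludge” strategy: here is a minimal sketch (my illustration, not code from the paper) of imposing escalating time costs on repeated failed logins. The verify callback and the in-memory counter are assumptions for the example; a real deployment would use a shared store.

import time

# Consecutive failures per account (in-memory for illustration only).
failed_attempts: dict[str, int] = {}

def delay_for(username: str) -> float:
    """Exponential delay after failures: 0s, 1s, 2s, 4s, ... capped at 30s."""
    n = failed_attempts.get(username, 0)
    return min(2.0 ** (n - 1), 30.0) if n else 0.0

def check_login(username: str, password: str, verify) -> bool:
    """Verify credentials, sleeping first to raise the attacker's time cost."""
    time.sleep(delay_for(username))
    if verify(username, password):
        failed_attempts.pop(username, None)  # legitimate users reset on success
        return True
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    return False

The point, in Kelly’s framing, is economics: a few seconds of delay is negligible for a legitimate user who mistypes once, but it wrecks the throughput of credential-stuffing at scale.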
Imagine a world where product teams collaborate with security teams. Where product designers can shadow their security peers. A place where security team members believe communication is one of the most important skillsets they have. These are key attributes of human-centered security—the type of dynamics Jordan Girman and Mike Kosak are fostering at LastPass.

In this episode, we talk about:
- What cross-disciplinary collaboration looks like at LastPass (for example, a product designer is shadowing the security team).
- A set of principles for designing for usable security and privacy.
- Why intentional friction might be counterintuitive to designers but, used carefully, is critical to designing for security.
- When it comes to improving security outcomes, the words you use matter. Mike explains how the LastPass Threat Intelligence team thinks about communicating what they learn to a variety of audiences.
- How to build a threat intelligence program within your organization, even if you have limited resources.

Jordan Girman is the VP of User Experience at LastPass. Mike Kosak is the Senior Principal Intelligence Analyst at LastPass. Mike references a series of articles he wrote, including “Setting Up a Threat Intelligence Program From Scratch.”
Where are security tools failing security teams? What are security teams looking for when they visit a security vendor’s marketing website? Paul Robinson, security expert and founder of Tempus Network, says, “Over-promising and under-delivering is a major factor in these tools. The tool can look great in a demo—proof of concepts are great, but often the security vendor is just putting their best foot forward. It’s not really the reality of the situation.”

Paul’s advice for how security vendors can do better:
- Start by admitting security isn’t just a switch you flip—it’s a journey. Security teams aren’t fooled by glitz and glamour on your marketing website. They want to see how you addressed real problems.
- Incredible customer service can make a small, scrappy cybersecurity product stand out from larger, slower-moving vendors.
- Cybersecurity vendors need to get onboarding right (it’s a make-or-break aspect of the user experience). There are more variables than you think—not only technology but also getting buy-in from employees, leadership, and other stakeholders.
- Think about the user experience not only of the person using the security product, but of the people at the organization who will be impacted by the product.

Looking for a cybersecurity-related movie that is just a tad too plausible? Paul recommends Leave the World Behind on Netflix.
When we collaborate with people, we build trust over time. In many ways, this relationship building is similar to how we work with tools that leverage AI. As usable security and privacy researcher Neele Roch found, “on the one hand, when you ask the [security] experts directly, they are very rational and they explain that AI is a tool. AI is based on algorithms and it’s mathematical. And while that is true, when you ask them about how they’re building trust or how they’re granting autonomy and how that changes over time, they have this really strong anthropomorphization of AI. They describe the trust building relationship as if it were, for example, a new employee.”

Neele is a doctoral student at the Professorship for Security, Privacy and Society at ETH Zurich. Neele (and co-authors Hannah Sievers, Lorin Schöni, and Verena Zimmermann) recently published the paper “Navigating Autonomy: Unveiling Security Experts’ Perspectives on Augmented Intelligence in Cybersecurity,” presented at the 2024 Symposium on Usable Privacy and Security (SOUPS).

In this episode, we talk to Neele about:
- How security experts’ risk–benefit assessments drive the level of AI autonomy they’re comfortable with.
- How experts initially view AI: the tension between AI-as-tool and AI-as-“teammate.”
- The importance of recalibrating trust after AI errors—and how good system design can help users recover from errors without losing trust in the system.
- Ensuring AI-driven cybersecurity tools provide just the right amount of transparency and control.
- Why enabling security practitioners to identify, correct, and learn from AI errors is critical for sustained engagement.

Roch, Neele, Hannah Sievers, Lorin Schöni, and Verena Zimmermann. “Navigating Autonomy: Unveiling Security Experts’ Perspectives on Augmented Intelligence in Cybersecurity.” In Twentieth Symposium on Usable Privacy and Security (SOUPS 2024), pp. 41-60. 2024.
In this episode, Heidi gets a taste of her own medicine and is interviewed by co-host John Robertson about her newly released book Human-Centered Security: How to Design Systems That Are Both Safe and Usable. We talk about:
- Why Heidi’s experience as a UX researcher prompted her to write Human-Centered Security.
- Places in the user journey where security impacts users the most.
- Why cross-disciplinary collaboration is important—find your security UX allies (people in security, legal, privacy, engineering, and product management, to name a few).
- Practical security UX tips like secure by default, guiding the user along the safe path, and being really careful about the words you use.
- Why technical users—IT admins, engineers, security analysts—are users, too, and why it’s so important to thoughtfully design the security user experience for them. (Spoiler: they help keep the rest of us safe!)
The cybersecurity industry often fixates on “behavior change,” expecting users to take on unrealistic tasks instead of designing safer, smarter systems. Matt Wallaert (founder of BeSci.io and author of Start at the End: How to Build Products That Create Change) explains that behavioral science isn’t about forcing behavior change. Instead, it’s about understanding people so a thoughtfully designed system can influence more secure outcomes.

Whether you’re a UX designer, a security engineer, or a CISO, you influence security behaviors. Here’s how you can move toward more secure outcomes:
- Stay ahead of threat actors. Cybercriminals use behavioral science to their advantage. People designing the security user experience must not only catch up but outpace them.
- Define clear outcomes. Don’t just say “we want users to be secure.” Know exactly what behaviors you want and why. Vague goals lead to vague results. (As Matt explains, saying things like “I want people to be more secure” isn’t helpful. In fact, many people don’t know what “more secure” means in the context of their product or organization.)
- Ask better questions. Use tools like the “sufficiency test.” For example, sure, it might be nice if users created complex passwords—but users don’t necessarily have to be the ones doing it. Why can’t the system create a complex password for them, as password managers do? (See the sketch after these notes.)
- Understand promoting and inhibiting pressures. These concepts will help you design systems that are more resilient because they are built with people in mind. There are reasons people do and do not do things—when you understand why, you can develop systems that will be more effective in encouraging the behaviors you want.

Security practitioners: tired of being perceived as the “department of no”? Matt explains how behavioral science can help you better collaborate with cross-disciplinary teams.

Bonus: UX designers, after this episode you may never create another persona.
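Matt’s “sufficiency test” point (let the system do the work instead of asking the user to) is what password managers already implement. Here is a minimal sketch of the idea using only Python’s standard library; the length and character set are arbitrary illustrative choices, not a recommendation from the episode.

import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password so the user never has to invent one."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run

The design choice matters more than the code: once the system generates and stores the secret, “create a complex password” stops being a user behavior you have to change at all.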
“Technical people need to better understand the laws and regulations, and lawyers need to better understand the technology and processes in place. When that happens, when those worlds come together, that’s where you can meaningfully make things happen.” -Justine Phillips, Partner at Baker McKenzie

In this episode, we talk about:
- Essential questions product teams should ask legal experts when integrating AI into new products and features.
- In particular, why it’s important for designers and engineers to question the source of the data they are using for AI-powered products and features.
- The need to anticipate international security and privacy regulations, which are constantly changing, including emerging regulations that could impact companies developing IoT devices.

Justine Phillips is a Partner at Baker McKenzie, where she is co-chair of data+cyber for the Americas. She is the author of Data Privacy Program Guide: How to Build a Privacy Program That Inspires Trust.
What do CISOs have to say about the security tools their teams use? “When we introduce a level of complexity in the system, it undermines security. Every moment wasted trying to use a tool effectively benefits the adversary.” -Matt Stamper

In this episode, we talk to cybersecurity leaders Bill Bonney, Gary Hayslip, and Matt Stamper about:
- The ever-evolving role of the CISO and what CISOs care about most.
- What product teams designing security software need to understand:
  - Security tools need to operate across varied ecosystems (which means your product team needs to understand those ecosystems).
  - Complexity is the enemy of security. Yes, UX matters.
  - Context-switching means security teams waste time. Instead, security tools need to present the right information at the right time.
- Why CISOs are excited to leverage AI in security tools—and what concerns them the most.

Bill Bonney, Gary Hayslip, and Matt Stamper are seasoned CISOs and cybersecurity leaders. They are co-founders of the CISO Desk Reference Guide—a series of books covering topics such as security policy, third-party risk, privacy, and incident response—which provides actionable insights for security leaders.
In this episode, we talk about:
- Security tools don’t get a free pass when it comes to involving end users in the design process. People studying and building ML-based security tools make a lot of assumptions. Instead of wasting time on assumptions, why not learn from security practitioners directly?
- Businesses (and academia) are investing a great deal in building ML-based security tools. But are those tools actually useful? Are they introducing problems you didn’t anticipate? And even if they are useful, how do you know security practitioners will adopt them?
- Why are adversarial machine learning defenses outlined in academic research not being put into practice? Jaron outlines three significant roadblocks: first, there are barriers to developers being aware of these defenses in the first place; second, developers need to understand how the threats impact their systems; and third, they need to know how to effectively implement the defenses (and, importantly, be incentivized to do so).

Jaron Mink is an Assistant Professor in the School of Computing and Augmented Intelligence at Arizona State University, focused on the intersection of usable security, machine learning, and system security.

In this episode, we highlight two of Jaron’s papers:
- “‘Everybody’s Got ML, Tell Me What Else Do You Have’: Practitioners’ Perception of ML-Based Security Tools and Explanations”
- “‘Security is not my field, I’m a stats guy’: A Qualitative Root Cause Analysis of Barriers to Adversarial Machine Learning Defenses in Industry”
In this episode, we talk about:
- The role misaligned incentives play in security behaviors.
- How Serge and his team approach security-focused UX research: looking upstream at the security decisions made by software engineers and, in turn, the situations they are often placed in due to resource constraints and competing priorities at their organizations.
- Learning from other industries with highly skilled professionals (shout-out to the humble checklist!).
- How regulations and policy changes will likely place greater liability on the organizations shipping software.

Serge Egelman is the Founder and Chief Scientist at AppCensus and Research Director at the International Computer Science Institute (ICSI). He’s written countless research papers on usable security and privacy. Most recently, his research centers on improving the user experience for the people responsible for safeguarding their customers’ data (such as software engineers).
Shante Perrin, a cybersecurity leader, and her team use cybersecurity software not only to detect and respond to cybersecurity threats but also, as Shante describes, to help paint a picture for their customers: “We like to build a timeline of events to build that picture, create that story so we can deliver it to the customer and explain why we felt it is suspicious. In other words, why are we bothering you about this?”

In this episode, we talk about:
- Building stories from data: analysts must translate technical information into clear, understandable narratives for customers.
- If people designing cybersecurity software can design better, more effective experiences for analysts, analysts can do a better job of communicating these narratives to their customers.
- How security analysts at different levels perceive and handle threats differently—and how that changes what they need or expect from cybersecurity software.
- How thinking like an attacker can help security analysts—but only if the tools they use provide them with the right information at the right time.

Shante Perrin is a cybersecurity leader and is currently the director of a managed services team. She led a cybersecurity team for a Fortune 100 company as an MSSP and has been a security analyst and security operations center (SOC) lead.
In this episode, we talk about:
- The need for human-centered security: for security measures to be effective, they must center around people, making usability as crucial as technology.
- The gap between research and practice, and the need to bring cybersecurity research into real-world application. Human-centered security research can’t possibly be effective if no one knows about it or finds it challenging to implement.
- The importance of collaboration, advocating for more shared spaces where researchers and practitioners can come together to address pressing cybersecurity challenges.

Julie Haney is a Computer Scientist, Human-Centered Security Researcher, and program lead at NIST (National Institute of Standards and Technology). She was formerly a Computer Scientist at the United States Department of Defense.

In the episode we refer to two of Julie’s publications:
- “From Ivory Tower to Real World: Building Bridges Between Research and Practice in Human-Centered Cybersecurity”
- “Towards Bridging the Research-Practice Gap: Understanding Researcher-Practitioner Interactions and Challenges in Human-Centered Cybersecurity”
Security analysts respond to security detections and alerts. As part of this, they have to sift through a mountain of data, and they have to do it fast. Not in hours, not in days. In minutes.

Tom Harrison, security operations manager at Secureworks, explains it perfectly: “We have a time crunch and it’s exacerbated by the other big issue security analysts have: we have an absolute ton of data that we have to sift through.”

In this episode:
- Tom explains that security analysts are forced to go back to a pile of data with each subsequent question in their workflow. That’s a huge waste of time. And a terrible user experience. Tom says, “It would lead to better accuracy, faster triage, and a better user experience if you can just take me directly to the answer or at the very least a subsection that has the answer I’m looking for.”
- What does this mean for you as a UX designer designing security products? You need a deep understanding of security analyst workflows to help them identify and respond to attacks as quickly as possible. That way, you can design security products that support users who are under intense pressure to do things quickly. Tom describes how the UX can “guide or complement the workflow.”
- Tom talks about what gets him excited about integrating AI into security analyst workflows—and what has him worried, as well.

Tom Harrison is a Security Operations Manager at Secureworks. We dubbed Tom an “ideas machine” and a fierce advocate for the security analyst user experience. In fact, Tom is conducting UX research in the field better than most UX researchers. He’s a passionate teacher and shares his knowledge and resources in a free security reference guide.
“Even though usability and security tradeoffs will always be with us, we can get much smarter. Some of the techniques are really simple. For one, write everything down a user needs to do in order to use your app securely. Yeah, keep writing.”

In this episode, we talk about:
- What is threat modeling and why should product teams and UX designers care about it? (Also check out Adam’s first episode on Human-Centered Security.)
- Focus on parts of the user journey where you might gain or lose customers: what tradeoffs between usability and security are you making here?
- Involve a cross-disciplinary team from the very beginning. This is critical: “How do we get focused on the parts of the problem that matter so we don’t spend forever on the wrong stuff?”

Adam Shostack is an expert on threat modeling, having worked at Microsoft and currently running the security consultancy Shostack + Associates. He is the author of The New School of Information Security, Threat Modeling: Designing for Security, and Threats: What Every Engineer Should Learn From Star Wars. Adam’s YouTube channel has entertaining videos that are also excellent resources for learning about threat modeling.
“UX design can enhance the overall performance, adoption, and impact in cybersecurity tools that leverage AI, making the tools more accessible to a broader range of users, including those who don’t have deep technical or security knowledge.”

In this episode, Siddharth Hirwani and John Robertson talk about:
- The pressures and challenges security analysts face and how AI can help.
- Moving beyond AI hype and focusing on integrating AI in a way that genuinely addresses security analysts’ needs.
- How UX design can foster trust and adoption of AI tools while still encouraging analysts to verify AI outputs. John and Siddharth highlight problems like over-reliance and bias, and how UX can be leveraged to address these concerns.

Siddharth Hirwani is a Senior Principal Product Designer interested in exploring the critical intersection of user experience and cybersecurity.

John Robertson is a researcher interested in the experience of technical users, especially those in cybersecurity. Recently his focus has been understanding the workflows of cybersecurity analysts in security operations centers.

Siddharth and John will be presenting their paper “Cybersecurity Analyst’s Perception of AI Security Tools and Practical Implications” at USENIX SOUPS (Symposium on Usable Privacy and Security) in August 2024.
“People try to talk about the technical user experience at too high of a level. You talk about alert fatigue and you kind of understand what alert fatigue is just by the name. Yeah, there’s a lot of alerts. But watching it in action is different.”

In this episode, Heidi interviews John about what he’s learned about designing for security analysts. We talk about:
- The importance of understanding user workflows. “Alert fatigue” is just a saying until you actually observe it in action.
- While trust is hard to measure, it’s critical for improving the security user experience.
- Practical tips on how to promote cross-disciplinary collaboration.

John Robertson is a researcher interested in the experience of technical users, especially those in cybersecurity. Recently his focus has been understanding the workflows of cybersecurity analysts in security operations centers.


