The Shifting Privacy Left Podcast

Author: Debra J. Farber (Shifting Privacy Left)


Description

Shifting Privacy Left features lively discussions on the need for organizations to embed privacy by design into the UX/UI, architecture, engineering / DevOps and the overall product development processes BEFORE code or products are ever shipped. Each Tuesday, we publish a new episode that features interviews with privacy engineers, technologists, researchers, ethicists, innovators, market makers, and industry thought leaders. We dive deeply into this subject and unpack the exciting elements of emerging technologies and tech stacks that are driving privacy innovation; strategies and tactics that win trust; privacy pitfalls to avoid; privacy tech issues ripped from the headlines; and other juicy topics of interest. 

46 Episodes
My guests this week are Yusra Ahmad, CEO of Acuity Data, and Luke Beckley, Data Protection Officer and Privacy Governance Manager at Correla, who work with The RED (Real Estate Data) Foundation, a sector-wide alliance that enables the real estate sector to benefit from an increased use of data, while avoiding some of the risks that this presents, and better serving society.

We discuss the current drivers for change within the real estate industry and the complexities of an industry that utilizes incredible amounts of data. You'll learn the types of data protection, privacy, and ethical challenges The RED Foundation seeks to solve, especially now with the advent of new technologies. Yusra and Luke discuss some ethical questions the real estate sector faces as it considers leveraging new technology. They come to the conversation from the knowledgeable perspectives of The RED Foundation's Chair of the Data Ethics Steering Group and Chair of the Engagement and Awareness Group, respectively.

Topics Covered:
- Introducing Luke Beckley (DPO, Privacy & Governance Manager at Correla) and Yusra Ahmad (CEO of Acuity Data), here to talk about their data ethics work at The RED Foundation
- How the scope, sophistication, & connectivity of data is increasing exponentially in the real estate industry
- Why ESG, workplace experience, & smart city development are drivers of data collection; and the need for data ethics reform within the real estate industry
- The types of personal data real estate companies collect & use across stakeholders: owners, operators, occupiers, employees, residents, etc.
- Current approaches that retailers take to protect location data, when collected; and why it's important to simplify language, increase transparency, & make consumers aware of tracking in in-store WiFi privacy notices
- Overview of The RED Foundation & its mission: to ensure the real estate sector benefits from an increased use of data, avoids some of the risks that this presents, and is better placed to serve society
- Some ethical questions on which the real estate sector still needs to align, along with examples
- Why there's a need to educate the real estate industry on privacy-enhancing tech
- The need for privacy engineers and PETs in real estate; and why this will build trust with the different stakeholders
- Guidance for privacy engineers who want to work in the real estate sector
- Ways to collaborate with The RED Foundation to standardize data ethics practices across the real estate industry
- Why there's great opportunity to embed privacy into real estate; and why its current challenges are really obstacles, rather than blockers

Resources Mentioned:
- Check out The RED Foundation

Guest Info:
- Follow Yusra on LinkedIn
- Follow Luke on LinkedIn
This week, I welcome Jared Coseglia, co-founder and CEO at TRU Staffing Partners, a contract staffing & executive placement search firm that represents talent across 3 core industry verticals: data privacy, eDiscovery, & cybersecurity. We discuss the current and future state of the contracting market for privacy engineering roles and the market drivers that affect hiring. You'll learn about hiring trends and the allure of 'part-time impact,' 'part-time perpetual,' and 'secondee' contract work. Jared illustrates the challenges that hiring managers face with a 'do-it-yourself' staffing process, and he shares his predictions about the job market for privacy engineers over the next 2 years. Jared comes to the conversation with a lot of data that supports his predictions and sage advice for privacy engineering hiring managers and job seekers.

Topics Covered:
- How the privacy contracting market compares and contrasts to the full-time hiring market; and why we currently see a steep rise in privacy contracting
- Why full-time hiring for privacy engineers won't likely rebound until Q4 2024; and how hiring for privacy typically follows a 2-year cycle
- Why companies & employees benefit from fractional contracts; and the differences between contracting types: 'Part-Time - Impact,' 'Part-Time - Perpetual,' and 'Secondee'
- How hiring managers typically find privacy engineering candidates
- Why it's far more difficult to hire privacy engineers for contracts; and how a staffing partner like TRU can supercharge your hiring efforts and avoid the pitfalls of a 'do-it-yourself' approach
- How contract work benefits privacy engineers financially, while also providing them with project diversity
- How salaries are calculated for privacy engineers; and the driving forces behind pay discrepancies across privacy roles
- Jared's advice to 2024 job seekers, based on his market predictions; and why privacy contracting increases 'speed to hire' compared to hiring FTEs
- Why privacy engineers can earn more money by changing jobs in 2024 than they could by seeking raises at their current companies; and discussion of 2024 salary ranges across industry segments
- Jared's advice on how privacy engineers can best position themselves to contract hiring managers in 2024
- Recommended resources for privacy engineering employers and job seekers

Resources Mentioned:
- Read: "State of the Privacy Job Market Q3 2023"
- Subscribe to TRU Insights

Guest Info:
- Connect with Jared on LinkedIn
- Learn more about TRU Staffing Partners
- Engineering Managers: Check out TRU Staffing's Data Privacy Staffing solutions
- PE Candidates: Apply to Open Privacy Positions
This week's guests are Mathew Mytka and Alja Isakovič, Co-Founders of Tethix, a company that builds products that embed ethics into the fabric of your organization. We discuss Mat and Alja's core mission to bring ethical tech to the world, and Tethix's services that work with your Agile development processes. You'll learn about Tethix's solution to address 'The Intent to Action Gap,' and what Elemental Ethics can provide organizations beyond other ethics frameworks. We discuss ways to become a proactive Responsible Firekeeper, rather than remaining a reactive Firefighter, and how ETHOS, Tethix's suite of apps, can help organizations embody and embed ethics into everyday practice.

Topics Covered:
- What inspired Mat & Alja to co-found Tethix, and the company's core mission
- What the 'Intent to Action Gap' is and how Tethix addresses it
- Overview of Tethix's Elemental Ethics framework; and how it empowers product development teams to close the 'Intent to Action Gap' and move orgs from a state of 'Agile Firefighting' to 'Responsible Firekeeping'
- Why Agile is an insufficient process for embedding ethics into software and product development; and how you can turn to Elemental Ethics and Responsible Firekeeping to embed 'Ethics-by-Design' into your Agile workflows
- The definition of 'Responsible Firekeeping' and its benefits; and how Responsible Firekeeping transitions Agile teams from a reactive posture to a proactive one
- Why you should choose Elemental Ethics over conventional ethics frameworks
- Tethix's suite of apps called ETHOS: the Ethical Tension and Health Operating System, which helps teams embed ethics into their collaboration tech stack (e.g., Jira, Slack, Figma, Zoom, etc.)
- How you can become a Responsible Firekeeper
- The level of effort required to implement Elemental Ethics & Responsible Firekeeping into product development, based on org size and level of maturity
- Alja's contribution to ResponsibleTech.Work, an open source Responsible Product Development Framework; core elements of the Framework; and why we need it
- Where to learn more about Responsible Firekeeping

Resources Mentioned:
- Read: "Day in the Life of a Responsible Firekeeper"
- Review the ResponsibleTech.Work Framework
- Subscribe to the Pathfinders Newmoonsletter

Guest Info:
- Connect with Mat on LinkedIn
- Connect with Alja on LinkedIn
- Check out Tethix's Website
This week's guest is Isabel Barberá, Co-founder, AI Advisor, and Privacy Engineer at Rhite, a consulting firm specializing in responsible and trustworthy AI and privacy engineering, and creator of the Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai) framework and card game. In our conversation, we discuss Isabel's work with privacy-by-design, privacy engineering, privacy threat modeling, and building trustworthy AI, as well as Rhite's forthcoming open-source self-assessment framework for AI maturity, SARAI®. As we wrap up the episode, Isabel shares details about PLOT4ai, her AI threat modeling framework and card game built on a library of threats for artificial intelligence.

Topics Covered:
- How Isabel became interested in privacy engineering, data protection, privacy by design, threat modeling, and trustworthy AI
- How companies are thinking (or not) about incorporating privacy-by-design strategies & tactics and privacy engineering approaches within their orgs today
- What steps can be taken so companies start investing in privacy engineering approaches; and whether AI has become a driver for such approaches
- Background on Isabel's company, Rhite, and its mission to build responsible solutions for society and its individuals using a technical mindset
- What 'Responsible & Trustworthy AI' means to Isabel
- The 5 core values that make up the acronym R-H-I-T-E, and why they're important for designing and building products & services
- Isabel's advice for organizations as they approach AI risk assessments, analysis, & remediation
- The steps orgs can take in order to build responsible AI products & services
- What Isabel hopes to accomplish through Rhite's new framework, SARAI® (for AI maturity), an open source AI self-assessment tool and framework and an extension of the Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai) framework (i.e., a library of AI risks)
- What motivated Isabel to focus on threat modeling for privacy
- How PLOT4ai builds on LINDDUN (which focuses on software development) and extends threat modeling to the AI lifecycle stages: Design, Input, Modeling, & Output
- How Isabel's experience with the LINDDUN Go card game inspired her to develop a PLOT4ai card game to make the framework more accessible to teams
- Isabel's call for collaborators to contribute to the PLOT4ai open source database of AI threats as the community grows

Resources Mentioned:
- Privacy Library Of Threats 4 Artificial Intelligence (PLOT4ai)
- PLOT4ai's GitHub Threat Repository
- "Threat Modeling Generative AI Systems with PLOT4ai"
- Self-Assessment for Responsible AI (SARAI®)
- LINDDUN Privacy Threat Model Framework
- "S2E19: Privacy Threat Modeling - Mitigating Privacy Threats in Software with Kim Wuyts (KU Leuven)"
- "Data Privacy: a runbook for engineers"

Guest Info:
- Isabel's LinkedIn Profile
- Rhite's Website
This week, I sat down with Vaibhav Antil ('Vee'), Co-founder & CEO at Privado, a privacy tech platform that leverages privacy code scanning & data mapping to bridge the privacy engineering gap. Vee shares his personal journey into privacy, where he started out in product management and saw the need for privacy automation in DevOps. We discuss obstacles created by the rapid pace of engineering teams and the lack of a shared vocabulary with Legal / GRC. You'll learn how code scanning enables privacy teams to move swiftly and avoid blocking engineering. We then discuss the future of privacy engineering, its growth trends, and the need for cross-team collaboration. We highlight the importance of making privacy-by-design programmatic and discuss ways to scale up privacy reviews without stifling product innovation.

Topics Covered:
- How Vee moved from Product Manager to co-founding Privado, and why he focused on bringing Privacy Code Scanning to market
- What it means to 'bridge the privacy engineering gap' and 3 reasons why Vee believes the gap exists
- How engineers can provide visibility into personal data collected and used by applications via Privacy Code Scans (see the illustrative sketch after these show notes)
- Why engineering teams should 'shift privacy left' into DevOps
- How a Privacy Code Scanner differs from traditional static code analysis tools in security
- How Privado's Privacy Code Scanning & Data Mapping capabilities (for the SDLC) differ from personal data discovery, correlation, & data mapping tools (for the data lifecycle)
- How Privacy Code Scanning helps engineering teams comply with new laws like Washington State's 'My Health My Data Act'
- A breakdown of Privado's FREE "Technical Privacy Masterclass"
- Exciting features on Privado's roadmap, which support its vision to be the platform for collaboration between privacy operations & engineering teams
- Privacy engineering trends and Vee's predictions for the next two years

Privado Resources Mentioned:
- Free Course: "Technical Privacy Masterclass" (led by Nishant Bhajaria)
- Guide: Introduction to Privacy Code Scanning
- Guide: Code Scanning Approach to Data Mapping
- Slack: Privado's Privacy Engineering Community
- Open Source Tool: Play Store Data Safety Report Builder

Guest Info:
- Connect with Vee on LinkedIn
- Check out Privado's website
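To make the idea of a privacy code scan concrete, here is a minimal, hypothetical sketch in Python. It merely greps source files for a few personal-data patterns; a real privacy code scanner (Privado's included) goes much further, modeling data flows and processing activities rather than matching regexes, so treat this purely as an illustration of surfacing personal-data usage directly from code.

```python
import re
from pathlib import Path

# A few illustrative personal-data patterns; real scanners model data flows,
# sinks, and processing activities rather than relying on regexes alone.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_file(path: Path) -> list[dict]:
    """Return one finding per pattern match, with file and line context."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for data_type, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append({"file": str(path), "line": lineno, "data_type": data_type})
    return findings

def scan_repo(root: str) -> list[dict]:
    """Walk a source tree and aggregate findings from every Python file."""
    findings = []
    for path in Path(root).rglob("*.py"):
        findings.extend(scan_file(path))
    return findings

if __name__ == "__main__":
    for f in scan_repo("."):
        print(f"{f['file']}:{f['line']} -> {f['data_type']}")
```

A report like this, generated on every pull request, is one way engineering teams get the early visibility into personal data that Vee describes, without waiting on a manual privacy review.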
This week's guest is Rebecca Balebako, Founder and Principal Consultant at Balebako Privacy Engineer, where she enables data-driven organizations to build the privacy features that their customers love. In our conversation, we discuss all things privacy red teaming, including: how to disambiguate adversarial privacy tests from other software development tests; the importance of privacy-by-infrastructure; why privacy maturity influences the benefits received from investing in privacy red teaming; and why any database that identifies vulnerable populations should consider adversarial privacy as a form of protection.

We also discuss the 23andMe security incident that took place in October 2023 and affected over 1 million Ashkenazi Jews (a genealogical ethnic group). Rebecca brings to light how Privacy Red Teaming and privacy threat modeling may have prevented this incident. As we wrap up the episode, Rebecca gives her advice to Engineering Managers looking to set up a Privacy Red Team and shares key resources.

Topics Covered:
- How Rebecca switched from software development to a focus on privacy & adversarial privacy testing
- What motivated Debra to shift left from her legal training to privacy engineering
- What 'adversarial privacy tests' are; why they're important; and how they differ from other software development tests
- Defining 'Privacy Red Teams' (a type of adversarial privacy test) & what differentiates them from 'Security Red Teams'
- Why Privacy Red Teams are best for orgs with mature privacy programs
- The 3 steps for conducting a Privacy Red Team attack
- How a Red Team differs from other privacy tests like conducting a vulnerability analysis or managing a bug bounty program
- How 23andMe's recent data leak, affecting 1 million Ashkenazi Jews, may have been avoided via Privacy Red Team testing
- How BigTech companies are staffing up their Privacy Red Teams
- Frugal ways for small and mid-sized organizations to approach adversarial privacy testing
- The future of Privacy Red Teaming and whether we should upskill security engineers or train privacy engineers on adversarial testing
- Advice for Engineering Managers who seek to set up a Privacy Red Team for the first time
- Rebecca's Red Teaming resources for the audience

Resources Mentioned:
- Listen to: "S1E7: Privacy Engineers: The Next Generation" with Lorrie Cranor (CMU)
- Review Rebecca's Red Teaming Resources

Guest Info:
- Connect with Rebecca on LinkedIn
- Visit Balebako Privacy Engineer's website
This week's guest is Steve Hickman, the founder of Epistimis, a privacy-first process design tooling startup that evaluates rules and enables the fixing of privacy issues before they ever take effect. In our conversation, we discuss: why the biggest impediment to protecting and respecting privacy within organizations is the lack of a common language; why we need a common Privacy Ontology in addition to a Privacy Taxonomy; Epistimis' ontological approach and how it leverages semantic modeling for privacy rules checking; and examples of how Epistimis' Privacy Design Process tooling complements privacy tech solutions on the market rather than competing with them.

Topics Covered:
- How Steve's deep engineering background in aerospace, retail, telecom, and then a short stint at Meta led him to found Epistimis
- Why it's been hard for companies to get privacy right at scale
- How Epistimis leverages 'semantic modeling' for rule checking, and how this helps to scale privacy as part of an ontological approach
- The definition of a Privacy Ontology and Steve's belief that all should use one for common understanding at all levels of the business
- Advice for designers, architects, and developers when it comes to creating and implementing privacy ontologies, taxonomies & semantic models
- How to make a Privacy Ontology usable
- How Epistimis' process design tooling works with discovery and mapping platforms like BigID & Secuvy.ai
- How Epistimis' process design tooling works along with a platform like Privado.ai, which scans a company's product code, surfaces privacy risks in the code, and detects processing activities for creating dynamic data maps
- How Epistimis' process design tooling works with PrivacyCode, which has a library of privacy objects and agile privacy implementations (e.g., success criteria & sample code), and delivers metrics on how the privacy engineering process is going
- Steve's call for collaborators who are interested in POCs and/or who can provide feedback on Epistimis' PbD process tooling
- What's next on the Epistimis roadmap, including wargaming

Resources Mentioned:
- Read Dan Solove's article, "Data is What Data Does: Regulating Based on Harm and Risk Instead of Sensitive Data"

Guest Info:
- Connect with Steve on LinkedIn
- Reach out to Steve via Email
- Learn more about Epistimis
This week's guest is Shashank Tiwari, a seasoned engineer and product leader who began with algorithmic systems on Wall Street before becoming Co-founder & CEO of Uno.ai, a pathbreaking autonomous security company. He then transitioned to building Silicon Valley startups, including previous stints at Nutanix, Elementum, Medallia, & StackRox. In this conversation, we discuss ML/AI, large language models (LLMs), temporal knowledge graphs, causal discovery inference models, and the Generative AI design & architectural choices that affect privacy.

Topics Covered:
- Shashank's origin story; how he became interested in security, privacy, & AI while working on Wall Street; & what motivated him to found Uno
- The benefits of using 'temporal knowledge graphs,' and how knowledge graphs are used with LLMs to create a 'causal discovery inference model' to prevent privacy problems
- The explosive growth of Generative AI, its impact on the privacy and confidentiality of sensitive and personal data, & why a rushed approach could result in mistakes and societal harm
- Architectural privacy and security considerations for: 1) leveraging Generative AI, and which mechanisms to avoid at all costs; 2) verifying, assuring, & testing against 'trustful data' rather than 'derived data;' and 3) thwarting common Generative AI attack vectors
- Shashank's predictions for Enterprise adoption of Generative AI over the next several years
- Shashank's thoughts on how proposed and future AI-related legislation may affect the Generative AI market overall and Enterprise adoption more specifically
- Shashank's thoughts on the development of AI standards across tech stacks

Resources Mentioned:
- Check out episode S2E29: Synthetic Data in AI: Challenges, Techniques & Use Cases with Andrew Clark and Sid Mangalik (Monitaur.ai)

Guest Info:
- Connect with Shashank on LinkedIn
- Learn more about Uno.ai
This week I welcome Dr. Andrew Clark, Co-founder & CTO of Monitaur, a trusted domain expert on the topics of machine learning, auditing, and assurance; and Sid Mangalik, Research Scientist at Monitaur and PhD student at Stony Brook University. I discovered Andrew and Sid's new podcast show, The AI Fundamentalists Podcast. I very much enjoyed their lively episode on Synthetic Data & AI, and am delighted to introduce them to my audience of privacy engineers.

In our conversation, we explore why data scientists must stress test their model validations, especially for consequential systems that affect human safety and reliability. In fact, we have much to learn from the field of aerospace engineering, which has been using ML/AI since the 1960s. We discuss the best and worst use cases for synthetic data; problems with LLM-generated synthetic data; what can go wrong when your AI models lack diversity; how to build fair, performant systems; & synthetic data techniques for use with AI (a sketch of one technique, the Gaussian copula, follows these show notes).

Topics Covered:
- What inspired Andrew to found Monitaur and focus on AI governance
- Sid's career path and his current PhD focus on NLP
- What motivated Andrew & Sid to launch their podcast, The AI Fundamentalists
- Defining 'synthetic data' & why academia takes a more rigorous approach to synthetic data than industry
- Whether the output of LLMs is synthetic data & the problem with training LLM base models on this data
- The best and worst 'synthetic data' use cases for ML/AI
- Why the 'quality' of input data is so important when training AI models
- Thoughts on OpenAI's announcement that it will use LLM-generated synthetic data; and critique of OpenAI's approach, the AI hype machine, and the problems with 'growth hacking' corner-cutting
- The importance of diversity when training AI models; using 'multi-objective modeling' for building fair & performant systems
- Andrew unpacks the 'fairness through unawareness fallacy'
- How 'randomized data' differs from 'synthetic data'
- 4 techniques for using synthetic data with ML/AI: 1) the Monte Carlo method; 2) Latin hypercube sampling; 3) Gaussian copulas; & 4) random walking
- What excites Andrew & Sid about synthetic data and how it will be used with AI in the future

Resources Mentioned:
- Check out Podchaser
- Listen to The AI Fundamentalists Podcast
- Check out Monitaur

Guest Info:
- Follow Andrew on LinkedIn
- Follow Sid on LinkedIn
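As a taste of the techniques named above, here is a minimal sketch of the Gaussian copula approach: it generates new rows that preserve each column's marginal distribution and the columns' rank-correlation structure. This is my own illustrative implementation, not code from the guests, and note that statistical fidelity alone carries no formal privacy guarantee.

```python
import numpy as np
from scipy import stats

def gaussian_copula_synthesize(data: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Draw synthetic rows that mimic the marginals and rank-correlation
    structure of `data` (shape: [n_rows, n_columns])."""
    rng = np.random.default_rng(seed)
    n, d = data.shape

    # 1. Map each column to uniforms via empirical ranks, then to normal scores.
    u = (stats.rankdata(data, axis=0) - 0.5) / n
    z = stats.norm.ppf(u)

    # 2. Sample correlated Gaussians using the normal scores' correlation matrix.
    corr = np.corrcoef(z, rowvar=False)
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)

    # 3. Map the samples back through each column's empirical quantiles.
    u_new = stats.norm.cdf(z_new)
    return np.column_stack(
        [np.quantile(data[:, j], u_new[:, j]) for j in range(d)]
    )

# Example: synthesize 1,000 rows from two correlated columns.
rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=500)
y = 0.5 * x + rng.normal(size=500)
synthetic = gaussian_copula_synthesize(np.column_stack([x, y]), n_samples=1000)
```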
This week, I welcome Jutta Williams, Head of Privacy & Assurance at Reddit, Co-founder of Humane Intelligence and BiasBounty.ai, Privacy & Responsible AI Evangelist, and Startup Board Advisor. With a long history of accomplishments in privacy engineering, Jutta has a unique perspective on the growing field.

In our conversation, we discuss her transition from security engineering to privacy engineering; how privacy cultures differ across the social media companies where she's worked: Google, Facebook, Twitter, and now Reddit; the overlap of privacy engineering & responsible AI; how her non-profit, Humane Intelligence, supports AI model owners; her experience launching the largest Generative AI red teaming challenge ever at DEF CON; and how a curious, knowledge-enhancing approach to privacy will create engagement and allow for fun.

Topics Covered:
- How Jutta's unique transition from security engineering landed her in the privacy engineering space
- A comparison of privacy cultures across Google, Facebook, Twitter (now 'X'), and Reddit, based on her privacy engineering experiences there
- Two open Privacy Engineering roles at Reddit, and Jutta's advice for those wanting to transition from security engineering to privacy engineering
- Whether Privacy Pros will be responsible for owning new regulatory obligations under the EU's Digital Services Act (DSA) & the Digital Markets Act (DMA); and the role of the Privacy Engineer when overlapping with Responsible AI issues
- Humane Intelligence, Jutta's 'side quest,' which she co-leads with Dr. Rumman Chowdhury, and which supports AI model owners seeking 'Product Readiness Reviews' at scale
- When, during the product development life cycle, companies should perform 'AI Readiness Reviews'
- How to de-bias at scale, or whether attempting to do so is 'chasing windmills'
- Who should be hunting for biases in an AI Bias Bounty challenge
- DEF CON 31 AI Village's 'Generative AI Red Teaming Challenge,' a bias bounty that Jutta co-designed; lessons learned; and what Jutta & team have planned for DEF CON 32 next year
- Why it's so important for people to 'love their side quests'

Resources Mentioned:
- DEF CON Generative Red Team Challenge
- Humane Intelligence
- Bias Buccaneers Challenge

Guest Info:
- Connect with Jutta on LinkedIn
Today, I welcome Victor Morel, PhD, and Simone Fischer-Hübner, PhD, to discuss their recent paper, "Automating Privacy Decisions – where to draw the line?" and their proposed classification scheme. We dive into the complexity of automating privacy decisions and emphasize the importance of maintaining both compliance and usability (e.g., via user control and informed consent). Simone is a Professor of Computer Science at Karlstad University with over 30 years of privacy & security research experience. Victor is a post-doc researcher at Chalmers University's Security & Privacy Lab, focusing on privacy, data protection, and technology ethics.

Together, they share their privacy decision-making classification scheme and research across two dimensions: (1) the type of privacy decision: privacy permissions, privacy preference settings, consent to processing, or rejection of processing; and (2) the level of decision automation: manual, semi-automated, or fully-automated (an illustrative encoding of this scheme follows these show notes). Each type of privacy decision plays a critical role in users' ability to control the disclosure and processing of their personal data. They emphasize the significance of tailored recommendations to help users make informed decisions and discuss the potential of on-the-fly privacy decisions. We wrap up with organizations' approaches to achieving usable and transparent privacy across various technologies, including web, mobile, and IoT.

Topics Covered:
- Why Simone & Victor focused their research on automating privacy decisions
- How GDPR & ePrivacy have shaped requirements for privacy automation tools
- The 'types' of privacy decisions & associated 'levels of automation': privacy permissions, privacy preference settings, consent to processing, & rejection of processing
- The 'levels of automation' for each privacy decision type: manual, semi-automated & fully-automated; and the pros / cons of automating each privacy decision type
- Preferences & concerns regarding IoT Trigger-Action Platforms
- Why the only privacy decisions you should 'fully automate' are rejections of processing: i.e., revoking consent or opting out
- Best practices for achieving informed control
- Automation challenges across web, mobile, & IoT
- Mozilla's automated cookie banner management & why it's problematic (i.e., unlawful)

Resources Mentioned:
- "Automating Privacy Decisions – where to draw the line?"
- CyberSecIT at Chalmers University of Technology
- "Tapping into Privacy: A Study of User Preferences and Concerns on Trigger-Action Platforms"
- Consent-O-Matic browser extension
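The two-dimensional scheme is easy to picture as a small data model. The sketch below is my own illustrative encoding of the classification described above, not the authors' code; the `is_recommended` rule reflects the episode's takeaway that only rejection of processing should be fully automated.

```python
from dataclasses import dataclass
from enum import Enum

class DecisionType(Enum):
    """The four types of privacy decisions from the paper."""
    PRIVACY_PERMISSION = "privacy permission"
    PREFERENCE_SETTING = "privacy preference setting"
    CONSENT_TO_PROCESSING = "consent to processing"
    REJECTION_OF_PROCESSING = "rejection of processing"

class AutomationLevel(Enum):
    """The three levels of decision automation."""
    MANUAL = "manual"
    SEMI_AUTOMATED = "semi-automated"
    FULLY_AUTOMATED = "fully-automated"

@dataclass(frozen=True)
class PrivacyDecision:
    decision_type: DecisionType
    automation: AutomationLevel

    def is_recommended(self) -> bool:
        """Full automation is only advisable for rejection of processing
        (revoking consent or opting out), per the discussion."""
        if self.automation is not AutomationLevel.FULLY_AUTOMATED:
            return True
        return self.decision_type is DecisionType.REJECTION_OF_PROCESSING

# A fully-automated consent decision gets flagged for human involvement.
decision = PrivacyDecision(DecisionType.CONSENT_TO_PROCESSING, AutomationLevel.FULLY_AUTOMATED)
print(decision.is_recommended())  # False
```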
This week, I welcome philosopher, author, & AI ethics expert Reid Blackman, PhD, to discuss Ethical AI. Reid authored the book Ethical Machines and is the CEO & Founder of Virtue Consultants, a digital ethical risk consultancy. His extensive background in philosophy & ethics, coupled with his engagement with orgs like AWS, U.S. Bank, the FBI, & NASA, offers a unique perspective on the challenges & misconceptions surrounding AI ethics.

In our conversation, we discuss 'passive privacy' & 'active privacy' and the need for individuals to exercise control over their data. Reid explains how the quest for data to train ML/AI models can lead to privacy violations, particularly at BigTech companies. We touch on many concepts in the AI space, including: automated decision making vs. keeping 'humans in the loop;' combating AI ethics fatigue; and advice for technical staff involved in AI product development. Reid stresses the importance of protecting privacy, educating users, & deciding whether to utilize external APIs or on-prem servers. We end by highlighting his HBR article, "Generative AI-xiety," and discuss the 4 primary areas of ethical concern for LLMs: the hallucination problem; the deliberation problem; the sleazy salesperson problem; & the problem of shared responsibility.

Topics Covered:
- What motivated Reid to write his book, Ethical Machines
- The key differences between 'active privacy' & 'passive privacy'
- Why engineering incentives to collect more data to train AI models, especially in big tech, pose challenges to data minimization
- The importance of aligning privacy agendas with business priorities
- Why what companies infer about people can be a privacy violation; what engineers should know about 'input privacy' when training AI models; and how that affects the output of inferred data
- Automated decision making: when it's necessary to have a 'human in the loop'
- Approaches for mitigating 'AI ethics fatigue'
- The need to back up a company's stated 'values' with actions; and why there should always be 3 - 7 guardrails put in place for each stated value
- The differences between 'Responsible AI' & 'Ethical AI,' and why companies seem reluctant to talk about ethics
- Reid's article, "Generative AI-xiety," & the 4 main risks related to generative AI
- Reid's advice for technical staff building products & services that leverage LLMs

Resources Mentioned:
- Read the book, Ethical Machines
- Reid's podcast, Ethical Machines

Guest Info:
- Follow Reid on LinkedIn
This week, we're chatting with Engin Bozdag, Senior Staff Privacy Architect at Uber, and Stefano Bennati, Privacy Engineer at HERE Technologies. Today, we explore their recent IWPE'23 talk, "Can Location Data Truly be Anonymized: a risk-based approach to location data anonymization," and discuss the technical & business challenges of obtaining anonymization. We also discuss the role of Privacy Engineers, how to choose a career path, the importance of embedding privacy into product development & DevPrivOps, collaborating with cross-functional teams, & staying up-to-date with emerging trends.

Topics Covered:
- Common roadblocks privacy engineers face with anonymization techniques & how to overcome them
- How to get budgets for anonymization tools; challenges with scaling & regulatory requirements & how to overcome them
- What it means to be a 'Privacy Engineer' today; good career paths; and necessary skill sets
- How third-party data deletion tools can be integrated into a company's distributed architecture
- What Privacy Engineers should understand about vendor privacy requirements for LLMs before bringing them into their orgs
- The need to monitor code changes in data or source code via code scanning; how HERE Technologies uses Privado to monitor the compliance of its products & data lineage; and how Privado detects new assets added to your inventory & any new API endpoints
- Advice on how to deal with conflicts between engineering, legal & operations teams, and on how to get privacy issues fixed within an org
- Strategies for addressing privacy issues within orgs, including collaboration, transparency, and continuous refinement

Resources Mentioned:
- IAPP Defining Privacy Engineering Infographic
- EU AI Act
- Ethics Guidelines for Trustworthy AI
- Privacy Engineering Superheroes
- FTC Investigates OpenAI over Data Leak and ChatGPT's Inaccuracy

Guest Info:
- Follow Engin
- Follow Stefano
This week's guest is Elias Grünewald, Privacy Engineering Research Associate at the Technical University of Berlin (TU Berlin), where he focuses on cloud-native privacy engineering, transparency, accountability, distributed systems, & privacy regulation. In this conversation, we discuss the challenge of designing privacy into modern cloud architectures; how shifting left into DevPrivOps can embed privacy within agile development methods; how to blend privacy engineering & cloud engineering; the Hawk DevOps Framework; and what the Shared Responsibilities Model for cloud lacks.

Topics Covered:
- Elias's courses at TU Berlin: "Programming Practical Privacy: Web-based Application Engineering & Data Management" & "Advanced Distributed Systems Prototyping: Cloud-native Privacy Engineering"
- Elias' 2022 paper, "Cloud Native Privacy Engineering through DevPrivOps": his approach, findings, and framework
- The Shared Responsibilities Model for cloud and how to improve it to account for privacy goals
- Defining DevPrivOps & how it works with agile development
- How DevPrivOps can enable formal privacy-by-design (PbD) & default strategies
- Elias' June 2023 paper, "Hawk: DevOps-Driven Transparency & Accountability in Cloud Native Systems," which helps data controllers align cloud-native DevOps with regulatory requirements for transparency & accountability
- Engineering challenges when trying to determine the details of personal data processing when responding to access & deletion requests
- A deep dive into the Hawk 3-phase approach for implementing privacy into each DevOps phase: Hawk Release, Hawk Operate, & Hawk Monitor
- How the open-source project TOUCAN is documenting conceptual best practices for corresponding phases in the SDLC, and a call for collaboration
- How privacy engineers can convince their management to adopt a DevPrivOps approach

Read Elias' papers, talks, & projects:
- Cloud Native Privacy Engineering through DevPrivOps
- Hawk: DevOps-driven Transparency and Accountability in Cloud Native Systems
- CPDP Talk: Privacy Engineering for Transparency & Accountability
- TILT: A GDPR-Aligned Transparency Information Language & Toolkit for Practical Privacy Engineering
- TOUCAN

Guest Info:
- Connect with Elias on LinkedIn
- Contact Elias at TU Berlin
This week, my guest is George Ratcliffe, Head of the Privacy GRC & Cryptography Executive Search Practice at recruitment firm Stott & May. In this conversation, we discuss the current market climate & hiring trends for technical privacy roles; the need for stronger technical capabilities across the industry; pay ranges within different technical privacy roles; and George's tips and tools for applicants interested in, entering, and/or transitioning into the privacy industry.

Topics Covered:
- Whether hiring trends are picking back up for technical privacy roles
- The three 'Privacy Engineering' roles that companies seek to hire for, and their core competencies: Privacy Engineer, Privacy Software Engineer, & Privacy Research Engineer
- The demand for 'Privacy Architects'
- IAPP's new Privacy Engineering infographic & whether it maps to how companies approach hiring
- Overall hiring trends for privacy engineers & technical privacy roles
- Advice for technologists who want to grow into Privacy Engineer, Researcher, or Architect roles
- Capabilities that companies need or want in candidates that they can't seem to find; & whether there are roles that are harder to fill because of a lack of candidates & skill sets
- Whether a PhD is necessary to become a 'Privacy Research Engineer'
- Typical pay ranges across technical privacy roles: Privacy Engineer, Privacy Software Engineer, Privacy Researcher, Privacy Architect
- Differences in pay for a Privacy Engineering Manager vs. an Individual Contributor (IC), and the web apps for crowd-sourced info about roles & salary ranges
- Whether companies seek to fill entry-level positions for technical privacy roles
- How privacy technologists can stay up-to-date on hiring trends

Resources Mentioned:
- Check out episode S2E11: Lessons Learned as a Privacy Engineering Manager with Menotti Minutillo (ex-Twitter & Uber)
- IAPP Defining Privacy Engineering Infographic
- Check out Blind and Levels for compensation benchmarking

Guest Info:
- Connect with George on LinkedIn
- Reach out to Stott & May for your privacy recruiting needs
Get ready for an eye-opening conversation with Sanjay Saini, the founder and CEO of Privaini, a groundbreaking privacy tech company. Sanjay's journey is impressive not only for his role in creating high-performance teams that have built entirely new product categories, but also for the invaluable lessons he learned from his grandfather about the pillars of successful companies: trust and human connections. In our discussion, Sanjay shares how Privaini is raising the privacy bar by constructing the world's largest repository of company privacy policies and practices. It's a fascinating dive into the future of privacy risk management.

Imagine being able to gain full coverage of your external privacy risks with continuous monitoring. Wouldn't that revolutionize your approach to risk management? That's exactly what Privaini is doing. Sanjay explains how Privaini utilizes AI to analyze, standardize, and derive meaningful 'privacy views' and insights from vast volumes of publicly-available data. Listen in to understand how Privaini's innovative approach is helping companies gain visibility into their entire business network to make quicker, more informed decisions.

Topics Covered:
- What motivated Sanjay to found companies that bring trusted systems to market, and why he founded Privaini to focus on continuous privacy risk monitoring
- How to quantitatively analyze & monitor privacy risk throughout an entire 'business network,' and what Sanjay means by 'business network'
- Which stakeholders benefit from using the Privaini platform
- The benefits of calculating a 'quantified privacy risk score' for each company in your business network to effectively monitor privacy risk
- How Privaini leverages AI to discover external data about companies' privacy posture, and why it must be used in a responsible and deliberate way
- Why effective privacy risk monitoring of a company's business network requires an 'outside-in' approach
- The importance of continuous monitoring & the benefits of using an 'outside-in' approach
- What it takes to set up an enterprise's network with Privaini for full coverage of external privacy risks
- The recent Criteo fines and how Privaini could have helped Criteo surface privacy risks about its vendors
- Why Sanjay believes learning about the 'right side' of the equation is necessary in order to 'shift privacy left'

Guest Info:
- Connect with Sanjay on LinkedIn
- Learn more about Privaini
This week's guest is Tom Kemp: author; entrepreneur; former Co-Founder & CEO of Centrify (now called Delinea), a leading cybersecurity cloud provider; and a Silicon Valley-based seed investor and policy advisor. Tom led campaign marketing efforts in 2020 to pass California Proposition 24, the California Privacy Rights Act (CPRA), and is currently co-authoring the California Delete Act bill.

In this conversation, we discuss chapters within Tom's new book, Containing Big Tech: How to Protect Our CIVIL RIGHTS, ECONOMY, and DEMOCRACY; how big tech is using AI to feed the attention economy; what should go into a U.S. federal privacy law and how it should be enforced; and a comprehensive look at some of Tom's privacy tech investments.

Topics Covered:
- Tom's new book, Containing Big Tech: How to Protect Our Civil Rights, Economy and Democracy
- How and why Tom's book is centered around data collection, artificial intelligence, and competition
- U.S. state privacy legislation that Tom helped get passed & what he's working on now, including: CPRA, the California Delete Act, & the Texas Data Broker Registry
- Whether there will ever be a U.S. federal, omnibus privacy law; what should be included in it; and how it should be enforced
- Tom's work as a privacy tech and security tech seed investor with Kemp Au Ventures, and what inspires him to invest in a startup or not
- What inspired Tom to invest in PrivacyCode, Secuvy & Privaini
- Why a strong team and a large market size are things Tom looks for when investing
- The importance of designing for privacy from a 'user-interface perspective' so that it's consumer friendly
- How consumers looking to trust companies are driving a shift left movement
- Tom's advice for how companies can better shift left in their orgs & within their business networks

Resources Mentioned:
- The California Consumer Privacy Act (amended by the CPRA)
- The California Delete Act

Guest Info:
- Follow Tom on LinkedIn
- Kemp Au Ventures
- Pre-order Containing Big Tech: How to Protect Our CIVIL RIGHTS, ECONOMY, and DEMOCRACY
This week's guest is Jeff Jockisch, Partner at Avantis Privacy and co-host of the weekly LinkedIn Live event Your Bytes = Your Rights, a town hall-style discussion around ownership, digital rights, and privacy. Jeff is currently a data privacy researcher at PrivacyPlan, where he focuses specifically on privacy data sets. In this conversation, we delve into current risks to location privacy; how precise location data really is; how humans can have more control over their data; and what organizations can do to protect people's data privacy. For access to a dataset of data resources and privacy podcasts, check out Jeff's robust database; the Shifting Privacy Left podcast was recently added.

Topics Covered:
- Jeff's approach to creating privacy data sets, and what 'gaining insight into the privacy landscape' means
- How law enforcement can be a threat actor to someone's privacy, using the example of Texas' abortion law
- Whether data brokers are getting exact location information or are inferring someone's location
- Why geolocation brokers had not considered themselves data brokers
- Why anonymization is insufficient for location privacy
- How 'consent theater' coupled with location leakage is an existential threat to our privacy
- How people can protect themselves from having data collected and sold by data and location brokers
- Why app permissions should be more specific when notifying users about personal data collection and use
- How Apple and Android devices treat the Mobile Ad ID (MAID) differently, and how that affects your historical location data
- How companies can protect data by using broader geolocation information instead of precise geolocation information (a minimal sketch of this idea follows these show notes)
- More information about Jeff's LinkedIn Live show, Your Bytes = Your Rights

Resources Mentioned:
- Avantis Privacy
- PrivacyPlan
- Threat modeling episode with Kim Wuyts
- "Your Bytes = Your Rights" LinkedIn Live
- The California Delete Act
- Privacy Podcast Database
- Containing Big Tech

Guest Info:
- Follow Jeff on LinkedIn
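To illustrate the 'broader geolocation' point from the topics above, here is a tiny, hypothetical Python sketch that coarsens coordinates by rounding. As the episode stresses, reducing precision is not anonymization by itself; it merely shrinks how much any single data point reveals.

```python
def coarsen_location(lat: float, lon: float, decimals: int = 2) -> tuple[float, float]:
    """Round coordinates to reduce precision: roughly 1.1 km of latitude
    per 0.01 degrees, and roughly 11 km per 0.1 degrees."""
    return (round(lat, decimals), round(lon, decimals))

# A sub-meter-precision point becomes a neighborhood-level point.
precise = (37.774929, -122.419416)  # hypothetical coordinates
print(coarsen_location(*precise, decimals=2))  # (37.77, -122.42)
```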
This week's guest is Kim Wuyts, Senior Postdoctoral Researcher at the DistriNet Research Group in the Department of Computer Science at KU Leuven. Kim is one of the leading minds behind the development and extension of LINDDUN, a privacy threat modeling framework that mitigates privacy threats in software systems.

In this conversation, we discuss threat modeling based on the Threat Modeling Manifesto Kim co-authored; the benefits of using the LINDDUN privacy threat model framework; and how to bridge the gap between privacy-enhancing technologies (PETs) in academia and the commercial world.

Topics Covered:
- Kim's career journey & why she moved into threat modeling
- The definition of 'threat modeling,' who should threat model, and what's included in her Threat Modeling Manifesto
- The connection between threat modeling & a 'shift left' mindset / strategy
- Design patterns that benefit threat modeling & anti-patterns that inhibit it
- Benefits of using the LINDDUN privacy threat modeling framework for mitigating privacy threats in software, including the 7 'privacy threat types,' associated 'privacy threat trees,' and examples (the threat types are listed in the sketch after these show notes)
- How 'privacy threat trees' refine each threat type into concrete threat characteristics, examples, criteria & impact info
- Benefits & differences between LINDDUN GO and LINDDUN PRO
- How orgs can combine threat modeling approaches with PETs to address privacy risk
- Kim's work as Program Chair for the International Workshop on Privacy Engineering (IWPE), highlighting some anticipated talks
- The overlap of privacy & AI threats, and Kim's recommendation of The Privacy Library Of Threats 4 AI ('PLOT4AI') Threat Modeling Card Deck
- Recommended resources for privacy threat modeling, privacy engineering & PETs
- How the LINDDUN model & methodologies have been adopted by global orgs
- How to bridge the gap between the academic & commercial worlds to advance & deploy PETs

Resources Mentioned:
- The Threat Modeling Manifesto
- LINDDUN Privacy Threat Model
- STRIDE threat model
- Threat Modeling Connect Community
- Elevation of Privilege card game
- PLOT4AI (privacy & AI threat modeling) card deck
- International Workshop on Privacy Engineering (IWPE)

Guest Info:
- Follow Kim on LinkedIn
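For readers new to LINDDUN, the seven threat categories behind the acronym are public parts of the framework; the one-line glosses and the toy elicitation loop below are my own paraphrase, loosely mimicking how LINDDUN GO prompts a team with one question per threat type for each element of a system model.

```python
# The seven LINDDUN privacy threat categories (glosses paraphrased).
LINDDUN_THREATS = [
    ("Linking", "associating data items or user actions with each other"),
    ("Identifying", "learning the identity of a data subject"),
    ("Non-repudiation", "being unable to deny a claim or an action"),
    ("Detecting", "deducing a data subject's involvement through observation"),
    ("Data disclosure", "excessively collecting, storing, or sharing personal data"),
    ("Unawareness", "data subjects unaware of, or unable to intervene in, processing"),
    ("Non-compliance", "deviating from legislation, regulation, or best practices"),
]

def elicit(element: str) -> list[str]:
    """Generate one prompt per threat category for a system element,
    e.g., a data flow in a data flow diagram (DFD)."""
    return [
        f"Could '{element}' be subject to a {name} threat ({gloss})?"
        for name, gloss in LINDDUN_THREATS
    ]

for question in elicit("user-login data flow"):
    print(question)
```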
I am delighted to welcome my next guest, Brad Dominy. Brad is a macOS and iOS developer and the Founder & Inventor of Neucards, a privacy-preserving app that enables secure, shareable, and updatable digital contacts. In this conversation, we delve into why personally managing our digital contacts has been so difficult, and Brad's novel approach to securely managing our contacts, architected with privacy by design and default.

Contacts have always been the 'junk drawer' of digital data: information that people want to keep up-to-date, but are rarely able to with current technology. The vCard standard is outdated, but it is the only standard that works across iOS, Android, and Microsoft platforms. It is still the most commonly used contact format, yet it lacks any capacity for updating contacts. Once someone exchanges their contact information with you, it falls on you to keep it up-to-date. This is why Brad created Neucards: to gain the benefits of sharing information easily and privately (with E2EE) and receiving updates across all platforms.

Topics Covered:
- Why it is difficult to keep our digital contacts up-to-date across devices and platforms
- Brad's career journey that inspired him to invent Neucards; the problems Neucards solves for; and why this became his passion project for over a decade
- Why companies haven't innovated more in the digital contacts space
- The 3 main features that make Neucards different from other contact apps
- How Neucards enables you to share digital contact data easily & securely
- Neucards' privacy-by-design-and-default approach to sharing and updating digital contacts
- How you can use NFC tap tags with Neucards to make the process of sharing digital contacts much easier
- Whether Neucards can solve the "New phone, who dis?" problem
- Whether we will see an update to the vCard standard or new standards for digital contacts
- Neucards' roadmap, including a 'mask communications' feature
- The importance of language; the difference between 'privacy-preserving' vs. 'privacy-enabling' architectural approaches

Resources Mentioned:
- Learn about Neucards
- Download the Neucards iOS app

Guest Info:
- Follow Brad on LinkedIn