She Said Privacy/He Said Security

This is the She Said Privacy / He Said Security podcast with Jodi and Justin Daniels. Like any good marriage, Jodi and Justin will debate, evaluate, and sometimes quarrel about how privacy and security impact business in the 21st century.

Hands-On AI Skills Every Legal Team Needs

Mariette Clardy-Davis is Assistant General Counsel at Primerica, providing strategic guidance on the securities business. Recognizing AI competence as a professional duty, she launched "Unboxing Generative AI for In-House Lawyers" virtual workshops and an online directory empowering lawyers to move from AI overwhelm to practical application through hands-on learning.

In this episode…

Legal teams are turning to generative AI to speed up their work, yet many struggle to get consistent, usable results. Learning AI skills requires hands-on practice with prompting frameworks, styling guides, and instructions that improve output quality. That's why attorneys need creative training approaches that help these skills stick and carry over into their day-to-day work.

Building AI fluency isn't about mastering the technology itself; it's about shifting mindset and approach. One common challenge legal teams encounter is expecting AI to deliver consistent outputs every time, yet AI doesn't work like a copy machine: it responds through patterns, so the same prompt might produce different results. That's why creative, narrative-based training is effective for learning prompting frameworks. When attorneys pair detailed prompt instructions with gold-standard examples, AI tools get the reference points they need for tone, style, and structure. Saving strong prompts into a library creates leverage and reduces the time spent rebuilding instructions for recurring tasks. This helps attorneys reduce rework, improve accuracy, and shift from basic efficiency tasks to work that supports strategy and collaboration.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Mariette Clardy-Davis, Assistant General Counsel at Primerica, about how in-house legal teams can embrace generative AI education. Mariette explains how creative, story-driven workshops make AI learning more engaging and why understanding prompting frameworks is essential for consistent results. She discusses common misconceptions lawyers have about generative AI tools and how building a task-based directory with reusable prompts helps legal teams save time on repetitive work. Mariette also explains how attorneys can use AI not just to speed up tasks but to support more substantive legal work.
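
To make the prompt-library idea concrete, here is a minimal sketch in Python of pairing detailed instructions with gold-standard examples and reusing them for recurring tasks. The task name, instructions, and example are hypothetical; this illustrates the general technique, not the specific workflow described in the episode.

    from dataclasses import dataclass, field

    @dataclass
    class PromptTemplate:
        """A reusable prompt: task instructions plus gold-standard examples."""
        name: str
        instructions: str                      # tone, style, and structure directions
        gold_examples: list = field(default_factory=list)

        def render(self, task_input: str) -> str:
            # Pair the instructions with reference examples so the model has
            # concrete anchors for tone, style, and structure.
            examples = "\n\n".join(
                f"Example of the expected output:\n{ex}" for ex in self.gold_examples
            )
            return f"{self.instructions}\n\n{examples}\n\nNow handle this:\n{task_input}"

    # A small library keyed by recurring task, so instructions are written once
    # and reused instead of being rebuilt for every request.
    library = {
        "clause-summary": PromptTemplate(
            name="clause-summary",
            instructions=("You are an in-house lawyer. Summarize the clause "
                          "below in three plain-English bullet points."),
            gold_examples=["- The vendor must notify us of a breach within 72 hours."],
        ),
    }

    print(library["clause-summary"].render("Section 8.2: Indemnification ..."))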

11-20
27:02

Adapting Cybersecurity Measures for the Age of AI

Khurram Chhipa currently serves as General Counsel at Halborn, a leading cybersecurity company in the Web3 space. With expertise spanning blockchain security, compliance, and digital risk management, he brings a unique perspective to the intersection of law and technology. Outside of work, Khurram enjoys spending time with family and friends.

In this episode…

Artificial intelligence is changing how cybersecurity teams detect and respond to threats. What once required manual monitoring has evolved into an adaptive solution that uses predictive modeling to identify risks sooner. While AI can strengthen security defenses, it also raises questions about accuracy and the need for human oversight.

For legal and security teams working in fast-moving sectors like blockchain, AI offers efficiency yet also introduces new risks. Large language models (LLMs) can help general counsels generate contracts and prepare for negotiations, yet they require human oversight to spot and correct errors. That's why companies need to develop clear playbooks, train teams, and implement a continuous review process to ensure responsible AI use. For security teams, the same principle applies. While predictive AI tools can identify threats earlier, security teams should also test their incident response readiness through tabletop exercises and encourage employees to adopt a "don't trust, verify" mindset to guard against threats like deepfakes.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Khurram Chhipa, General Counsel at Halborn, about how AI is transforming cybersecurity. Khurram explains how AI is reshaping threat detection and why human oversight is essential when using AI in legal and security contexts, and he shares practical strategies for implementing safeguards. He also describes the growing AI arms race and its impact on cybersecurity, and he provides tips on how companies can mitigate AI deepfake threats through custom training and advanced security measures.

11-13
25:13

The Path to Restoring Trust in a Connected World

Mark Weinstein is a successful tech entrepreneur, board member, and consultant, and one of the visionary inventors of social networking. He is the author of Restoring Our Sanity Online (Wiley, 2025), a book endorsed by Sir Tim Berners-Lee and Steve Wozniak. Mark is the Founder of MeWe, the first social network with a Privacy Bill of Rights, which grew to over 20 million members. He also founded SuperFamily.com and SuperFriends.com, early social networks recognized by PC Magazine as "Top 100" sites. He is an inventor of 15 groundbreaking digital advertising patents. Mark has delivered the landmark TED Talk "The Rise of Surveillance Capitalism." He is frequently interviewed and published in major media outlets around the world. Beyond his entrepreneurial achievements, Mark has chaired the New Mexico Accountancy Board and served as an Adjunct Marketing Professor at the University of New Mexico. He holds an MBA from UCLA's Anderson School of Management.

In this episode…

The internet began as a way to connect family, friends, and communities. Over time, platforms shifted toward surveillance capitalism, where users' personal information can be monetized and people can be targeted and even manipulated. Social media and AI now shape what people see, think, and buy, while algorithms quietly learn how to influence our choices. As technology advances, how can companies and individuals alike protect privacy and rebuild trust in the systems that connect us?

As one of the pioneers of social networking, Mark Weinstein has seen this transformation firsthand. Early models were built around community and connection, while later models monetized personal information for targeting and profit. The next phase focuses on stronger privacy controls, data portability, and user choice. Building safer digital experiences means companies need to avoid unnecessary data collection and manipulative design tactics, and to communicate transparently about how personal information is used and shared. Individuals can also play a role by supporting user ID verification to make social media safer and by teaching children critical thinking skills to help them combat misinformation and manipulation online.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Mark Weinstein, tech entrepreneur, author, board member, and consultant, about rethinking privacy and control in the digital age. Mark reflects on the lessons learned from early social network models and discusses the evolution of the internet from connection-driven communities to surveillance capitalism, explaining how current models exploit user data. He explores his vision for Web4 and its new approach centered on data ownership and portability. He also offers practical advice for protecting children from online harms and emphasizes the importance of fostering critical thinking in the age of AI.

11-06
35:24

AI, Privacy, and the General Counsel's Role in Responsible Innovation

Lane Blumenfeld is the Chief Legal Officer for Data Driven Holdings (DDH). Through its portfolio companies, headed by TEAM VELOCITY, DDH has become a market leader in data-powered technology and marketing solutions for the automotive industry. Lane was named a Top 50 Corporate Counsel by OnCon. Lane holds a JD from Yale Law School, an MA in international affairs from the Johns Hopkins University School of Advanced International Studies (SAIS), and a BA magna cum laude from Cornell University.

In this episode…

The pressure on companies to deliver faster, more personalized digital experiences often conflicts with their privacy and security obligations. General counsels sit at the center of this tension, balancing the business value of personal data with the need to protect it. That's why their involvement early in product development is essential. Working with product and engineering teams from the start allows legal teams to build safeguards into design before products and services reach customers. So, how can companies find the right balance without compromising privacy and security?

AI also adds a new layer of complexity. As companies use it to analyze data, refine customer targeting, and generate marketing content, legal teams and general counsels are adapting to evolving regulations. While clean, reliable data is essential, general counsels need to evaluate accuracy and bias to ensure responsible use. Even as AI advances, fundamental privacy and security principles still apply. That's why it's important for organizations to take ownership of their privacy practices, especially when it comes to privacy notices and vendor relationships. Companies shouldn't depend on generic privacy notices or third-party templates that fail to reflect their actual data handling practices. Vendor contracts need equal attention, with privacy and cybersecurity provisions that mirror company commitments to consumers, since one vendor's mistake can create significant risk.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Lane Blumenfeld, Chief Legal Officer at Data Driven Holdings, about how general counsels can balance innovation with privacy and security. Lane explains how early legal involvement helps embed privacy and security into product design. He emphasizes that clear, accurate privacy notices and well-structured vendor contracts are essential for reducing privacy and security risks and maintaining accountability. And as AI reshapes compliance obligations, Lane highlights the need for defined ownership across legal, product, and vendor teams and explains why companies sometimes need to walk away from vendors that expose them to excessive risk.

10-30
31:17

Accelerating AI Adoption Through AI Week

Summer Crenshaw is the Co-Founder and CEO of the Enterprise Technology Association (ETA), the national leader in AI and emerging technology adoption. She serves on multiple advisory boards and champions innovation, education, and responsible technology adoption. A seasoned tech entrepreneur and strategist, she previously co-founded Tilr, an AI-powered job marketplace recognized by CNBC, Forbes, and VentureBeat. Summer has been featured in major outlets and spoken on national stages, including DisruptHR and Dreamforce.

In this episode…

Business leaders across industries are responding to AI with a mix of excitement, fear, and uncertainty. Many want to use AI tools to accelerate business goals, yet they also worry about the risks and how these tools could disrupt jobs and existing roles. To move forward, companies need to focus on continuous learning that helps people understand and apply AI responsibly. So how can companies close the skill gaps that limit progress while ensuring their teams continue learning as AI evolves?

Accelerating responsible AI adoption starts with education that connects people, communities, and industries. Organizations like the Enterprise Technology Association are helping bridge that gap through AI Week, a fast-moving initiative that brings together local leaders, educators, and companies to share insights for responsible AI adoption. These community-driven gatherings are designed around the industries and priorities of each city, creating programming that makes AI accessible to both technical and non-technical audiences. For companies to succeed, they also need to rethink how they approach governance: rather than viewing it as a brake that hinders progress, they should treat it as a steering wheel that guides teams through implementation and helps them achieve their goals.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Summer Crenshaw, Co-Founder and CEO of the Enterprise Technology Association (ETA), about how businesses can accelerate responsible AI adoption through education and collaboration. Summer shares how AI Week launched in just five weeks and scaled across multiple cities by empowering local leaders and creating accessible AI programming. She explains why governance should enable rather than hinder AI implementation and what separates the 5% of AI projects that succeed from those that fail. Summer also discusses how to prepare for AI in 2026, addressing the shift from theory to measuring human impact.

10-23
30:51

How AI Is Transforming the General Counsel Role

Eric Greenberg is the Executive Vice President, General Counsel, and Corporate Secretary of Cox Media Group, a multi-platform media company based in Atlanta that serves major US media markets. CMG is a portfolio company of the private equity firm Apollo Global Management.

In this episode…

AI is transforming how general counsels and legal teams approach their work, with efficiency being just the beginning. For general counsels, the real opportunity lies in using technology to strengthen strategic thinking and decision-making, not replace it. Large language models enable lawyers to analyze complex issues and identify patterns across vast amounts of information, yet they still need to apply critical thinking to interpret the results. So, how can legal professionals leverage AI to elevate their roles without compromising the judgment that defines their value?

Legal professionals should approach AI as a strategic collaborator rather than a simple efficiency tool. Prompt engineering is emerging as a critical skill that bridges tech-savvy younger lawyers with seasoned attorneys who bring deep judgment and experience. Together, they can build more collaborative, strategic teams. Inside companies, AI is changing how legal departments and outside counsel work together by enhancing efficiency and fostering opportunities for shared learning across systems. Embedding institutional knowledge into AI systems offers benefits for consistency and strategic alignment, yet it also carries risk if general counsels and legal teams rely too heavily on its static outputs instead of applying their own judgment. And as AI evolves, organizations also need to prepare for fast-moving threats like deepfakes, building plans that allow them to respond within minutes, not days.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Eric Greenberg, Executive Vice President, General Counsel, and Corporate Secretary of Cox Media Group, about how general counsels can effectively use AI. Eric discusses how AI tools are reshaping due diligence and decision-making, why developing strong prompt engineering skills can strengthen collaboration between junior and senior lawyers, and how in-house and outside counsel can work more effectively through interoperable AI systems. He shares insights from his Bloomberg Law article series on AI's impact, emphasizing the importance of continuous learning and staying open-minded as technology evolves. Eric also explains the benefits and risks of embedding institutional knowledge into AI systems and offers practical ways legal professionals can experiment with AI tools.

10-16
38:24

Why Security Awareness Training Matters

Dan Thornton is the Co-founder and CEO of Goldphish. He is a former Royal Marine Commando who channeled his operational expertise into cybersecurity. Today, Dan leads a security awareness training company, helping organizations turn their people into their strongest defense, with over 2.1 million learners trained worldwide.

In this episode…

Threat actors don't just target large corporations. Small and medium-sized businesses (SMBs) are finding themselves in the crosshairs of attackers who use automation, AI, and social engineering to cast a wide net of cyber threats. From convincing phishing scams that capture credentials to AI deepfakes that mimic trusted voices, the methods used to manipulate and exploit unsuspecting employees are becoming more sophisticated. So how can organizations protect themselves when even the most vigilant staff can be fooled?

Organizations that believe they are too small to be targeted by threat actors often learn the hard way that a single mistake can have devastating consequences. Yet improving cybersecurity posture and building awareness doesn't have to be overwhelming or costly. SMBs can take simple steps, such as enabling multifactor authentication (MFA) for all business accounts, updating software and systems, and maintaining regular backups. Security training is also critical because it helps employees recognize threats and avoid the mistakes that often lead to incidents. By combining basic security measures with security awareness training, businesses can foster a culture that strengthens their defenses against cyber threats.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Dan Thornton, Co-founder and CEO of Goldphish, about how small and medium-sized businesses can enhance their cybersecurity defenses. Dan emphasizes that attackers do not discriminate based on company size and that common blind spots, such as over-relying on technology, neglecting incident planning, and staying silent after mistakes, can leave organizations vulnerable. He explains why steps like enabling multifactor authentication, performing regular backups, and conducting employee security training make a big difference in reducing risk. Dan also shares insights on how companies can counter the growing threat of AI deepfakes and why business email compromise (BEC) remains one of the most effective scams.

10-09
33:06

GPC and UOOMs: Do Consumers Want an On/Off Switch or a Dimmer?

Andy Hepburn is the Co-founder of NEOLAW LLC and General Counsel at SafeGuard Privacy. He is a privacy lawyer with deep experience helping clients in the digital advertising industry navigate complex privacy laws.

In this episode…

Global Privacy Control (GPC) is transforming the way companies approach consumer consent. The rise of state privacy laws has fueled an explosion of cookie consent banners and other consent mechanisms that tend to confuse consumers about what they're agreeing to. GPC, also known as a universal opt-out mechanism, offers a simpler alternative by allowing consumers to set their privacy permissions once for electronic tracking at the browser level. Yet its current all-or-nothing design raises the question: does a single switch reflect what consumers really want?

Some consumers want to block all digital tracking, while others are open to targeted ads in specific situations, like shopping for a car or clothing. Most consumers fall somewhere in between. Earlier attempts, like the Do Not Track initiative, received pushback from the advertising industry, which argued that a simple on/off switch was too limited to capture the diversity of consumer privacy preferences. A more nuanced approach would let individuals accept targeted ads in some areas while blocking them in others. Industry standards, such as the Interactive Advertising Bureau's Global Privacy Platform and the Multi-State Privacy Agreement, are designed to help companies ensure that consumer privacy preferences are consistently applied across publishers, advertisers, and the numerous intermediaries in the ad ecosystem. As consumer pressure and regulatory enforcement intensify, adoption of these standards may accelerate across industries.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Andy Hepburn, Co-founder of NEOLAW LLC and General Counsel at SafeGuard Privacy, about whether universal opt-out mechanisms meet the needs of today's consumers. Andy explains why a single opt-out switch falls short of consumer needs and what more flexible models could enable. He highlights how industry standards can help companies and their vendors transmit privacy preferences across the ad ecosystem and why adoption will depend on consumer pressure and regulatory enforcement. Andy also explores the challenges smaller companies face in meeting privacy compliance requirements and how cooperation among regulators could shape the next phase of privacy enforcement.
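
For readers unfamiliar with the mechanics: GPC travels with each web request as the HTTP header Sec-GPC: 1 (and is exposed to page scripts as navigator.globalPrivacyControl). The sketch below, in Python with a hypothetical honors_gpc helper, shows how a site might detect the signal server-side and suppress tracking before it fires. It is an illustration of the signal's plumbing, not a compliance recipe.

    def honors_gpc(headers: dict) -> bool:
        """Return True if the request carries the Global Privacy Control signal.

        Per the GPC specification, a participating browser sends the header
        "Sec-GPC: 1"; several state privacy laws require treating that signal
        as a valid opt-out of the sale or sharing of personal information.
        """
        return headers.get("Sec-GPC", "").strip() == "1"

    # Minimal usage sketch with a hypothetical incoming request.
    request_headers = {"Sec-GPC": "1", "User-Agent": "ExampleBrowser/1.0"}

    if honors_gpc(request_headers):
        serve_tracking_tags = False   # suppress ad/tracking tags; record the opt-out
    else:
        serve_tracking_tags = True    # normal consent flow (banner, preferences) applies

    print("tracking tags enabled:", serve_tracking_tags)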

10-02
37:40

Navigating the New Rules of Healthcare Advertising

Jeremy Mittler is the Co-founder and CEO of Blueprint Audiences. With nearly two decades in healthcare, advertising, and privacy, Jeremy has shaped how marketers reach patients and providers. At Blueprint, he is creating a new, privacy-safe way to build health audiences that ensures compliance across HIPAA and state privacy laws.

In this episode…

Healthcare marketers face mounting pressure to deliver personalized ads while ensuring compliance across the Health Insurance Portability and Accountability Act (HIPAA) and the growing list of state privacy laws, where gray areas around sensitive and consumer health information make compliance especially complex. Marketers who rely on broad targeting and legacy ad tech tools are finding that old methods no longer meet legal requirements. So, how can companies target health audiences in a way that is effective and aligns with privacy obligations?

Rather than treating privacy as a trade-off with precision, healthcare marketers can start by building a privacy-safe experience for the consumers who see their ads, and optimizing for business goals from there. Proven methods, such as contextual advertising and using consented, opted-in data and aggregated insights derived from personal information, support effective and privacy-forward campaigns. Yet these methods alone are not enough. Marketers and companies alike need to perform due diligence on their vendors and third-party ad tech platforms, especially as AI introduces new risks. Marketers can take simple steps, such as testing consumer opt-outs and exercising their privacy rights on vendor sites, to ensure the technology works as intended.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Jeremy Mittler, Co-founder and CEO of Blueprint Audiences, about how companies can create privacy-safe healthcare audience segments. Jeremy explains why relying solely on HIPAA is no longer sufficient to meet compliance obligations and outlines the challenges companies face while navigating the patchwork requirements of evolving state privacy laws. He details practical methods that allow marketers to reach the right audiences without compromising privacy and describes why vendor due diligence must go beyond checklists, urging marketers to test vendor ad tech platforms and to think like consumers when assessing ad experiences. Jeremy also discusses how AI complicates the boundary between aggregated and personal data and how emerging regulatory trends are reshaping healthcare advertising.

09-25
26:40

How Companies Can Prevent Identity-Based Attacks

Jasson Casey is the CEO and Co-founder of Beyond Identity, the first and only identity security platform built to make identity-based attacks impossible. With over 20 years of experience in security and networking, Jasson has built enterprise solutions that protect global organizations from credential-based threats.

In this episode…

Identity system attacks are on the rise and continue to be a top source of security incidents. Threat actors are using AI deepfakes, stealing user credentials, and taking advantage of weaknesses in the devices people use to connect to company systems. As threat actors become more sophisticated, companies need to find new ways to prevent these incidents rather than just detecting and responding to them. So, what can companies do differently to protect their data and systems?

Most authentication methods still rely on shared credentials like passwords or codes that travel across systems, and any data that moves can be intercepted and stolen by malicious actors. That's why companies like Beyond Identity are helping businesses strengthen their security posture with a platform that eliminates shared credentials by replacing them with device-bound, hardware-backed cryptography. By leveraging the same secure enclave technology used in mobile payment systems, the platform produces unique, unforgeable signatures tied to each user and device. This approach prevents AI impersonation attacks, phishing, and credential theft, whether users are on company-issued or bring-your-own (BYOD) devices.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels sit down with Jasson Casey, CEO and Co-founder of Beyond Identity, to discuss how businesses can prevent identity-based attacks. Jasson explains why chasing AI deepfake detection is less effective than verifying the user and device behind communications. He also shares how Beyond Identity's platform works with existing identity software, enables secure authentication, and provides companies with certainty about user access, including the device used and the conditions under which users log in. Additionally, Jasson highlights how cryptographic signing tools can verify the authenticity of emails, meetings, and other content, helping businesses defend against AI deepfakes.
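
For intuition on why device-bound keys resist credential theft, here is an illustrative challenge-response sketch in Python (using the third-party cryptography package). It shows the general technique, not Beyond Identity's implementation: the private key would live in a non-exportable secure enclave, the server stores only the public key, and nothing secret ever travels.

    # Illustrative challenge-response flow with a device-bound key pair.
    # In production the private key is generated inside a secure enclave/TPM
    # and cannot be exported; it is held in memory here only for demonstration.
    import os
    from cryptography.hazmat.primitives.asymmetric.ec import (
        ECDSA, SECP256R1, generate_private_key,
    )
    from cryptography.hazmat.primitives.hashes import SHA256

    # Enrollment: the key pair is created on the device; only the public key
    # is registered with the server.
    device_private_key = generate_private_key(SECP256R1())
    registered_public_key = device_private_key.public_key()

    # Authentication: the server issues a fresh random challenge. No shared
    # secret (password, one-time code) crosses the network, so nothing can be
    # phished or replayed.
    challenge = os.urandom(32)

    # The device signs the challenge with its enclave-held key.
    signature = device_private_key.sign(challenge, ECDSA(SHA256()))

    # The server verifies the signature; a mismatched user or device raises
    # cryptography.exceptions.InvalidSignature.
    registered_public_key.verify(signature, challenge, ECDSA(SHA256()))
    print("verified: signature is bound to this user and device")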

09-18
27:46

New CCPA Rules: What Businesses Need to Know

Daniel M. Goldberg is a Partner and Chair of the Data Strategy, Privacy & Security Group at Frankfurt Kurnit Klein & Selz PC. He advises on a wide range of privacy, security, and AI matters. His expertise spans from handling high-stakes regulatory enforcement actions to shaping the application of privacy and AI laws. Earlier this year, the California Privacy Lawyers Association named him the "California Privacy Lawyer of the Year."

In this episode…

California is reshaping privacy compliance with its latest updates to the California Consumer Privacy Act (CCPA). These sweeping changes introduce new obligations for businesses operating in California, notably in the areas of Automated Decision-Making Technology (ADMT), cybersecurity audits, and risk assessments. So, what can companies do now to get ahead?

Companies can prepare by understanding the scope of the new rules and whether they apply to their business. The regulations are set to take effect on October 1, 2025, if they are filed with the Secretary of State by August 31; if the filing happens later, the effective date shifts to January 1, 2026. The rules around ADMT are especially complex, with broad definitions that could apply to any tool or system that processes personal data to make significant decisions about consumers. Beyond ADMT, certain companies will also need to conduct comprehensive cybersecurity audits through an independent auditor, a process that may be challenging for smaller organizations. Risk assessments impose an additional obligation by requiring reviews of activities such as processing, selling, or sharing sensitive data and using ADMT for significant decision-making, among others, with attestations submitted to regulators. The new rules make it clear that California regulators also expect companies to maintain detailed documentation and demonstrate accountability through governance.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Daniel Goldberg, Partner and Chair of the Data Strategy, Privacy & Security Group at Frankfurt Kurnit Klein & Selz PC, about how companies can navigate the CCPA's new requirements. From ADMT to mandatory cybersecurity audits and risk assessments, Daniel provides a detailed overview of the complex requirements, explaining their scope and impact on companies. He also outlines how these new rules set the tone for future privacy and AI regulations and why documentation and governance are central to compliance, and he shares practical tips on reviewing AI tool settings to ensure sensitive data and confidential information are not used for AI model training.

09-04
32:01

How AI Is Rewriting the Rules of Cybersecurity

John Graves is an innovative legal leader and Senior Counsel at Nisos Holdings, Inc. He has a diverse legal background at the intersection of law, highly regulated industry, and technology. John has over two decades of legal experience advising business leaders, global privacy teams, CISOs and security teams, product groups, and compliance functions. He is a graduate of the University of Oklahoma.

In this episode…

AI is fundamentally changing the cybersecurity landscape. Threat actors are using AI to move faster, scale attacks, and create synthetic identities that are difficult for companies to detect. At the same time, defenders rely on AI to sift through large amounts of data and separate the signal from the noise to determine whether usernames and email addresses are tied to legitimate users or malicious actors. As businesses rush to adopt AI, how can they do so without creating gaps that leave them vulnerable to risks and cyber threats?

To stay ahead of evolving cyber risks, organizations should conduct tabletop exercises with security and technical teams. These exercises help business leaders understand risks like prompt injection, poisoned data, and social engineering by walking through how AI systems operate and asking what would happen if certain situations occurred. They are most effective when conducted early in the AI lifecycle, giving companies the chance to simulate attack scenarios and identify risks before systems are deployed. Companies also need to establish AI governance because, without oversight of inputs, processes, and outputs, AI adoption carries significant risk.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with John Graves, Senior Counsel at Nisos Holdings, Inc., about how AI is reshaping cyber threats and defenses. John shares how threat actors leverage AI to scale ransomware, impersonate real people, and improve social engineering tactics, while defenders use the technology to analyze data and uncover hidden risks. He explains why the public digital footprints of executives and their families are becoming prime targets for attackers and why companies must take human risk management seriously. John also highlights why establishing governance and conducting tabletop exercises are essential for identifying vulnerabilities and preparing leaders to respond to real-world challenges.

08-28
27:34

The Blueprint for a Global Privacy and Security Program

Robert S. Jett III ("Bob") serves as the first Global Chief Data Privacy Officer at Bunge, where he leads global privacy initiatives and supports key projects in digital transformation, AI, and data management. With over 30 years of legal and in-house counsel experience across manufacturing, insurance, and financial services, he has built and managed global programs for compliance, data privacy, and incident response. Bob has worked extensively across IT, cybersecurity, information security, and corporate compliance teams. He holds a BA in international relations and political science from Hobart College and a JD from the University of Baltimore School of Law. Bob is active in the ACC, IAPP, Georgia Bar Privacy & Law Section, and the Maryland State Bar Association.

In this episode…

Managing privacy and security across multiple jurisdictions has never been more challenging for global companies, as regulations evolve and privacy, security, and AI risks accelerate at the same time. The challenge becomes particularly acute for businesses managing supply chains that span dozens of countries, where they must navigate geopolitical shifts and comply with strict employee data regulations that differ by region. These organizations also face the added complexity of governing AI tools to protect sensitive data. Navigating these challenges requires close coordination between privacy, security, and operational teams so risks can be identified quickly and addressed in real time.

A simple way global companies can address these challenges is by embedding privacy leaders into operational teams. For global companies like Bunge, regular communication between privacy, IT, and cybersecurity teams keeps threats visible in real time, while cross-collaboration helps identify vulnerabilities and mitigate weak points. The company also incorporates environmental, social, and governance (ESG) principles into its privacy framework, using traceability to validate supply chain data and meet regulatory requirements. When it comes to managing emerging technologies like AI, foundational privacy principles apply. Companies need to establish governance for data quality, prompt management, third-party vendors, and automated tools, such as AI notetakers. These steps build transparency, reduce risk, and strengthen trust across the organization.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Robert "Bob" Jett, Global Chief Data Privacy Officer at Bunge, about building and leading a global privacy program. Bob emphasizes the importance of embedding privacy leadership into operational teams, like IT departments, to enable collaboration and build trust. He discusses strategies for adhering to ESG principles, managing global employee data privacy, and applying privacy fundamentals to AI governance. Bob also provides tips for responsible AI use, including the importance of prompt engineering oversight, and explains why relationship-building and transparency are essential for effective global privacy and security programs.

08-21
30:31

Navigating Privacy Compliance When AI Changes Everything

Mason Clutter is a Partner and Privacy Lead at Frost Brown Todd Attorneys, previously serving as Chief Privacy Officer for the US Department of Homeland Security. Mason's practice sits at the intersection of privacy, security, and technology. She works with clients to operationalize privacy and security, helping them achieve their goals and build and maintain trust with their clients.

In this episode…

Companies are facing new challenges trying to build privacy programs that keep up with evolving privacy laws and new AI tools. Laws like Maryland's new privacy law are adding pressure with strict data minimization requirements and expanded protections for sensitive and children's data. These shifts are driving companies to reconsider how and when privacy is built into operations. So, how can companies effectively design privacy programs that address regulatory, operational, and AI-driven risks?

Companies can start by embedding privacy and security measures into their products and services from the start. AI adds another layer of complexity. While organizations are trying to use AI for efficiency, confidential or personal information is often entered into AI tools without knowing how it will be used or where it will go. Vague third-party vendor contract terms and downstream data sharing compound the risk. Staying compliant means understanding each AI use case, reviewing vendor contracts closely, and choosing AI tools that reflect a company's risk tolerance and privacy and security practices.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels chat with Mason Clutter, Partner and Privacy Lead at Frost Brown Todd Attorneys, about how companies can navigate complex privacy, security, and AI challenges. Mason shares practical insights on navigating Maryland's new privacy law, managing vendor contracts, and mitigating downstream AI risks. She explores common privacy misconceptions, including why privacy should not be a one-size-fits-all or checkbox compliance exercise. Mason also addresses growing concerns around AI deepfakes and why regulation alone is not enough without broader public education.

08-14
36:14

How Privacy Is Reshaping the Ad Tech Industry

Allison Schiff is the Managing Editor at AdExchanger, where she covers mobile, Meta, measurement, privacy, and the app economy. Allison received her MA in journalism from the Dublin Institute of Technology in Ireland (her favorite place) and a BA in history and English from Brandeis University in Waltham, Mass.

In this episode…

Ad tech companies are under increasing pressure to evolve their privacy practices. What was once a loosely regulated "wild west" is now being reshaped by regulatory enforcement actions and shifting consumer expectations. Many companies are becoming more selective about their vendors, implementing privacy by design, and embracing data minimization practices after years of unchecked data collection. At the same time, many ad tech companies are rushing to position themselves as AI companies, often without a clear understanding of the risks or how these claims align with consumer trust.

To meet rising regulatory and consumer expectations, some ad tech companies are taking concrete steps to improve their privacy posture. This includes auditing third-party tools, removing unnecessary tracking pixels from websites, and gaining more visibility into how data flows through partner systems. On the AI front, research shows that consumer trust drops when AI-generated content is not clearly labeled and that marketing products as AI-powered makes them less appealing. These findings point to the need for greater transparency in companies' data collection practices, marketing, and use of AI.

In this episode of the She Said Privacy/He Said Security podcast, Jodi and Justin Daniels speak with Allison Schiff, Managing Editor at AdExchanger, about how ad tech companies are adapting to regulatory scrutiny and evolving consumer privacy expectations. Allison shares how the ad tech industry's approach to privacy is maturing and explains how companies are implementing privacy by design, reassessing vendor relationships, and using consent tools more intentionally. She offers insight into how journalists utilize AI while maintaining editorial judgment and raises concerns about AI's impact on critical thinking. Allison also describes the disconnect between AI marketing hype and consumer preferences, and the need for companies to disclose the use of AI-generated content to maintain trust.

08-07
37:44

How to Build a Global Privacy Program That Enables Growth

Heather Kuhn is Privacy, Security, and Technology Counsel at Genuine Parts Company. She is a privacy and technology attorney with nearly two decades of professional cross-industry experience. She teaches at Georgia State College of Law, serves on the Georgia Bar's AI Committee, and formerly chaired its Privacy & Technology Section, leading conversations at the intersection of law, AI, and innovation.

In this episode…

Embedding privacy and security practices into a large, global business requires more than policies. It takes early collaboration, constant relationship building across teams, and a deep understanding of business goals. Privacy programs are most effective when they build consumer trust, increase operational efficiency, meet privacy requirements, and support strategic business goals, like revenue growth and product development. And as companies continue to adopt AI, the same principles apply to managing AI risk. Teams need to evaluate how data is used, assess risks, and adapt existing privacy and security measures to new technologies.

Managing privacy across a massive global company requires building the right partnerships and embedding privacy-by-design principles from the start of projects. Most companies have small but mighty privacy teams, so the key is finding privacy champions across the business to handle operational functions while the privacy team sets global policies and procedures. Data mapping and privacy impact assessments are critical tools that help identify risks and right-size privacy programs. This also extends to the customer experience, where meaningful consent, clear privacy notices, and giving users control strengthen trust. Privacy training is also essential for internal teams, and it works best when it's interactive and relevant to an employee's daily work rather than an abstract compliance requirement.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Heather Kuhn, Privacy, Security, and Technology Counsel at Genuine Parts Company, about operationalizing privacy and security across a global enterprise. Heather explains how early engagement, strong internal relationships, and cross-functional collaboration make it possible to scale privacy programs without slowing the business. She shares how her team uses data mapping and privacy impact assessments to right-size privacy programs and requirements, and she emphasizes the need to embed privacy into customer experiences through clear privacy notices and meaningful consent. Heather also highlights the importance of privacy training tied to employee roles, delivered through in-person sessions and gamified content. And she explains how her department uses generative AI to enhance legal team efficiency and how she approaches privacy risks associated with AI tools and automation.

07-31
22:31

Helping Seniors Avoid Digital Scams, One Click at a Time

Alexandria "Lexi" Lutz is a privacy attorney and the Founder of Opt-Inspire, Inc., a nonprofit dedicated to helping seniors and youth build digital confidence and avoid online scams. By day, she serves as Senior Corporate Counsel at Nordstrom, advising on privacy, cybersecurity, and AI across the retail and technology landscape. In this episode… Online scams are becoming more sophisticated, targeting older adults with devastating financial consequences that often reach tens of thousands of dollars with little recourse. From tech support fraud to AI-driven deepfakes that mimic loved ones' voices, these scams prey on isolation, fear, and digital inexperience. Many families struggle to protect their aging parents and grandparents, especially when conversations about digital risks are met with resistance from loved ones. How can we bridge the digital literacy gap across generations and empower seniors to navigate these evolving threats? The urgency is real. In 2024, seniors lost nearly $5 billion to scams, a 43 percent increase from the previous year. Scammers are using voice cloning, fake emergencies, and fear-based messaging to pressure people into giving up money or sensitive personal information. Education can be a powerful defense, and that's why Opt-Inspire delivers engaging, volunteer-led workshops tailored to senior living communities, teaching practical skills like recognizing fake emails and enabling two-factor authentication. Protecting aging loved ones against technology and AI-driven scams requires proactive and hands-on education. Opt-Inspire equips seniors with the tools and knowledge to stay safe online through engaging, community-based seminars. The nonprofit delivers in-person and volunteer-led workshops tailored to senior living communities, addressing both technical literacy and emotional manipulation tactics. Through scripts, visuals, and a "Make It Personal" toolkit with conversation starters, Opt-Inspire also equips families with resources to discuss digital safety with loved ones in a constructive and relatable way.  In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels talk with Alexandria (Lexi) Lutz, Senior Corporate Counsel at Nordstrom and Founder of Opt-Inspire, about building digital confidence among seniors. Lexi shares how a personal family experience inspired her to launch a nonprofit focused on preventing elder fraud. She delves into the most common scams targeting older adults today, including government impersonation, romance cons, and AI-generated deepfakes. Lexi emphasizes the importance of proactive education, enabling two-factor authentication, and weekly family check-ins. She also offers practical advice and resources for privacy professionals and family members alike who want to make a positive impact.

07-24
40:14

Real AI Risks No One Wants to Talk About and What Companies Can Do About Them

Anne Bradley is the Chief Customer Officer at Luminos. Anne helps in-house legal, tech, and data science teams use the Luminos platform to manage and automate AI risk, compliance, and approval processes, statistical testing, and legal documentation. Anne also serves on the Board of Directors of the Future of Privacy Forum, a nonprofit that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies.

In this episode…

AI is being integrated into everyday business functions, from diagnosing cancer to translating conversations and powering customer service chatbots and autonomous vehicles. While these tools deliver value, they also bring privacy, security, and ethical risks. As organizations dive into adopting AI tools, they often do so before performing risk assessments, establishing governance, and implementing privacy and security guardrails. Without safeguards and internal processes in place, companies may not fully understand how the tools function, what data they collect, or the risks they carry. So, how can companies efficiently assess and manage AI risk as they rush to deploy new tools?

Managing AI risk requires governance and the ability to test AI tools before deploying them. That's why companies like Luminos provide a platform to help companies manage and automate AI risk, compliance, and approval processes, model testing, and legal documentation. The platform allows teams to check for toxicity, hallucinations, and AI bias even when an organization uses high-risk tools like customer-facing chatbots. Embedding practical controls, like pre-deployment testing and assessing vendor risk early, can also help organizations implement AI tools safely and ethically.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Anne Bradley, Chief Customer Officer at Luminos, about how companies can assess and mitigate AI risk. Anne explains the impact of deepfakes on public trust and the need for a regulatory framework to reduce harm. She shares why AI governance, AI use-case risk assessments, and statistical tools are essential for helping companies monitor outputs, reduce unintended consequences, and make informed decisions about high-risk AI deployments. Anne also highlights why it's important for legal and compliance teams to understand the business objectives driving an AI tool request before evaluating its risk.
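
To give a flavor of what pre-deployment output testing can look like, here is a minimal sketch in Python. It is illustrative only, not the Luminos platform: the policy phrases, evaluation prompt, and stubbed model are hypothetical, and a real harness would apply statistical toxicity, hallucination, and bias tests rather than simple keyword screens.

    # Minimal pre-deployment check: run the candidate chatbot over a small
    # evaluation set and flag answers that trip a policy screen for human review.
    FORBIDDEN_PHRASES = {"guaranteed returns", "cannot lose"}   # hypothetical policy list

    def evaluate(chatbot, eval_prompts):
        findings = []
        for prompt in eval_prompts:
            answer = chatbot(prompt)
            flags = [t for t in FORBIDDEN_PHRASES if t in answer.lower()]
            findings.append({"prompt": prompt, "answer": answer, "flags": flags})
        return findings

    # Usage with a stubbed model standing in for the tool under review.
    stub_bot = lambda prompt: "Our premium plan offers guaranteed returns."
    for result in evaluate(stub_bot, ["What returns can I expect?"]):
        status = "NEEDS REVIEW" if result["flags"] else "ok"
        print(status, "-", result["prompt"], "->", result["flags"])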

07-17
36:50

Privacy in the Loop: Why Human Training Is AI's Greatest Weakness and Strength

Nick Oldham is the Chief Operations Officer, USIS, and Global Chief Risk, Privacy and Compliance Officer at Equifax Inc. A forward-thinking legal and operations executive, Nick has a proven track record of driving large-scale transformations by integrating legal expertise with strategic operational leadership. He oversees all enterprise-wide second-line functions, leading initiatives to embed AI, enable data-driven decision-making, and deliver innovative, compliant solutions across a $1.9B business unit. His focus is on building efficient, scalable systems that align with both compliance standards and long-term strategic goals.

In this episode…

Many companies are rushing to adopt AI tools without adequately training their workforce on how to use them responsibly. As AI becomes embedded in daily business operations, the biggest risk isn't the technology itself, but the lack of human understanding around how AI works and what it can do. When teams struggle to understand the differences between machine learning and generative AI, it creates risks and makes it harder to establish appropriate privacy and security guardrails. Human training is AI's greatest weakness and strength, and closing that gap involves rethinking how companies educate and train employees at every level.

The responsible use of AI depends on human judgment. Companies need to embed privacy education, critical thinking, and AI risk awareness into training programs from the start. Employees should be taught how to ask questions, evaluate model behavior, and recognize when personal information is being misused. AI literacy should also extend beyond the workplace. Introducing it in high school or even earlier helps prepare future professionals to navigate complex AI tools and make thoughtful, responsible decisions.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Nick Oldham, Chief Operations Officer, USIS, and Global Chief Risk, Privacy and Compliance Officer at Equifax, about the role of human training in AI literacy. Nick breaks down the components of AI literacy, explains why everyone needs a foundational understanding, and emphasizes the importance of prioritizing privacy awareness when using AI tools. He also highlights ways to embed privacy and security into AI governance programs and provides actionable steps organizations can take to strengthen AI literacy across teams.

07-10
28:22

Where Strategy Meets Reality in AI Governance

Andrew Clearwater is a Partner on Dentons' Privacy and Cybersecurity Team and a recognized authority in privacy and AI governance. Formerly a founding leader at OneTrust, he oversaw privacy and AI initiatives, contributed to key data protection standards, and holds over 20 patents. Andrew advises businesses on responsible tech implementation, helping them navigate global regulations in AI, data privacy, and cybersecurity. A frequent speaker, he offers insight into emerging compliance challenges and ethical technology use.

In this episode…

Many companies are diving into AI without first putting governance in place. They often move forward without defined goals, leadership, or alignment across privacy, security, and legal teams. This leads to confusion about how AI is being used, what risks it creates, and how to manage those risks. Without coordination and structure, programs lose momentum, transactions are delayed, and expectations become harder to meet. So how can companies build a responsible AI governance program?

Building an effective AI governance program starts with knowing what's in use, why it's in use, what data AI tools and systems collect, what risk that creates, and how to manage it. Standards like ISO 42001 and the NIST AI Risk Management Framework help companies guide this process. ISO 42001 offers the benefit of certification and supports cross-functional consistency, while NIST may be better suited for organizations already using it in related areas. Both frameworks help companies define the scope of AI use cases, understand the risks, and inform policies before jumping into controls. Conducting data inventories and utilizing existing risk management processes are also essential for identifying shadow AI introduced by employees or third-party vendors.

In this episode of She Said Privacy/He Said Security, Jodi and Justin Daniels speak with Andrew Clearwater, Partner at Dentons, about how companies can build responsible AI governance programs. Andrew explains how standards and legal frameworks support consistent AI governance implementation and how to encourage alignment between privacy, security, legal, and ethics teams. He also outlines the importance of monitoring shadow AI across third-party vendors and practical steps companies can take to effectively structure their AI governance programs.

07-03
29:22
