The Tech Savvy Lawyer
Author: Michael D.J. Eisenberg
© ℗ 2020 Michael D.J. Eisenberg
Description
The Tech Savvy Lawyer interviews Judges, Lawyers, and other professionals about using technology in the practice of law. An episode may spark an idea and help you in your own pursuit of the business we call "practicing law." Please join us for interesting conversations, enjoyable at any tech skill level!
156 Episodes
My next guest is Nikki Mehrpoo. She is a nationally recognized leader in AI governance for law practices, known for her practical, ethical, and innovation-focused strategies. Today, she details her Triple-E Protocol and shares key steps for safely leveraging AI in legal work. Join Nikki Mehrpoo and me as we discuss the following three questions and more! Based on your pioneering work with "Govern Before You Automate," what are the top three foundational steps every lawyer should take to implement AI responsibly, and what are the top three mistakes lawyers make with AI? What are your top three tips or tricks when using AI in your work? When assessing the next AI platform from a service provider, what are the top three questions lawyers should be asking? In our conversation, we cover the following: 00:00:00 – Welcome and guest's background 🌟 00:01:00 – Current tech setup and cloud-based workflows ☁️ 00:02:00 – Privacy and IP management, not client confidentiality 🔐 00:03:00 – Document deduplication with Effingo 📄 00:04:00 – Hardware: HP Omni Book 7 Laptop, HP monitors, iPhone 💻📱 00:05:00 – Efficiency tools: Text Expander, personal workflow shortcuts ⌨️ 00:06:00 – Balancing technology innovation and risk management ⚖️ 00:07:00 – Adapting to change, ongoing legal tech education 🧑💻 00:08:00 – Triple-E Framework: Educate, Empower, Elevate 🚀 00:09:00 – Governance, supervision duties, policy setting 🛡️ 00:10:00 – Human verification as a standard for all legal AI output 🧑⚖️ 00:12:00 – Real-world examples: AI hallucinations, bias, and due diligence ⚠️ 00:13:00 – IT vs. 
AI expertise, communicating across teams 🛠️ 00:14:00 – Chief AI Governance Officer, governance in legal innovation 🏛️ 00:15:00 – Global compliance, EU AI Act, international standards 🌐 00:16:00 – Hidden AI in legacy software, policy gaps 🔎 00:17:00 – Education as continuous legal responsibility 📚 00:18:00 – Better results through prompt engineering 🔤 00:19:00 – Verify, verify, verify: never trust without review ✔️ 00:20:00 – ABA Formal Opinion 512: standards for responsible legal AI 📜 00:21:00 – Nikki's Triple-E Protocol, governance best practices 📊 00:22:00 – Data origin, bias, and auditability in legal AI systems 🧩 00:23:00 – Frameworks for "govern before you automate" in legal workflows 🔒 00:24:00 – Importance of internal hosting and zero retention policies 🏢 00:25:00 – Maintaining confidentiality with third-party AI and HIPAA compliance 🤫 00:26:00 – Where to find Nikki and connect 🌐 Resources Connect with Nikki Mehrpoo Email: Nikki@MedLegalProfessor.AI Website: https://governbeforeyouautomate.ai Secondary Site: https://igoverai.com LinkedIn: https://linkedin.com/in/nikkimehrpoo Mentioned in the episode ABA Formal Opinion 512: https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/ EU AI Act: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai GoHighLevel: https://blog.gohighlevel.com/highlevel-101-everything-you-need-to-know/ Triple-E AI Governance Protocol: https://igoverai.com Hardware mentioned in the conversation HP External Monitors: https://www.hp.com/us-en/shop/vp/displays HP Omni Book 7 Laptop: https://www.hp.com/us-en/shop/vp/Hp_Omnibook_7_Series iPhone: https://www.apple.com/iphone/ Software & Cloud Services mentioned in the conversation GoHighLevel: https://gohighlevel.com Gamma: https://gamma.app Effingo: https://effingotech.com/ Text Expander: https://textexpander.com Pixart: https://pixart.ai
Laura Moorer, the Law Librarian for the DC Court of Appeals, brings over two decades of experience in legal research, including nearly 14 years with the Public Defender Service for DC. In this conversation, Laura shares her top three tips for crafting precise prompts when using generative AI, emphasizing clarity, specificity, and structure. She also offers insights on how traditional legal research methods—like those used with LexisNexis and Westlaw—can enhance AI-driven inquiries. Finally, Laura offers practical strategies for utilizing generative AI to help lawyers identify and locate physical legal resources, thereby bridging the gap between digital tools and tangible materials. Join Laura and me as we discuss the following three questions and more! What are your top three tips when it comes to precise prompt engineering for your generative AI inquiries? What are your top three tips or tricks when it comes to using old-school prompts like those from the days of LexisNexis and Westlaw in your generative AI inquiries? What are your top three tips or tricks for using generative AI to help lawyers pinpoint actual physical resources? 
In our conversation, we cover the following: [00:40] Laura's Current Tech Setup [03:27] Top 3 Tips for Crafting Precise Prompts in Generative AI [13:44] Bringing Old-School Legal Research Tactics to Generative AI Prompting [20:42] Using Generative AI to Help Lawyers Locate Physical Legal Resources [24:38] Contact Information Resources: Connect with Laura: LinkedIn: linkedin.com/in/laura-moorer Email: lmoorer@dcappeals.gov Instagram: instagram.com/dccalibby/ Mentioned in the episode: Libby Visits the DC Courts 🔗 https://perma.cc/HXA9-NR8B is an archive of http://intranet.gov/attachment/782039870000/1800539/Libby%20visits%20the%20DC%20Courts%20Story%20BookRS.pdf?type=1 from Wednesday, July 2, 2025 Software & Cloud Services mentioned in the conversation: AT&T Internet: att.com/internet/ Bloomberg Law: pro.bloomberglaw.com/ Fastcase: fastcase.com/ Internet Archive: archive.org/ LexisNexis: lexisnexis.com/en-us Spectrum Internet: spectrum.com/internet vLex: vlex.com/ Westlaw: legal.thomsonreuters.com/en/westlaw WorldCat: worldcat.org/
In this episode, Chris Dralla, co-founder & CEO of TypeLaw, shares his journey from the legal profession to developing the AI-powered platform that assists with court brief formatting and compliance. He discusses the platform's features, including citation checking, table of contents creation, and dynamic editing. Chris explains how TypeLaw helps save lawyers 20-40 hours per brief by handling technical aspects and ensuring compliance with local court rules. This allows lawyers to focus on more meaningful tasks. He also emphasizes the platform's security measures, compatibility with multiple document formats, and adaptability to evolving legal requirements. Additionally, Chris touches on customer support and the future of digital legal documentation. Join Chris and me as we discuss the following three questions and more! What are the top three key benefits of using TypeLaw's brief editing features, and how does it compare to traditional word processing software for legal documents? What are the top three security concerns lawyers should consider when using a product like TypeLaw? How does TypeLaw ensure compliance with local court rules across different jurisdictions, and how frequently are these rules updated in the system? In our conversation, we cover the following: [00:43] Chris's Current Tech Setup [11:38] Key Benefits of TypeLaw's Brief Editing vs. Traditional Word Processors [16:27] How TypeLaw Boosts Efficiency for Lawyers [23:08] Top 3 Security Concerns for Lawyers Using TypeLaw [37:15] How TypeLaw Stays Compliant with Local Court Rules [42:40] Contact Information Resources: Connect with Chris: LinkedIn: linkedin.com/in/cdralla/ Website: typelaw.com/ Software & Cloud Services mentioned in the conversation: TypeLaw: typelaw.com/ WordPerfect: wordperfect.com
In this TSL Labs bonus episode, we examine how the Department of Veterans Affairs is leading a historic transformation from traditional compliance frameworks to a dynamic, AI-driven approach called "cyber dominance." This conversation unpacks what this seismic shift means for legal professionals across all practice areas—from procurement and contract law to privacy, FOIA, and litigation. Whether you're advising government agencies, representing contractors, or handling cases where data security matters, this discussion provides essential insights into how continuous monitoring, zero trust architecture, and AI-driven threat detection are redefining professional competence under ABA Model Rule 1.1. 💻⚖️🤖 Join our AI hosts and me as we discuss the following three questions and more! How has federal cybersecurity evolved from the compliance era to the cyber dominance paradigm? 🔒 What are the three technical pillars—continuous monitoring, zero trust architecture, and AI-driven detection—and how do they interconnect? 🛡️ What professional liability and ethical obligations do lawyers now face under ABA Model Rule 1.1 regarding technology competence? ⚖️ In our conversation, we cover the following: [00:00:00] - Introduction: TSL Labs Bonus Podcast on VA's AI Revolution 🎯 [00:01:00] - Introduction to Federal Cybersecurity: The End of the Compliance Era 📋 [00:02:00] - Legal Implications and Professional Liability Under ABA Model Rules ⚖️ [00:03:00] - From Compliance to Continuous Monitoring: Understanding the Static Security Model 🔄 [00:04:00] - The False Comfort of Compliance-Only Approaches 🚨 [00:05:00] - The Shift to Cyber Dominance: Three Integrated Technical Pillars 💪 [00:06:00] - Zero Trust Architecture (ZTA) Explained: Verify Everything, Trust Nothing 🔐 [00:07:00] - AI-Driven Detection and Legal Challenges: Professional Competence Under Model Rule 1.1 🤖 [00:08:00] - The New Legal Questions: Real-Time Risk vs. 
Static Compliance 📊 [00:09:00] - Evolving Compliance: From Paper Checks to Dynamic Evidence 📈 [00:10:00] - Cybersecurity as Operational Discipline: DevSecOps and Security by Design 🔧 [00:11:00] - Litigation Risks: Discovery, Red Teaming, and Continuous Monitoring Data ⚠️ [00:12:00] - Cyber Governance with AI: Algorithmic Bias and Explainability 🧠 [00:13:00] - Synthesis and Future Outlook: Law Must Lead, Not Chase Technology 🚀 [00:14:00] - The Ultimate Question: Is Your Advice Ready for Real-Time Risk Management? 💡 [00:15:00] - Conclusion and Resources 📚 Resources Mentioned in the Episode ABA Model Rule 1.1 - Competent Representation (including technology competence requirement) - https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/ Department of Veterans Affairs (VA) Cybersecurity Initiative - https://www.va.gov/oit/cybersecurity/ DevSecOps Pipelines - Security integration in software development - https://www.devsecops.org/ FedRAMP (Federal Risk and Authorization Management Program) - https://www.fedramp.gov/ FISMA (Federal Information Security Management Act) - https://www.cisa.gov/topics/cyber-threats-and-advisories/federal-information-security-modernization-act Google Notebook AI - AI discussion generation tool - https://notebooklm.google.com/ HIPAA (Health Insurance Portability and Accountability Act) - https://www.hhs.gov/hipaa/index.html NIST Cybersecurity Framework - https://www.nist.gov/cyberframework Red Teaming - Ethical hacking and security testing methodology - https://www.cisa.gov/red-team-assessments Zero Trust Architecture (ZTA) - Federal mandate for security verification - https://www.cisa.gov/zero-trust Software & Cloud Services Mentioned in the Conversation AI-Driven Detection Systems - Automated threat detection and response platforms Automated Compliance Platforms - Dynamic evidence generation systems Continuous Monitoring Systems - Real-time security assessment 
platforms DevSecOps Tools - Automated security testing in software development pipelines Firewalls - Network security hardware devices Google Notebook AI - https://notebooklm.google.com/ Penetration Testing Software - Security vulnerability assessment tools Zero Trust Architecture (ZTA) Solutions - Identity and access verification systems
My next guest is Nick Cohen, Chief Operating Officer of Matador Solutions — a legal marketing think tank and agency — and a newly minted partner at Cohen Injury Law Group. Nick brings a rare dual perspective: he lives the daily grind of running a law firm AND helps over 170 firms across the country use technology and marketing strategy to grow their practice. With more than $1 billion in case value generated for clients, Nick knows what separates the law firms that thrive from the ones that spin their wheels. 🚀 Whether you are just hanging out your shingle or you have been practicing for years and feel overwhelmed by the alphabet soup of SEO, GEO, PPC, and AI, this episode breaks it all down in plain language. Nick shares actionable steps — some of which cost nothing — to help your firm show up where your next great client is already looking. ⚖️ Join Nick Cohen and me as we discuss the following three questions and more! 🤔 What are the top three ways a small or mid-size law firm can leverage AI-driven search — like Google AI Overviews and ChatGPT — to reliably generate better cases, not just more clicks? 💡 For firms that feel overwhelmed by SEO, paid search, and social media, what are the top three pieces of marketing technology or automations they should implement first to turn their website into a true new case acquisition system? 🏆 Looking across $1 billion+ in case value generated for over 170 law firms, what are the top three technology habits the most successful firms share — and what are their less successful peers simply not doing? 
In our conversation, we cover the following: [0:00] 🎤 Introduction & five-star review shoutout [0:45] 👨💼 Nick's background: Matador Solutions, Cohen Injury Law Group, and tech stack overview (Jira, Google Suite, Claude, ChatGPT, WordPress, Slack) [1:30] 💻 Hardware setup: MacBook Pro M4, desktop, HDMI monitor — what Nick runs on daily [3:00] 📱 iPhone, planned obsolescence, and the Apple ecosystem slowdown conversation [4:00] ❓ Question 1: Leveraging AI-driven search (Google AI Overviews, ChatGPT) to get better cases — not just traffic [5:00] 🔍 GEO vs. SEO explained — what is Generative Engine Optimization and why it matters for your law firm right now [6:30] 📖 The difference: SEO = Google ranking; GEO = getting cited by ChatGPT, Claude, Perplexity, and Grok [8:00] 🤖 Schema markup, robots.txt, and opening your website to LLM crawlers — practical steps any firm can take [9:00] 📋 Attorney directory listings (Avvo, Super Lawyers, FindLaw) — are they worth the money in 2026? [10:30] ✍️ Tip #2: High-quality thought leadership content as a GEO and SEO powerhouse [11:30] ⭐ Tip #3: Reviews, reviews, reviews — the single highest-ROI, zero-cost activity for any law firm [12:00] 📲 The "one-click review link" strategy: why text beats email every time [13:00] 😬 How to handle negative reviews — call first, respond professionally, and why a 4.9 rating beats a perfect 5.0 [15:00] ❓ Question 2: Top three marketing tech tools/automations for overwhelmed firms — CallRail, case management software, and understanding your channels [17:30] ❓ Question 3: The technology habits that separate high-growth firms from stagnant ones — intake systems, engagement, and growth mindset [19:30] 🗺️ How Matador Solutions walks a brand-new firm from zero to a steady stream of cases — step by step [22:00] 📬 Where to find Nick Cohen Resources 🔗 Connect with Nick Cohen 📧 Email: nick@matadorsolutions.net 💼 LinkedIn: https://www.linkedin.com/in/nickecohen/ 🌐 Website: matadorsolutions.net 📚 Mentioned in the 
Episode (Non-Hardware / Non-Software) 🎙️ Apple Podcasts — podcasts.apple.com ⚖️ Matador Solutions — Legal marketing agency — matadorsolutions.net 📋 Avvo — Attorney directory — avvo.com ⚖️ Cohen Injury Law Group — Nick's law firm — https://cohenandcohen.net/ ⭐ Facebook Reviews — facebook.com 📊 GEO (Generative Engine Optimization) — The emerging discipline of optimizing for AI-driven search engines ⭐ Google Reviews — google.com/business 📋 FindLaw — Attorney directory — findlaw.com 📋 Super Lawyers — Attorney directory — superlawyers.com ⭐ Yelp — yelp.com 💻 Hardware Mentioned in the Conversation 📱 Apple iPhone 15 — Nick's smartphone (approximate model) — apple.com/iphone 📱 Apple iPhone (latest, annual upgrade) — Michael's smartphone — apple.com/iphone 🖥️ Apple Mac Studio (M3 chip) — Michael's desktop — apple.com/mac 🖥️ Apple MacBook Pro (M4 chip) — Nick's primary laptop — apple.com/macbook-pro ☁️ Software & Cloud Services Mentioned in the Conversation 📞 CallRail — Call tracking & marketing ROI — callrail.com 🤖 ChatGPT (OpenAI) — AI assistant & AI search — chatgpt.com 🤖 Claude (Anthropic) — AI assistant — claude.ai 🤖 Google AI Overviews — AI-powered search summaries — google.com 📊 Google Business Profile — Local SEO & reviews — business.google.com 🔍 Google Workspace / Google Suite — Productivity & search — workspace.google.com 🤖 Grok (xAI) — AI assistant — x.ai/grok 📋 Jira — Project management — atlassian.com/software/jira 🎙️ Libsyn — Podcast hosting — libsyn.com 🤖 Perplexity — AI search engine — perplexity.ai 💬 Slack — Team communication — slack.com 🌐 WordPress — Website platform — wordpress.org 🎧 Enjoy the episode? Please leave us a ⭐⭐⭐⭐⭐ five-star review on Apple Podcasts or wherever you get your podcast feeds!
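💡 The robots.txt step Nick mentions at [8:00] can be as small as a few lines. A minimal sketch, assuming you want the major AI crawlers reading your firm's site; the user-agent strings below are the vendors' publicly documented crawler names at the time of writing, so verify them against each vendor's current documentation before deploying:

```
# robots.txt — explicitly allow documented AI crawlers to read the site

# OpenAI / ChatGPT
User-agent: GPTBot
Allow: /

# Anthropic / Claude
User-agent: ClaudeBot
Allow: /

# Perplexity
User-agent: PerplexityBot
Allow: /

# Everyone else (standard search crawlers)
User-agent: *
Allow: /
```

Place the file at the root of your domain (e.g., yourfirm.com/robots.txt); pair it with schema markup on practice-area pages, as discussed in the episode.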
📌 Too Busy to Read This Week's Editorial? Welcome to the TSL Labs Initiative. 🤖 This week's episode builds on my March 3rd, 2026, editorial "Even Though AI Hallucinations Are Down: Lawyers STILL MUST Verify AI, Guard PII, and Follow ABA Ethics Rules ⚖️🤖," which explains why falling hallucination rates are a misleading comfort blanket for lawyers, and how ABA Model Rules on confidentiality, competence, diligence, candor, supervision, and client communication must govern every AI prompt you run. Our Google LLM Notebook hosts translate the theory into practical workflows you can implement today—from document grounding and tokenization to vendor due diligence and line‑by‑line verification—so you can leverage AI confidently without sacrificing ethics, privilege, or your professional license. You will hear how document grounding changes what LLMs actually do, why uploading active case files to cloud AI tools can quietly trigger Rule 1.6 problems, and how cross‑border data flows, vendor training rights, and retention policies can erode privilege if you do not negotiate them carefully. 🔐 We also unpack practical safeguards like tokenization, internal sandbox testing, and bright‑line "danger zones" where AI must never operate unsupervised—especially on open‑ended research, choice of law, and any task that turns statistical text into real‑world legal risk. Finally, we confront the economic paradox: when AI can compress 100 hours of document review into seconds, but partners must still verify every line to protect their licenses, what exactly are clients paying for—and how does the billable hour survive? 💼 👉 Tune in now to learn how to stay tech‑forward without becoming the next ethics cautionary tale, and start designing AI policies that actually protect your clients, your firm, and your bar license. 
In our conversation, we cover the following 00:00 – Why "96% fewer hallucinations" is still not good enough in law ⚖️ 01:00 – How the remaining 4% error rate can trigger malpractice, sanctions, and ethics violations 02:00 – From IT issue to ethics issue: ABA Model Rules as the real constraint on AI adoption 03:00 – Document grounding 101: turning a free‑floating LLM into a reading‑comprehension engine 04:00 – The hidden danger of "just upload the file": how Rule 1.6 confidentiality is instantly implicated 05:00 – Cloud AI architecture, cross‑border data transfers, GDPR, and privilege risk 🌐 06:00 – Model training nightmares: when your client's trade secrets leak back out through someone else's prompt 07:00 – Negotiating no‑training clauses and ring‑fencing vendor data use (before you upload anything) 08:00 – Tokenization explained: turning John Doe into "Plaintiff 01" without losing legal meaning 🔐 09:00 – What AI does well today: grounded summarization, clause extraction, and playbook‑based redlines 10:00 – The "danger zone" of tasks: open‑ended research, choice of law, and abstract legal reasoning 11:00 – Phantom case law: how LLMs manufacture perfect‑looking but fake citations (and Rule 3.3 candor) 12:00 – Sandboxing AI tools internally and measuring real‑world failure rates against known outcomes 🧪 13:00 – Building bright‑line firm policies around forbidden AI use cases 14:00 – Verification as a workflow, not a suggestion: what Model Rules 5.1 and 5.3 demand from supervisors 15:00 – The efficiency paradox: when partner‑level verification erases associate‑level time savings ⏱️ 16:00 – Making AI verification as routine as a conflict check in your practice 17:00 – Falling hallucination rates, rising risk: why better AI can still make lawyers more vulnerable 18:00 – Client communication under Rule 1.4: when and why clients may be entitled to know you used AI 19:00 – "You can delegate the task, not the liability": Rule 1.2 and ultimate responsibility for AI‑assisted 
work 20:00 – Treating every AI prompt and ToS as a potential ethics document 📝 21:00 – The existential question: if AI drafts in seconds, what exactly are clients paying lawyers for?
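🔐 The tokenization idea from the 08:00 segment can be sketched in a few lines of code. This is a minimal illustration, not any vendor's implementation; the names, tokens, and helper functions are hypothetical:

```python
import re

def tokenize(text, replacements):
    """Swap identifying names for neutral tokens before text goes to an AI tool.

    Returns the tokenized text plus a reverse map for restoring names later.
    """
    reverse_map = {token: name for name, token in replacements.items()}
    for name, token in replacements.items():
        text = re.sub(re.escape(name), token, text)
    return text, reverse_map

def detokenize(text, reverse_map):
    """Restore the original names after the AI output comes back."""
    for token, name in reverse_map.items():
        text = text.replace(token, name)
    return text

# Hypothetical matter: "John Doe" becomes "Plaintiff 01" without losing legal meaning.
replacements = {"John Doe": "Plaintiff 01", "Acme Corp": "Defendant 01"}
safe, rev = tokenize("John Doe sued Acme Corp after John Doe was injured.", replacements)
print(safe)  # Plaintiff 01 sued Defendant 01 after Plaintiff 01 was injured.
print(detokenize(safe, rev))  # John Doe sued Acme Corp after John Doe was injured.
```

The point of the pattern, as discussed in the episode, is that the mapping table stays inside the firm while only tokenized text leaves it.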
In this special episode, I join Professor Wondracek virtually to guest-host Capital's very first Podcast Club session for a live conversation about AI, legal ethics, deepfakes, and metadata. We talk candidly with law students about how AI-generated evidence, consumer AI tools, and digital footprints are already impacting sanctions, privilege, and professional responsibility, then translate those issues into practical safeguards for everyday practice. Whether you are in law school, running a small firm, or managing litigation for a larger organization, this inaugural Podcast Club episode shows how to stay competent, secure, and credible when AI and technology are part of your case strategy. Questions section Join Professor Jennifer Wondracek and me as we discuss the following three questions and more! How do deepfakes and manipulated digital evidence challenge a lawyer's ethical duties under core rules on competence, candor to the tribunal, and honesty? What can we learn from recent cases involving deepfake videos, privilege risks in consumer AI tools, and sanctions for hallucinated citations when designing our own AI workflows? How can lawyers and law students build realistic, sustainable practices for reviewing metadata, using VPNs and secure Wi‑Fi, and choosing secure legal AI and eDiscovery tools? Timestamps In our conversation, we covered the following: 00:00 – Welcome to Capital University Law School's first Podcast Club: live recording and today's focus on AI and ethics 🎓 01:00 – Introducing Michael D.J. 
Eisenberg as guest host and his work with veterans, and The Tech-Savvy Lawyer.Page blog and podcasting 📚 02:30 – What is a deepfake, and how a staged "Bethesda incident" highlights the real-world risk of fake video evidence 🚨 04:00 – Applying competence rules to technology: why "I didn't know" is not a sustainable defense for lawyers 05:00 – Everyday tech risks: public Wi‑Fi, airports, coffee shops, and why lawyers must use VPNs when client information is involved 🌐 06:30 – Discussing NordVPN, ExpressVPN, and how unsecured sessions can compromise client portals, trust accounts, and email 🔐 07:30 – First steps in vetting digital evidence: what to look for in image files and when to call in a forensic expert 08:30 – Lessons from deepfake litigation and obviously altered video: shadows, color-in-black-and-white, and credibility with the court 🎥 10:00 – Candor to the tribunal and rules against dishonesty, fraud, and misrepresentation in the AI era 11:00 – Student question: can you rely on built-in operating system tools to review metadata, or do you need specialist software? 
🖼️ 13:30 – Live demo: opening file properties, reading timestamps, device info, and geotags to validate or challenge evidence 16:00 – When scrubbed metadata makes sense, when it doesn't, and how to request original metadata in discovery 18:00 – Five practical safeguards for new and experienced lawyers: education, protocols, client transparency, updated letters, and constant monitoring of AI changes ✅ 20:00 – Why refusing to learn AI and tech is itself a risk to your bar license and your clients' interests 21:00 – Student Q&A: low-resource firms, large volumes of data, and using sampling plus AI to stretch limited budgets 22:30 – Using legal AI to surface anomalies in documents and metadata while still protecting privilege 23:00 – How consumer AI terms and conditions can put privilege and work product at risk, and what to look for in safer options ⚠️ 24:00 – Free vs paid AI accounts: retention, training, and why personally identifiable information doesn't belong in general chatbots 25:00 – Evaluating legal AI vendors: zero retention, encryption, prompt confidentiality, and subpoena requirements 26:00 – Using tightly controlled legal research platforms and "vault" environments to access models like GPT or Claude securely 🧠 27:00 – Documenting prompts and AI use so that, if questioned by a court or bar, you can show reasonable diligence 28:30 – Reasonable metadata review in practice: random sampling, documenting your process, and knowing when to bring in eDiscovery tools 30:00 – How modern eDiscovery platforms surface metadata and support deeper analysis at scale 📂 31:00 – Staying current on AI and tech: newsletters, bar alerts, court updates, and following The Tech-Savvy Lawyer 32:30 – AI hallucinated citations and sanctions: how one New York matter became a warning to the entire profession 💸 34:30 – Firm-wide consequences when AI misuse becomes a pattern: reputational damage, client impact, and even firm dissolution 36:00 – Owning mistakes, repairing trust with 
judges, and why transparency matters more than perfection 37:00 – Live giveaway of The Lawyer's Guide to Podcasting during the first Podcast Club session 🎲 38:00 – Inviting students to Capital's upcoming summit/bootcamp and to dinner at the Red Door Tavern, plus closing thoughts on the future of tech competence 🍽️ Resources Connect with Professor Jennifer Wondracek: LinkedIn: https://www.linkedin.com/in/jennifer-wondracek E-Mail: jwondracek@law.capital.edu Software & Services: NordVPN - https://nordvpn.com/ ExpressVPN - https://www.expressvpn.com/ Lexis - https://www.lexisnexis.com/ Westlaw - https://legal.thomsonreuters.com/en/westlaw
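🖼️ The built-in metadata review demonstrated live in the session can be approximated with standard operating-system facilities. A minimal sketch (the file name is hypothetical): filesystem timestamps and size are available everywhere, while embedded metadata such as EXIF geotags or device info generally needs specialist or forensic tools, as the episode notes:

```python
import os
import datetime

def file_properties(path):
    """Read the basic 'file properties' metadata available from the OS itself."""
    st = os.stat(path)
    return {
        "size_bytes": st.st_size,
        "modified": datetime.datetime.fromtimestamp(st.st_mtime).isoformat(),
        # st_ctime is metadata-change time on Unix, creation time on Windows
        "changed_or_created": datetime.datetime.fromtimestamp(st.st_ctime).isoformat(),
    }

# Example with a file we create ourselves:
with open("exhibit_a.txt", "w") as f:
    f.write("sample evidence file")
print(file_properties("exhibit_a.txt"))
```

Even this basic check can surface a timestamp that contradicts a party's story; anything deeper (geotags, editing history, device identifiers) is where the episode recommends eDiscovery platforms or a forensic expert.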
Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 We unpack the February 23, 2026, editorial "AI may not be your co‑counsel—and a recent SDNY decision just made that painfully clear. ⚖️🤖" Our Google Notebook LLM hosts break down why a single click on a public AI tool's Terms of Use can trigger a privilege waiver, and what "tech competence" really means in 2026—especially after United States v. Hoeppner and Judge Jed Rakoff's wake-up-call analysis of confidentiality and third-party disclosure risk. 🔗 Read the full editorial on The Tech-Savvy Lawyer.Page and share this episode with a colleague who is experimenting with AI in client matters. In our conversation, we cover the following 00:00 — The "superhuman assistant" promise, and the procedural nightmare risk. 🧠⚖️ 00:01 — The core warning: AI use can "blow a hole" in privilege. 00:02 — Editorial overview: "The AI Privilege Trap" by Michael D.J. Eisenberg. 00:02 — The case: United States v. Hoeppner (SDNY) and why it matters. 00:03 — Why Judge Jed Rakoff's opinion gets attention (tech-literate, influential). 00:03 — The facts: defendant drafts with a public AI tool, then sends outputs to counsel. 00:04 — The court's conclusion: no attorney-client privilege, no work product protection. 00:05 — Privilege basics applied to AI: "confidential + lawyer" and why AI fails that test. 00:06 — The Terms-of-Use problem: inputs/outputs may be collected and shared. 🧾 00:07 — The "stranger on the street" analogy: you can't retroactively make it confidential. 00:08 — PII and client facts: why pasting sensitive data into public AI is high-risk. 00:08 — ABA Model Rule 1.1: competence includes understanding tech risks. 00:09 — ABA Model Rule 1.6: confidentiality and waiver risk with public AI. 00:10 — "Reasonable safeguards": read policies, adjust settings, and know training/logging. 00:11 — Public vs. enterprise AI: why contracts and "walled gardens" matter. 
00:11 — Legal research AI examples discussed: Lexis/Westlaw-style AI offerings. 00:12 — ABA Model Rules 5.1 & 5.3: supervise AI like a nonlawyer assistant/vendor. 00:13 — Redefining "tech-savvy lawyer" in 2026: judgment and restraint. 🧭 00:14 — The "straight-face test": could you defend confidentiality after a judge reads the policy? 00:15 — Client-side risk: clients can sabotage privilege before contacting counsel. 00:16 — Practical takeaway: check settings, read the fine print, keep true secrets offline (for now). 🔒 RESOURCES Mentioned in the episode ABA Model Rules of Professional Conduct (Rules 1.1, 1.4, 1.6, 5.1, 5.3) Software & Cloud Services mentioned in the conversation Lexis (Lexis+ AI category mentioned) — https://www.lexisnexis.com/ Microsoft Word — https://www.microsoft.com/microsoft-365/word Public generative AI "chatbot" tools (general category) — https://en.wikipedia.org/wiki/Chatbot Westlaw (Westlaw AI category mentioned) — https://legal.thomsonreuters.com/en/products/westlaw
Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this episode, we discuss our February 16, 2026, editorial, "Lawyers and AI Oversight: What the VA's Patient Safety Warning Teaches About Ethical Law Firm Technology Use! ⚖️🤖" and explore why treating AI-generated drafts as hypotheses—not answers—is quickly becoming a survival skill for law firms of every size. We connect a real-world AI failure risk at the Department of Veterans Affairs to the everyday ways lawyers are using tools like chatbots, and we translate ABA Model Rules into practical oversight steps any practitioner can implement without becoming a programmer. In our conversation, we cover the following: 00:00:00 – Why conversations about the future of law default to Silicon Valley, and why that's a problem ⚖️ 00:01:00 – How a crisis at the U.S. Department of Veterans Affairs became a "mirror" for the legal profession 🩺➡️⚖️ 00:03:00 – "Speed without governance": what the VA Inspector General actually warned about, and why it matters to your practice 00:04:00 – From patient safety risk to client safety and justice risk: the shared AI failure pattern in healthcare and law 00:06:00 – Shadow AI in law firms: staff "just trying out" public chatbots on live matters and the unseen risk this creates 00:07:00 – Why not tracking hallucinations, data leakage, or bias turns risk management into wishful thinking 00:08:00 – Applying existing ABA Model Rules (1.1, 1.6, 5.1, 5.2, and 5.3) directly to AI use in legal practice 00:09:00 – Competence in the age of AI: why "I'm not a tech person" is no longer a safe answer 🧠 00:09:30 – Confidentiality and public chatbots: how you can silently lose privilege by pasting client data into a text box 00:10:30 – Supervision duties: why partners cannot safely claim ignorance of how their teams use AI 00:11:00 – Candor to tribunals: the real ethics problem behind AI-generated fake cases and citations 00:12:00 – 
From slogan to system: why "meaningful human engagement" must be operationalized, not just admired 00:12:30 – The key mindset shift: treating AI-assisted drafts as hypotheses, not answers 🧪 00:13:00 – What reasonable human oversight looks like in practice: citations, quotes, and legal conclusions under stress test 00:14:00 – You don't need to be a computer scientist: the essential due diligence questions every lawyer can ask about AI 00:15:00 – Risk mapping: distinguishing administrative AI use from "safety-critical" lawyering tasks 00:16:00 – High-stakes matters (freedom, immigration, finances, benefits, licenses) and heightened AI safeguards 00:16:45 – Practical guardrails: access controls, narrow scoping, and periodic quality audits for AI use 00:17:00 – Why governance is not "just for BigLaw" and how solos can implement checklists and simple documentation 📋 00:17:45 – Updating engagement letters and talking to clients about AI use in their matters 00:18:00 – Redefining the "human touch" as the safety mechanism that makes AI ethically usable at all 🤝 00:19:00 – AI as power tool: why lawyers must remain the "captain of the ship" even when AI drafts at lightning speed 🚢 00:20:00 – Rethinking value: if AI creates the first draft, what exactly are clients paying lawyers for? 00:20:30 – Are we ready to bill for judgment, oversight, and safety instead of pure production time? 00:21:00 – Final takeaways: building a practice where human judgment still has the final word over AI RESOURCES Mentioned in the episode American Bar Association Model Rules of Professional Conduct Interview by Terry Gerton of the Federal News Network of Charyl Mason, Inspector General of the Department of Veterans Affairs, "VA rolled out new AI tools quickly, but without a system to catch mistakes, patient safety is on the line". Software & Cloud Services mentioned in the conversation ChatGPT — https://chat.openai.com/ Lexis - https://www.lexisnexis.com Westlaw - https://legal.thomsonreuters.com
My next guest is Justin McCallan, founder of StrongSuit, an AI-powered litigation platform built to transform how litigators handle legal research, document review, and drafting while keeping lawyers firmly in control. In this episode, Justin and I dig into practical, real-world workflows that solos, small firms, and big-firm litigators can use today and over the next few years to change the economics, pace, and strategy of litigation—without sacrificing accuracy, ethics, or the quality of advocacy. Join Justin and me as we discuss the following three questions and more! What are the top three ways litigators should be using AI tools like StrongSuit right now to change the economics and pace of litigation without sacrificing accuracy, ethics, or quality of advocacy? What are the top three mistakes lawyers make when adopting AI for litigation, and what practical workflows help lawyers stay in the loop and use AI as a force multiplier instead of a risk? Looking ahead to 2026 and beyond, what are the top three AI-driven workflows every litigator should master to stay competitive, and how can platforms like StrongSuit help build those capabilities into day-to-day practice? In our conversation, we cover the following 00:00 – Welcome and guest introduction Justin joins the show and shares his current tech setup at his desk. 00:00–01:00 – Justin's current tech stack Lenovo laptop, ultra-wide monitor, and regular use of StrongSuit, ChatGPT, and Gemini for different AI tasks. Everyday tools: Microsoft Word and Power BI for analytics and fast decision-making. 01:00–02:00 – Android vs. iPhone for AI use Why Justin has been on Android for 17 years and how UI/UX familiarity often drives device choice more than AI capability. 02:00–05:30 – Q1: Top three ways litigators should be using AI right now Using AI for end-to-end legal research across 11 million precedential U.S. cases to build litigation outlines and identify key authorities. 
Scaling document review so AI surfaces relevant documents and synthesizes insights while lawyers focus on strategy and judgment. Leveraging AI for drafting and editing—improving style, clarity, and consistency beyond traditional spelling and grammar checks. 05:30–07:30 – StrongSuit vs. basic tools like Word grammar check How StrongSuit aims to "up-level" a lawyer's writing, not just catch typos. Stylistic improvements, clarity enhancements, and catching subtle inconsistencies in legal documents. 06:00–08:00 – AI context limits and scaling doc review Constraints of large models' context windows (roughly 1M tokens ≈ 750 pages). How StrongSuit runs multiple AI agents in parallel, each handling small page sets with heuristics to maintain cohesion and share insights. 08:00–09:00 – Handling tens of thousands of documents How StrongSuit can handle roughly 10,000–50,000 pages at a time, with the ability to scale further for enterprise matters. 09:00–11:30 – Origin story of StrongSuit Why Justin saw a once-in-a-generation opportunity when large language models emerged and how law, with its precedent and text-heavy nature, is especially suited to AI. StrongSuit's focus on litigators: supporting lawyers from intake through trial while keeping them in the loop at every step. 11:30–13:30 – From intake to brief drafting in minutes Generating full litigation outlines, research, and analysis in about ten minutes, then moving directly into drafting memos, briefs, complaints, and motions. StrongSuit's long-term goal: automating 50–99% of major litigation workflows by the end of 2026 while preserving lawyer control and judgment. 12:00–14:30 – How StrongSuit tackles hallucinations Building a full database of all precedential U.S. cases enriched with metadata: parties, summaries, holdings, and more. Validating citations by checking whether the Bluebook citation actually exists in StrongSuit's case database before surfacing it to the user. 
Why lawyers should still review cases on-platform before filing, even when AI has filtered out hallucinations. 14:30–16:30 – Coverage and jurisdictions Coverage of all U.S. jurisdictions, federal and state, focused on precedential cases. Handling most regulations from administrative agencies, and limits around local ordinances. Uploading your own case files and using complaints and prior research as inputs into StrongSuit workflows. 15:00–17:00 – Security and confidentiality for litigators SOC 2 compliance and industry-standard encryption at rest and in transit. No model training on user data. Optional end-to-end encryption that can even prevent developers from accessing case content, using local encryption keys. 16:30–20:30 – Q2: Top mistakes lawyers make when adopting AI for litigation Mistake #1: Talking about AI instead of diving in with structured experiments and sanitized documents. Using a framework to identify high-impact tasks: high volume, repetitive work, and heavy data/analysis (e.g., doc review, research, contract drafting). How to shortlist tools: look for SOC 2, real product depth, awards, and a focus on your specific workflows. Mistake #2: Expecting immediate mastery instead of moving through predictable adoption stages—from learning the tool, to daily use, to stringing workflows together. 20:30–22:30 – Building firm-wide AI workflows over time Moving from isolated experiments to integrated, low-friction workflows, such as automatic intake-to-research pipelines. Using client intake audio or transcripts to automatically extract facts, issues, and research paths. 22:30–24:30 – Time constraints and "no-time" lawyers Why lawyers don't need to be "technical" to use StrongSuit. Reframing AI as text-based tools where lawyers' writing skills and analytical thinking are assets, not obstacles. 
24:00–26:00 – Practical workflows beyond intake Using AI to prepare for expert depositions, including reviewing valuation analyses, flagging departures from market consensus, and generating targeted questions. Reinforcing the value of AI-enhanced legal research and drafting as core litigation workflows. 26:00–29:30 – Q3: 2026 and beyond – AI-driven workflows every litigator should master Rapid improvement of baseline models (e.g., jumping from single-digit to high double-digit performance on difficult benchmarks year over year). The idea of "tipping points," where small performance gains turn AI from marginally useful to essential in specific tasks. Why legal research is a great training ground for understanding where AI excels, where it falls short, and how to divide labor between human and machine. The value of learning basic prompting skills to get more from AI systems, even when platforms offer visual workflows. 29:30–32:30 – Will workflows actually change—or just get better? Why Justin expects familiar litigation workflows (doc review, research, drafting) to remain structurally similar, but become far faster and more sophisticated. AI agents handling the grind work while lawyers focus on synthesis, judgment, and strategy. A future where "AI + lawyer vs. AI + lawyer" resembles high-level chess: same rules, but much deeper thinking on both sides. 32:30–End – Where to find Justin and StrongSuit How to connect with Justin and learn more about StrongSuit's litigation tools. 
Resources Connect with Justin Justin McCallan on LinkedIn – https://www.linkedin.com/in/justin-mccallon/ StrongSuit website – https://www.strongsuit.com Hardware mentioned in the conversation Android smartphone – https://www.android.com Lenovo laptop – https://www.lenovo.com Software & Cloud Services mentioned in the conversation ChatGPT – https://chat.openai.com Gemini – https://gemini.google.com Microsoft Power BI – https://powerbi.microsoft.com Microsoft Word – https://www.microsoft.com/microsoft-365/word StrongSuit AI litigation platform – https://www.strongsuit.com 🤖
Everyday devices can capture extraordinary evidence, but the same tools can also manufacture convincing fakes. 🎥⚖️ In this episode, we unpack our February 9, 2026, editorial on how courts are punishing fake digital and AI-generated evidence, then translate the risk into practical guidance for lawyers and legal teams. You'll hear why judges are treating authenticity as a frontline issue, what ethical duties get triggered when AI touches evidence or briefing, and how a simple "authenticity playbook" can help you avoid career-ending mistakes. ✅ In our conversation, we cover the following 00:00:00 – Preview: From digital discovery to digital deception, and the question of what happens when your "star witness" is actually a hallucination or deepfake 🚨 00:00:20 – Introducing the editorial "Everyday Tech, Extraordinary Evidence Again: How Courts Are Punishing Fake Digital and AI Data." 📄 00:00:40 – Welcome to the Tech-Savvy Lawyer.Page Labs Initiative and this AI Deep Dive Roundtable 🎙️ 00:01:00 – Framing the episode: flipping last month's optimism about smartphones, dash cams, and wearables as case-winning "silent witnesses" to their dark mirror—AI-fabricated evidence 🌗 00:01:30 – How everyday devices and AI tools can both supercharge litigation strategy and become ethical landmines under the ABA Model Rules ⚖️ 00:02:00 – Panel discussion opens: revisiting last month's "Everyday Tech, Extraordinary Evidence" AI bonus and the optimism around smartphone, smartwatch, and dash cam data as unbiased proof 📱⌚🚗 00:02:30 – Remembering cases like the Minnesota shooting and why these devices were framed as "ultimate witnesses" if the data is preserved quickly enough 🕒 00:03:00 – The pivot: same tools, new threats—moving from digital discovery to digital deception as deepfakes and hallucinations enter the evidentiary record 🤖 00:03:30 – Setting the "mission" for the episode: examining how courts are reacting to AI-generated "slop" and deepfakes, with an increasingly aggressive 
posture toward sanctions 💣 00:04:00 – Why courts are on high alert: the "democratization of deception," falling costs of convincing video fakes, and the collapse of the old presumption that "pictures don't lie" 🎬 00:04:30 – Everyday scrutiny: judges now start with "Where did this come from?" and demand details on who created the file, how it was handled, and what the metadata shows 🔍 00:05:00 – Metadata explained as the "data about the data"—timestamps, software history, edit traces—and how it reveals possible AI manipulation 🧬 00:06:00 – Entering the "sanction phase": why we are beyond warnings and into real penalties for mishandling or fabricating digital and AI evidence 🚫 00:06:30 – Horror Story #1 (Mendones v. Cushman & Wakefield, Cal. Super. Ct. 2025): plaintiffs submit videos, photos, and screenshots later determined to be deepfakes created or altered with generative AI 🧨 00:07:00 – Judge Victoria Kolakowski's response: finding that the deepfakes undermined the integrity of judicial proceedings and imposing terminating sanctions—"death penalty" for the lawsuit ⚖️ 00:07:30 – How a single deepfake "poisons the well," destroying the court's trust in all of a party's submissions and forfeiting their right to the court's time 💥 00:08:00 – Horror Story #2 (S.D.N.Y. 2023): the New York "hallucinating lawyer" case where six imaginary cases generated by ChatGPT were filed without verification 📚 00:08:30 – Rule 11 sanctions and humiliation: Judge Castel's order, monetary penalty, and the requirement to send apology letters to real judges whose names were misused ✉️ 00:09:00 – California follow-on: appellate lawyer Amir Mostafavi files an appeal brief with 21 fake citations, triggering a $10,000 sanction and a finding that he did not read or verify his own filing 💸 00:09:30 – Courts' reasoning: outsourcing your job to an AI tool is not just being wrong—it is wasting judicial resources and taxpayer money 🧾 00:10:00 – Do we need new laws? 
Why Michael argues that existing ABA Model Rules already provide the safety rails; the task is to apply them to AI and digital evidence, not to reinvent them 🧩 00:10:20 – Rule 1.1 (competence): why "I'm not a tech person" is no longer a viable excuse if you use AI to enhance video or draft briefs without understanding or verifying the output 🧠 00:11:00 – Rule 1.6 (confidentiality): the ethical minefield of uploading client dash cam video or wearable medical data to consumer-grade AI tools and risking privilege leakage ☁️ 00:11:30 – Training risk: how client data can end up in model training sets and why "quick AI summaries" can inadvertently expose secrets 🔐 00:12:00 – Rules 3.3 and 4.1 (candor and truthfulness): presenting AI-altered media as original or failing to verify AI output can now be treated as misrepresentation 🤥 00:12:30 – Rules 5.1–5.3 (supervision): why partners and supervising lawyers remain on the hook for juniors, staff, and vendors who misuse AI—even if they didn't personally type the prompts 🧑💼 00:13:00 – Authenticity Playbook, Step 1: mindset shift—never treat AI as a "silent co-counsel"; instead, treat it like a very eager, very inexperienced, slightly drunk intern who always needs checking 🍷🤖 00:13:30 – Authenticity Playbook, Step 2: preserve the original and disclose any AI enhancement; build a clean chain of custody while staying transparent about edits 🎞️ 00:14:00 – Authenticity Playbook, Step 3: using forensic vendors as authenticity firewalls—experts who can certify that metadata and visual cues show no AI manipulation 🛡️ 00:14:30 – Authenticity Playbook, Step 4: "train with fear" by showing your team real orders, sanctions, and public shaming rather than relying on abstract ethics lectures ⚠️ 00:15:00 – Authenticity Playbook, Step 5: documenting verification steps—logging files, tools, and checks so you can demonstrate good faith if a judge questions your evidence 📝 00:16:00 – Bigger picture: the era of easy, unchallenged digital 
evidence is over; mishandled tech can now produce "extraordinary sanctions" instead of extraordinary evidence 🧭 00:16:30 – Authenticity as "the moral center of digital advocacy": if you cannot vouch for your digital evidence, you are failing in your role as an advocate 🏛️ 00:17:00 – Future risk: as deepfakes become perfect and nearly impossible to detect with the naked eye, forensic expertise may become a prerequisite for trusting any digital evidence 🔬 00:17:30 – "Does truth get a price tag?"—whether justice becomes a luxury product if only wealthy parties can afford authenticity firewalls and expert validation 💼 00:18:00 – Closing reflections: fake evidence, real consequences, and the call to verify sources and check metadata before you file ✅ 00:18:30 – Closing: invitation to visit Tech-Savvy Lawyer.Page for the full editorial, resources, and to like, subscribe, and share with colleagues who need to stay ahead of legal tech innovation 🌐 Resources Cases In Mendones v. Cushman & Wakefield, Inc. (Cal. Super. Ct. Alameda County, 2025), plaintiffs submitted multiple videos, photos, and screenshots that the court determined were deepfakes or altered with generative AI.📹 Judge Victoria Kolakowski found intentional submission of false testimony and imposed terminating sanctions, dismissing the case outright and emphasizing that deepfake evidence "fundamentally undermines the integrity of judicial proceedings."⚖️ In New York, two lawyers became infamous in 2023 after filing a brief containing six imaginary cases generated by ChatGPT; Judge P. 
Kevin Castel sanctioned them under Rule 11 for abandoning their responsibilities and failing to verify the authorities they cited.📑 They were ordered to pay a monetary penalty and to notify the real judges whose names had been falsely invoked, a reputational hit that far exceeded the dollar amount.💸 A California appellate lawyer, Amir Mostafavi, was later fined $10,000 for filing an appeal with twenty‑one fake case citations generated by ChatGPT.💻 The court stressed that he had not read or verified the AI‑generated text, and treated that omission as a violation of court rules and a waste of judicial resources and taxpayer money.⚠️ ABA Model Rules Rule 1.1 (Competence): You must understand the benefits and risks of relevant technology, which now clearly includes generative AI and deepfake tools.⚖️ Using AI to draft or "enhance" without checking the output is not a harmless shortcut—it is a competence problem. Comment 8 already imposes a duty of technological competence; the new sanctions landscape simply clarifies the stakes.📚 Rule 1.6 (Confidentiality): Uploading client videos, wearable logs, or sensitive communications to consumer‑grade AI sites can expose them to unknown retention and training practices, risking confidentiality violations.🔐 Rule 3.3 (Candor to the Tribunal) and Rule 4.1 (Truthfulness): Presenting AI‑altered video or fake citations as if they were genuine is the very definition of misrepresentation, as the New York and California sanction orders make clear.⚠️ Even negligent failure to verify can be treated harshly once the court's patience for AI excuses runs out. Rules 5.1–5.3 (Supervision): Supervising lawyers must ensure that associates, law clerks, and vendors understand that AI outputs are starting points, not trustworthy final products, and that fake or manipulated digital evidence will not be tolerated.👥
📌 Too Busy to Read This Week's Editorial? Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this Tech-Savvy Lawyer Page Labs Initiative episode, AI co-hosts walk through how high‑profile "legal tech wars" between practice‑management vendors and AI research startups can push your client data into the litigation spotlight and create real ethics exposure under ABA Model Rules 1.1, 1.6, and 5.3. We'll explore what happens when core platforms face federal lawsuits, why discovery and forensic audits can put confidential matters in front of third parties, and how API lockdowns, stalled product roadmaps, and forced sales can grind your practice operations to a halt. More importantly, you'll get a clear five‑step action plan—inventorying your tech stack, confirming data‑export rights, mapping backup providers, documenting diligence, and communicating with clients—that works even if you consider yourself "moderately tech‑savvy" at best. Whether you're a solo, a small‑firm practitioner, in‑house, or simply AI‑curious, this conversation will help you evaluate whether you are the supervisor of your legal tech—or its hostage. 🔐 In our conversation, we cover the following 00:00:00 – Setting the stage: Legal tech wars, "Godzilla vs. Kong," and why vendor lawsuits are not just Silicon Valley drama for spectators. 00:01:00 – Introducing the Tech-Savvy Lawyer Page Labs Initiative and the use of AI-generated discussions to stress-test legal tech ethics in real-world scenarios. 00:02:00 – Who's fighting and why it matters: Clio as the "nervous system" of many firms versus Alexi as the "brainy intern" of AI legal research. 00:03:00 – The client data crossfire: How disputes over data access and training AI tools turn your routine practice data into high-stakes litigation evidence. 
00:04:00 – Allegations in the Clio–Alexi dispute, from improper data access to claims of anti-competitive gatekeeping of legal industry data. 00:05:00 – Visualizing risk: Client files as sandcastles on a shelled beach and why this reframes vendor fights as ethics issues, not IT gossip. 00:06:00 – ABA Model Rule 1.1 (Competence): What "technology competence" really entails and why ignorance of vendor instability is no longer defensible. 00:07:00 – Continuity planning as competence: Injunctions, frozen servers, vendor shutdowns, and how missed deadlines can become malpractice. 00:08:00 – ABA Model Rule 1.6 (Confidentiality): The "danger zone" of treating the cloud like a bank vault and misunderstanding who really holds the key. 00:09:00 – Discovery risk explained: Forensic audits, third‑party access, protective orders that fail, and the cascading impact on client secrets. 00:10:00 – Data‑export rights as your "escape hatch": Why "usable formats" (CSV, PDF) matter more than bare contractual promises. 00:11:00 – Practical homework: Testing whether you can actually export your case list today, not during a crisis. 00:12:00 – ABA Model Rule 5.3 (Supervision): Treating software vendors like non‑lawyer assistants you actively supervise rather than passive utilities. 00:13:00 – Asking better questions: Uptime, security posture, and whether your vendor is using your data in its own defense. 00:14:00 – Operational friction: Rising subscription costs, API lockdowns, broken integrations, and the return of manual copy‑pasting. 00:15:00 – Vaporware and stalled product roadmaps: How litigation diverts engineering resources away from features you are counting on. 00:16:00 – Forced sales and 30‑day shutdown notices: Data‑migration nightmares under pressure and why waiting is the riskiest strategy. 00:17:00 – The five‑step moderate‑tech action plan: Inventory dependencies, review contracts, map contingencies, document diligence, and communicate with nuance. 
00:18:00 – Turning risk management into a client‑facing strength and part of your value story in pitches and ongoing relationships. 00:19:00 – Reframing legal tech tools as members of your legal team rather than invisible utilities. 00:20:00 – "Supervisor or hostage?": The closing challenge to check your contracts, your data‑export rights, and your practical ability to "fire" a vendor. Resources Mentioned in the episode ABA Model Rule 1.1 – Competence (Technology Competence Comment) – https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/ ABA Model Rule 1.6 – Confidentiality of Information – https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_6_confidentiality_of_information/ ABA Model Rule 5.3 – Responsibilities Regarding Nonlawyer Assistance – https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_5_3_responsibilities_regarding_nonlawyer_assistance/ Tech-Savvy Lawyer Page (February 2, 2026, Editorial & Show Notes Hub) – https://www.thetechsavvylawyer.page/blog/2026/2/2/mtc-clioalexi-legal-tech-fight-what-crm-vendor-litigation-means-for-your-law-firm-client-data-and-aba-model-rule-compliance- Software & Cloud Services mentioned in the conversation Clio – Cloud-based legal practice management platform – https://www.clio.com Alexi – AI‑driven legal research platform – https://www.alexi.com AWS (Amazon Web Services) – Cloud infrastructure provider – https://aws.amazon.com Google Cloud – Cloud infrastructure provider – https://cloud.google.com
Our next guest is Nick Martin, CEO of FileScience. He shares expert insights on stabilizing law firm operations with smart backups and automation. Join us to discover practical, easy-to-implement ways to protect your data from outages and errors, so your clients' information stays safe, secure, and accessible when you need it most. Listen in with Nick Martin and me as we discuss the following three questions and more! 💡 When a firm is drowning in document chaos, what are the first three specific workflows to digitize or automate to stabilize operations? Beyond just losing documents, what are the three specific silent killers of document hygiene that lawyers ignore? How do lawyers solve the top three friction points of digital collaboration: version conflicts, insecure sharing methods, and the loss of institutional knowledge buried inside files? In our conversation, we cover the following 📊 00:00 – Guest intro and Nick's tech setup (MacBook Pro, iPad, iPhone 15, Bang & Olufsen speaker) 🔊 00:30 – Q1: Digitizing workflows – unification of memory, forever undo button, retention 🛡️ 04:00 – Backups for iManage, NetDocuments, Clio, FileVine; air-gapped copies 📁 06:00 – Microsoft 365 outage resilience with FileScience ☁️ 08:00 – Retention periods (5-7 years by state/practice); NY lawful order policy ⚖️ 10:00 – Q2: Silent killers – file degradation, wrong versions, insider threats 🕵️ 13:00 – Q3: Solving friction – immutable timelines, encryption (Purview, CBC), institutional knowledge preservation 🔒 15:00 – End-to-end encryption details; where to find Nick Resources 🔗 Connect with Nick Martin 🤝 FileScience website: https://filescience.io Nick Martin LinkedIn: https://www.linkedin.com/in/nicholasmmartin FileScience LinkedIn: https://www.linkedin.com/company/filescience FileScience Instagram: https://www.instagram.com/filescience Mentioned in the episode 📚 Microsoft 365 outage (recent North America impact): 
https://www.usatoday.com/story/tech/2026/01/22/microsoft-outage-service-down/88305485007/ Hardware mentioned in the conversation 💻 Bang & Olufsen Beosound Balance (360° omnidirectional speaker): https://www.bang-olufsen.com/en/us/speakers/beosound-balance iPad: https://www.apple.com/ipad iPhone 15: https://www.apple.com/iphone MacBook Pro 16-inch: https://www.apple.com/macbook-pro Software & Cloud Services mentioned in the conversation ☁️ AWS, Azure, Google Cloud (underlying providers): https://aws.amazon.com, https://azure.microsoft.com, https://cloud.google.com Clio: https://www.clio.com FileVine: https://www.filevine.com Google Workspace: https://workspace.google.com iManage: https://www.imanage.com Microsoft 365 (Outlook, Purview encryption, CBC): https://www.microsoft.com/en-us/microsoft-365 NetDocuments: https://www.netdocuments.com Parallels (VMs): https://www.parallels.com
Join us for an AI-powered deep dive into the ethical challenges facing legal professionals in the age of generative AI. 🤖 In this Tech-Savvy Lawyer.Page Labs episode, our Google AI hosts unpack our January 26, 2026, editorial and discuss how everyday devices—smartphones, dash cams, wearables, and connected cars—are becoming "silent witnesses" that can make or break your next case, while walking carefully through ABA Model Rules on competence, candor, privacy, and preservation of digital evidence. In our conversation, we cover the following: 00:00 – Welcome to The Tech-Savvy Lawyer.Page Labs Initiative and this week's "Everyday Tech, Extraordinary Evidence" AI roundtable 🧪 00:30 – Why classic "surprise witness" courtroom drama is giving way to always-on digital witnesses 🎭 01:15 – Introducing the concept of smartphones, dash cams, and wearables as objective "silent witnesses" in litigation 📱 02:00 – Overview of Michael D.J. Eisenberg's editorial "Everyday Tech, Extraordinary Evidence" and his mission to bridge tech and courtroom practice 📰 03:00 – Case study setup: the Alex Preddy shooting in Minneapolis and the clash between official reports and digital evidence ⚖️ 04:00 – How bystander smartphone video reframed the legal narrative in the Preddy matter and dismantled "brandished a weapon" claims 🎥 05:00 – From "pressing play" to full video synchronization: building a unified timeline from multiple cameras to audit police reports 🧩 06:00 – Using frame-by-frame analysis to test loaded terms like "lunging," "aggressive resistance," and "brandishing" against what the pixels actually show 🔍 07:00 – Moving beyond what we see: introducing "quiet evidence" such as GPS logs, telemetry, and sensor data as litigation tools 📡 08:00 – GPS data for location, duration, and speed: turning "he was charging" into a measurable movement profile in protest and road-rage cases 🚶♂️🚗 09:00 – Layering GPS from phones with vehicle telematics to create a multi-source reconstruction that is 
hard to impeach in court 📊 10:00 – Dash cams as 360-degree witnesses: solving blind spots of human perception and single-angle video 🛞 11:00 – Why exterior audio from dash cams—shouts, commands, crowd noise—can be crucial to proving state of mind and mens rea 🔊 12:00 – Wearables as a body-wide sensor network: heart rate, sleep, and step count as quantitative proof of pain, fear, and trauma ⌚ 13:00 – Using longitudinal wearable data to support claims of emotional distress or sleep disruption in personal injury and civil-rights litigation 😴 14:00 – Heart-rate spikes and movement logs at the moment of an encounter as corroboration of fear or immobility in use-of-force matters 15:00 – Why none of this evidence exists in your case file unless you know to ask for it at intake 🗂️ 16:00 – Updating intake: adding questions about smartwatches, location services, doorbell cameras, dash cams, and connected cars to your client questionnaires 📝 17:00 – Data preservation as an emergency task: deletion cycles, cloud overwrites, and using TROs to stop digital spoliation 🚨 18:00 – Turning raw logs into compelling visuals: maps, synced clips, and timelines that juries can understand without sacrificing accuracy 🗺️ 19:00 – Ethics spotlight: ABA Model Rule 1.1 competence and Comment 8—why "I'm not a tech person" is now an ethical problem, not an excuse 📚 20:00 – Candor to the tribunal and the line between strong advocacy and fraud when editing or excerpting digital evidence ⚠️ 21:00 – Respecting third-party privacy under Rule 4.4: when you must blur faces, redact audio, or limit collateral exposure of bystanders 🧩 22:00 – Advising clients not to delete texts, videos, or logs and explaining spoliation risks under Rule 3.4 ⚖️ 23:00 – The uranium analogy: digital tools as powerful but dangerous if used without adequate ethical "containment" ☢️ 24:00 – Philosophical closing: will juries someday trust heart-rate logs more than tears on the witness stand, and what does that mean for human 
testimony? 🤔 25:00 – Closing remarks and invitation to explore the full editorial, show notes, and resources on The Tech-Savvy Lawyer.Page 🌐 If you enjoyed this episode, please like, comment, subscribe, and share!
In this special episode, recorded live from Podfest 2026 in Orlando, FL at the Renaissance Marriott Hotel near SeaWorld, I was able to gather several attendees who are in the legal world—lawyers and legal industry marketers—to talk about why lawyers should podcast and more! 🎙️ Our roundtable features Dennis "DM" Meador (Legal Podcast Network), Louis Goodman (Love Thy Lawyer), Robert Ingalls (Lawpods), Wendi Wittner (The Writing Guru), and Elizabeth Gearhart (Passage to Profit / Gearhart Law), each bringing deep experience in podcasting, legal marketing, and personal branding for lawyers. We discuss practical, no-fluff insights about how lawyers can use podcasting to build authority, strengthen SEO, show up in large language models (LLMs) like ChatGPT, and connect more authentically with clients and referral sources. Whether you are tech-curious, tech-comfortable, or completely new to podcasting, this episode will help you decide if starting a podcast makes strategic sense for your practice or business. QUESTIONS WE DISCUSS 🎯 Join Dennis, Louis, Robert, Wendi, Elizabeth, and me as we discuss the following three questions and more! Why should lawyers be podcasting in 2026 and beyond, especially with Gen Z and Gen Alpha getting so much of their trusted information from podcasts and social platforms? What is one of the first concrete steps a lawyer should take if they are seriously considering launching a podcast of their own? What is one of the biggest mistakes lawyers should watch out for when launching a podcast, and how can they avoid becoming a "zombie podcast" that dies after a few episodes? 🧟♂️ Additional themes we explore include: How podcasting acts as an "electronic resume" and trust-building tool for lawyers. How podcasts can drive SEO, get you discovered in LLMs like ChatGPT, Google Gemini, Perplexity, and Claude, and generate traffic to your law firm website. Why your podcast does not always need to be "about the law" to be effective for your legal brand. 
How to balance authenticity (including salty language) with your professional brand and ethics rules. TIME-STAMPED EPISODE GUIDE ⏱️ In our conversation, we cover the following: 00:00 – Welcome & guest introductions Live from Podfest 2026: intros from Dennis "DM" Meador (Legal Podcast Network), Louis Goodman (Love Thy Lawyer), Robert Ingalls (Lawpods), Wendi Wittner (The Writing Guru), and Elizabeth Gearhart (Passage to Profit / Gearhart Law). 02:00 – Why should lawyers be podcasting? Gen Z and Gen Alpha treat podcasts as a top trusted media source. 📲 Podcasting vs TikTok for lawyers who don't want to dance but still want reach. Podcast as "electronic resume" and branding vehicle for lawyers and judges. 04:30 – Is podcasting right for every lawyer? Robert on why not every lawyer should podcast, and why goals matter. How a podcast helps potential clients decide if you are "their" lawyer—or not. 06:30 – Personality, language, and fit The Tampa PI lawyer who refuses to bleep swear words to attract the right clients and repel the wrong ones. 🤬 Why authenticity can be a powerful qualification tool, not a liability. 08:00 – Podcasting as a marketing engine Turning a 30–60 minute recording into video clips, written content, and evergreen assets. How podcast content keeps working for you long after the recording session. 09:30 – Personal branding and storytelling for lawyers Wendi on using podcasts to develop a personal brand, tell your story, and highlight your "superpower" as a lawyer. Why sharing your career pivots and non-traditional path resonates deeply with listeners. 12:00 – Getting discovered in ChatGPT and other LLMs Elizabeth on using a podcast and transcripts to improve visibility in ChatGPT, Google Gemini, Perplexity, and Claude. 🤖 How regular podcasting and transcript optimization sustained and improved hits from LLMs to Gearhart Law's website. 
15:30 – Future-proofing and the "language-based internet". DM explains why we're moving from a page-based to a language-based internet and why early podcast adopters will win—similar to early website and SEO adopters. Podcasting as both "future-proofing" and "present-proofing" your practice.

18:00 – Hobby vs. business podcast. Louis on starting his podcast as a social hobby and discovering the SEO and networking upside. How a niche local legal podcast can drive referrals and reputation even without direct monetization.

21:00 – How personal is too personal? Robert's own experience evolving his podcast from estate planning to broader personal topics. Balancing sharing about yourself with focusing on the listener's problem (the StoryBrand "guide vs. hero" concept).

25:00 – Beyond law: topic flexibility. Why your legal podcast can focus on tech, politics, entrepreneurship, or hobbies while still supporting your legal brand. Examples of lawyers podcasting about politics and broader societal issues to grow recognition.

28:30 – Helping lawyers find their story. Wendi's process: asking about upbringing, first-generation experiences, career pivots, athletic feats, and long-term goals to unlock unique stories. How those stories fuel compelling podcast episodes and stronger interviews.

34:00 – Thinking beyond your current role. Using podcasting and personal branding to position yourself for boards, politics, and second careers outside traditional law practice.

37:00 – AI hallucinations & validating LLM output. Elizabeth's workflow for cross-checking answers across ChatGPT, Gemini, Perplexity, Claude, and Grok. Why LLMs "love" natural, conversational podcast transcripts as training material.

40:00 – The networking power of "you should be on my podcast". How inviting people as guests changes the dynamic at networking events. 🤝 Using podcast guest outreach on LinkedIn and Podmatch-style platforms.
43:00 – Content, authority, and algorithm signals. DM on why consistent, custom content will always outperform gimmicks in SEO and algorithm changes. How podcasts support authority, trust, and long-term discoverability in search and LLMs.

48:00 – Question #2: First steps for lawyers considering a podcast. Robert and DM: "Know your why" and who your ideal listener/client really is. Are you using the show for lead nurturing, referral education, or brand visibility?

52:00 – Political/legal shows and indirect monetization. Discussion of political/legal commentary podcasts that soft-sell the firm. Why they can work—but why expectations and time horizon matter.

56:00 – Brand consistency before you launch. Wendi on auditing your website, LinkedIn, business page, and social handles for consistent branding (e.g., "The Writing Guru"). Using CTAs and data capture to turn podcast listeners into contacts.

59:00 – Knowing your deeper "why". Elizabeth's "peel the onion" exercise: repeatedly asking why until you reach the core motivation, often helping people out of "impossible situations."

1:03:00 – Solo vs. agency vs. studio. Pros and cons of DIY gear and production vs. working with podcast agencies or studios. Why time value, ethics, and avoiding scams all matter for lawyers.

1:08:00 – Ethics, multi-jurisdiction practice, and global reach. How legal ethics, multistate audiences, and global distribution impact what lawyers can say on their podcasts.

1:12:00 – Question #3: Biggest mistakes lawyers make launching a podcast. Elizabeth: ethics, off-the-cuff comments, and aligning tone (including swearing) with your brand and practice area. Wendi: perfectionism vs. progress—accepting that early episodes will be imperfect but valuable. Robert: no long-term plan and no content strategy, leading to inconsistency and podfade. Louis: underestimating time; a solid 30 minutes of content may require several hours early on.
DM: expecting immediate impact and treating podcasting like a short-term campaign instead of a long-term asset.

1:22:00 – Test-driving podcasting as a guest first. Why appearing as a guest on other shows (via Podmatch and similar platforms) is a smart "trial run" before launching your own.

1:25:00 – Where to find today's guests & closing. Each guest shares their preferred platforms, emails, and websites so you can connect and learn more.

RESOURCES 📚

Connect with our Guests

Louis Goodman ⚖️
Podcast: Love Thy Lawyer ("Love v. Lawyer"): https://www.lovethylawyer.com
LinkedIn: https://www.linkedin.com/in/louis-goodman-3b359/

Elizabeth Gearhart 📻
Gearhart Law (Chief Marketing Officer): https://www.gearhartlaw.com
Passage to Profit Show (syndicated radio show & podcast): https://podcasts.apple.com/us/podcast/passage-to-profit-show-road-to-entrepreneurship/id1481650359
Email: elizabeth@gearhartlaw.com
LinkedIn (active profile): https://www.linkedin.com/in/elizabeth-gearhart-ph-d-97ba0b51/

Robert Ingalls 🎧
Lawpods (Founder & CEO – podcast agency for law firms): https://www.lawpods.com
LinkedIn: https://www.linkedin.com/in/robertingalls/

Dennis "DM" Meador 💼
LinkedIn: Dennis Meador – "Dennis Meta like a meadow, but with an R and no W" 🌱: https://www.linkedin.com/in/dennismeador/
Legal Podcast Network (Founder & CEO): https://thelegalpodcastnetwork.com

Wendi Wittner ✍️
The Writing Guru – Legal Executive Branding & Career Strategy: https://www.thewritingguru.net
LinkedIn: Wendi Weiner / "The Writing Guru": https://www.linkedin.com/in/thewritingguru/
Above the Law (Wendi's column): https://abovethelaw.com/author/wendiweiner/
HuffPost article – "How I Used My Law Degree to Get Out of Law" (Wendi): https://www.huffpost.com/entry/how-i-used-my-law-degree-to-get-out-of-law_b_57bef010e4b0b01630ddd15c

Mentioned in the episode

Non-Hardware/Software 🔍
Attorney Tom (PI lawyer & content creator): https://www.youtube.com/@AttorneyTom
Podfest 2026 (Orlando, FL): https://podfestexpo.com

Hardware mentioned 🧰
(Exact models are discussed generally rather than by SKU, but here are representative links to explore.)
iPhone: https://www.apple.com/iphone
Shure-style dynamic microphones 🎙️: https://www.shure.com/en-US/products/microphones
My next guest is Colleen Joyce, CEO of Lawyer.com, a company that connects over 1 million consumers monthly with qualified attorneys across the country. With nearly 20 years of experience transforming how law firms market themselves and manage their operations, Colleen has seen what works and what doesn't when it comes to legal technology adoption. 🚀

Join Colleen Joyce and me as we discuss the following three questions and more!

What are the top three non-negotiable technologies? Beyond the essential lead generation that Lawyer.com provides, what specific CRM automations, financial analytics, or project management tools would you implement immediately to ensure a firm scales profitably rather than just chaotically?

What are the top three specific intake bottlenecks that AI can now solve better than a human receptionist? Based on the data you're seeing from your new AI initiatives, which intake bottlenecks can AI now solve to allow attorneys to focus primarily on high-value legal work?

What are the top three human touchpoints in the client lifecycle that a lawyer should never automate? From your experience overseeing millions of consumer connections, which touchpoints are crucial for building the trust and transparency that leads to long-term referrals?

In our conversation, we cover the following:

[00:00:00] Episode introduction and title read
[00:01:00] Editor's note on audio quality
[00:01:30] Welcoming Colleen Joyce to the podcast
[00:01:45] Colleen's current tech setup: MacBook Pro, iPhone 16, iPad, and curved Dell monitor
[00:02:00] Discussion about iPhone models and AppleCare benefits
[00:04:00] MacBook Pro specifications and upgrade recommendations (Intel vs. M chip)
[00:05:00] Benefits of curved monitors for productivity and focus
[00:06:00] Question #1: Top three non-negotiable technologies for modern law firms
[00:07:00] The importance of intake technology and CRM systems
[00:07:30] Project management tools for team accountability
[00:08:00] Budget-friendly options and freemium platforms for new lawyers
[00:09:00] Question #2: Intake bottlenecks that AI solves better than humans
[00:10:00] The value of empathetic AI agents and information capture
[00:11:00] Training AI agents for legal-specific scenarios and language
[00:12:00] Consumer resistance to AI vs. human agents and the generational shift
[00:13:00] Scheduling tools like Calendly and client resistance to automation
[00:14:00] The legal profession's technology adoption over the past 3-5 years
[00:15:00] The declining use of printers in modern legal practice
[00:16:00] Question #3: Human touchpoints that should never be automated
[00:17:00] The importance of relationship building during the client onboarding "courting period"
[00:18:00] Using technology processes to screen potential clients for fit
[00:19:00] Where to find Colleen Joyce and her weekly Fast Five newsletter
[00:19:30] Closing remarks and next episode preview

RESOURCES

Connect with Colleen Joyce
LinkedIn: https://www.linkedin.com/in/colleenjoyce/
Company Website: https://www.lawyer.com
Newsletter: The Fast Five (published weekly on Tuesdays via LinkedIn)

Mentioned in the Episode
MacRumors.com - https://www.macrumors.com (Apple product buying guides and release cycles)
The Fast Five Newsletter - Weekly newsletter covering AI trends and business growth strategies for law firms
Calendly - https://calendly.com (Scheduling automation tool)

Hardware Mentioned in the Conversation
MacBook Pro (17-inch with Intel chip) - https://www.apple.com/macbook-pro/
MacBook Pro with M4/M5 Chip - https://www.apple.com/macbook-pro/
iPhone 16 - https://www.apple.com/iphone-16/
iPad - https://www.apple.com/ipad/
Dell Curved Monitor (22-24 inch) - https://www.dell.com/monitors
HP Printer - https://www.hp.com/printers
Sit-Stand Desk - (Various manufacturers)

Software & Cloud Services Mentioned in the Conversation
Plaud (Audio Recording App) - https://www.plaud.ai
iMessage - https://www.apple.com/messages/
Slack - https://slack.com
Monday.com - https://monday.com (Project management and team collaboration)
ChatGPT - https://chat.openai.com (AI research and recommendations)
Calendly - https://calendly.com (Appointment scheduling)
AppleCare - https://www.apple.com/support/products/
AI Intake Platforms (Various legal-specific platforms discussed generically)
CRM Systems (Various customer relationship management platforms discussed generically)
Case Management Systems (Various legal practice management platforms discussed generically)
Our next guest is Joshua Altman, the Managing Director of Beltway.Media, a DC-based communications firm that specializes in helping brands cut through the noise. A former multimedia journalist for The Hill, Joshua spent several years covering high-stakes federal policy and election cycles from the front lines. Today, he translates that newsroom pace into strategy for professional services firms, startups, and federal agencies. He joins me to discuss why storytelling isn't just a marketing buzzword—it's a critical operating system for modern law practice.

Join Joshua Altman and me as we discuss the following three questions and more!

What are the top three technology tools or platforms you recommend that would help attorneys transform a single piece of thought leadership into multiple content formats across channels, and how can they use AI to accelerate this process without sacrificing their professional voice?

What are the top three mistakes attorneys and law firms make when communicating during high-stakes situations—whether that's managing negative publicity, navigating a client crisis, or pitching to potential investors—and how can technology help them avoid these pitfalls while maintaining their ethical obligations?

What are the top three metrics attorneys should actually be tracking to demonstrate return on investment from their online marketing technology, and what affordable technology solutions would you recommend to help them capture and analyze this data?

In our conversation, we cover the following:

[00:00] Introduction to Joshua Altman and Beltway.Media.
[01:06] Joshua's current secure tech stack: from Mac setups to encrypted communications.
[03:52] Strategic content repurposing: using AI as a tool, not a replacement for your voice.
[05:30] The "human in the loop" necessity: why lawyers must proofread AI content.
[10:00] Tech Recommendation #1: using Abacus.AI and Root LLM for model routing.
[11:00] Tech Recommendation #2: automating workflows with Gumloop.
[15:43] Tech Recommendation #3: the "low tech" solution of human editors.
[16:47] Crisis communications: navigating the court of public opinion vs. the court of law.
[20:00] Using social listening tools for litigation support and witness tracking.
[24:30] Metric #1: analyzing meaningful engagement (comments vs. likes).
[26:40] Metric #2: understanding impressions and network reach (1st vs. 2nd degree).
[28:40] Metric #3: tracking clicks to validate interest and sales funnels.
[31:15] How to connect with Joshua.

RESOURCES

Connect with Joshua Altman
Email: jaltman@beltway.media
LinkedIn: linkedin.com/in/joshuaaltman
Website: beltway.media

Mentioned in the episode
Beltway.Media - Joshua's communications firm.
The Hill - Where Joshua built his multimedia journalism background.

Hardware mentioned in the conversation
Apple iPad Pro
Apple iPhone Pro
Apple MacBook Pro

Software & Cloud Services mentioned in the conversation
Abacus.AI - AI platform mentioned for its "Root LLM" model routing feature.
ChatGPT - AI language model.
Claude - AI language model.
Constant Contact - Email marketing platform.
Gumloop - AI automation platform for newsletters and social listening.
LinkedIn - Professional social networking.
MailChimp - Email marketing platform.
Proton Mail - Encrypted email service.
Tresorit - End-to-end encrypted file sharing (secure Dropbox alternative).
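The three metrics Joshua walks through (meaningful engagement, impressions/reach, and clicks) can be combined into a simple scoring routine. A minimal sketch, assuming invented post numbers and illustrative weights (real platforms expose these counts through their own analytics dashboards; the weighting scheme here is a hypothetical way to value comments and shares above likes, not an official formula):

```python
# Hypothetical sketch of the episode's three metrics: weighted engagement,
# impressions-normalized rates, and click-through. All numbers are invented.

def post_metrics(impressions, likes, comments, shares, clicks,
                 comment_weight=3, share_weight=5):
    """Comments and shares signal more meaningful engagement than likes,
    so weight them more heavily before normalizing by impressions."""
    weighted = likes + comment_weight * comments + share_weight * shares
    return {
        "engagement_rate": round(weighted / impressions, 4),
        "click_through_rate": round(clicks / impressions, 4),
    }

# An invented post: 4,000 impressions, 40 likes, 12 comments, 4 shares, 60 clicks.
print(post_metrics(4000, 40, 12, 4, 60))
# → {'engagement_rate': 0.024, 'click_through_rate': 0.015}
```

Tracking these two rates over time, rather than raw like counts, makes it easier to see whether content is actually moving people toward the firm's funnel.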
Welcome to the TSL Labs Podcast Experiment. 🧪🎧

In this special "Deep Dive" bonus episode, we strip away the hype surrounding generative AI to expose a critical operational risk hiding in plain sight: the dangerous confusion between "open" and "closed" AI systems. Featuring an engaging discussion between our Google NotebookLM hosts, this episode unpacks the "Swiss Army knife vs. scalpel" analogy that every managing partner needs to understand. We explore why the "Green Light" tools you pay for are fundamentally different from the "Red Light" public models your staff might be using—and why treating them the same could trigger an immediate breach of ABA Model Rule 5.3. From the "hidden crisis" of AI embedded in Microsoft 365 to the non-negotiable duty to supervise, this is the essential briefing for protecting client confidentiality in the age of algorithms.

In our conversation, we cover the following:

[00:00] – Introduction: The hidden danger of AI in law firms.
[01:00] – The "AI Gap": Why staff confuse efficiency with confidentiality.
[02:00] – The Green Light Zone: Defining secure, "closed" AI systems (the scalpel).
[03:45] – The Red Light Zone: Understanding "open" public LLMs (the Swiss Army knife).
[04:45] – "Feeding the Beast": How public queries actively train the model for everyone else.
[05:45] – The Duty to Supervise: ABA Model Rule 5.3 and Rule 1.1, Comment 8 implications.
[07:00] – The Hidden Crisis: AI embedded in ubiquitous tools (Microsoft 365, Adobe, Zoom).
[09:00] – The Training Gap: Why digital natives assume all prompt boxes are safe.
[10:00] – Actionable Solutions: Auditing tools and the "elevator vs. private room" analogy.
[12:00] – Hallucinations: Vendor liability vs. professional negligence.
[14:00] – Conclusion: A final provocative thought on accidental breaches.
RESOURCES

Mentioned in the episode
ABA Model Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance): https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_5_3_responsibilities_regarding_nonlawyer_assistant/
ABA Model Rule 1.1, Comment 8 (Technology Competence): https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/ and https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/comment_on_rule_1_1/

Software & Cloud Services mentioned in the conversation
Lexis+ AI (LexisNexis): https://www.lexisnexis.com/en-us/products/lexis-plus-ai.page
Protégé (LegalMation/LexisNexis context): https://www.lexisnexis.com/en-us/products/lexis-plus-ai.page
Westlaw Precision (Thomson Reuters): https://legal.thomsonreuters.com/en/products/westlaw-precision
CoCounsel (Casetext/Thomson Reuters): https://casetext.com/co-counsel
Harvey AI: https://www.harvey.ai/
vLex Vincent AI: https://vlex.com/vincent-ai
ChatGPT (OpenAI): https://chat.openai.com/
Perplexity AI: https://www.perplexity.ai/
Claude (Anthropic): https://claude.ai/
Microsoft 365 Copilot (Teams/Word): https://www.microsoft.com/en-us/microsoft-365/enterprise/copilot
Adobe Creative Cloud (AI features): https://www.adobe.com/ai/overview.html
Zoom AI Companion: https://www.zoom.com/en/products/ai-assistant/
Our next guest is Nick Tiger, Associate General Counsel at Pearl.com. Nick shares insights on integrating AI into legal practice. Pearl.com champions pairing AI with human expertise for professional services. He outlines practical uses such as market research, content creation, intake automation, and improved billing efficiency, while stressing the need to avoid liability through robust human oversight.

Nick is a legal leader at Pearl.com, partnering on product design, technology, and consumer-protection compliance strategy. He previously served as Head of Product Legal at EarnIn, an earned-wage access pioneer, building practical guidance for responsible feature launches, and as Senior Counsel at Capital One, supporting consumer products and regulatory matters. Nick holds a J.D. from the University of Missouri–Kansas City, lives in Richmond, Virginia, and is especially interested in using technology to expand rural community access to justice.

During the conversation, Nick highlights emerging tools, such as conversation wizards and expert-matching systems, that enhance communication and case preparation. He also explains Pearl AI's unique model, which blends chatbot capabilities with human expert verification to ensure accuracy in high-stakes or subjective matters. Nick encourages lawyers to adopt human-in-the-loop protocols and consider joining Pearl's expert network to support accessible, reliable legal services.

Join Nick and me as we discuss the following three questions and more!

What are the top three most impactful ways lawyers can immediately implement AI technology in their practices while avoiding the liability pitfalls that have led to sanctions in recent high-profile cases?

Beyond legal research and document review, what are the top three underutilized or emerging AI applications that could transform how lawyers deliver value to clients, and how should firms evaluate which technologies to adopt?
What are the top three criteria Pearl uses to determine when human expert verification is essential versus when AI alone is sufficient? How can lawyers apply this framework to develop their own human-in-the-loop protocols for AI-assisted legal work, and how is Pearl different from its competitors?

In our conversation, we cover the following:

[00:56] Nick's Tech Setup
[07:28] Implementing AI in Legal Practices
[17:07] Emerging AI Applications in Legal Services
[26:06] Pearl AI's Unique Approach to AI and Legal Services
[31:42] Developing Human-in-the-Loop Protocols
[34:34] Pearl AI's Advantages Over Competitors
[36:33] Becoming an Expert on Pearl AI

Resources:

Connect with Nick:
Nick's LinkedIn: linkedin.com/in/nicktigerjd
Pearl.com Website: pearl.com
Pearl.com Expert Application Portal: era.justanswer.com/
Pearl.com LinkedIn: linkedin.com/company/pearl-com
Pearl.com X: x.com/Pearldotcom

ABA Resources:
ABA Formal Opinion 512: https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf

Hardware mentioned in the conversation:
Anker Backup Battery / Power Bank: anker.com/collections/power-banks

Software & Cloud Services mentioned in the conversation:
AT&T: att.com/
Pearl.com: pearl.com/
Sprint: thesprintgroup.com/
Timely: timely.com/
Verizon: verizon.com/
Listen in as Google's NotebookLM hosts an AI-powered conversation unpacking our December 1st, 2025 editorial examining how the holiday digital marketplace transforms into a lucrative hunting ground for device compromise and credential theft. We explore why attorneys and paralegals—trained to spot hidden clauses and anticipate risk—often abandon professional skepticism when faced with shiny gadgets bearing 70% off stickers. Our discussion arms you with actionable strategies to protect your practice, safeguard client confidentiality, and prevent the kind of security breaches that trigger bar complaints and operational shutdowns. Whether you're a solo practitioner or part of a large firm, this episode delivers the technical insights you need without the jargon.

Join Google's NotebookLM hosts as we discuss the following three questions and more!

How do bargain tech deals create hidden professional liabilities that extend far beyond wasted money, and what specific technical deficits should lawyers avoid in discount hardware?

What free forensic tools can legal professionals use to distinguish genuine discounts from manipulated pricing schemes, and how do these tools apply procurement-level rigor to personal shopping decisions?

Which three active scam vectors target high-value professionals during the holiday season, and what mandatory four-point protocol ensures comprehensive protection against credential theft and device compromise?

In our conversation, we cover the following:

[00:00:00] Welcome to the TSL Labs bonus episode: an AI-powered deep dive on holiday shopping risks
[00:01:00] Why legal professionals abandon professional skepticism during holiday sales
[00:02:00] The high stakes: credential theft, device compromise, and operational lockdown
[00:03:00] The bargain trap: understanding technical debt in cheap vs. inexpensive hardware
[00:04:00] Processor bottleneck red flags: older-generation chips that consume billable time
[00:05:00] Screen resolution hazards: how 1366x768 displays create genuine error risks
[00:06:00] RAM deficits and security longevity: when devices become e-waste and compliance gaps
[00:07:00] Introduction to forensic price tracking tools for procurement-level shopping
[00:08:00] CamelCamelCamel, Keepa, and Honey: free tools that reveal true pricing history
[00:09:00] Malwarebytes 2025 holiday scam report: three attack vectors targeting professionals
[00:10:00] Scam #1: urgent delivery smishing attacks exploiting package expectations
[00:11:00] Scam #2: the malvertising minefield—when legitimate ads redirect to cloned fraud sites
[00:12:00] Scam #3: gift card emergency scams posing as court clerks and government officials
[00:13:00] Bonus threat: social media marketplace fraud and payment protection gaps
[00:14:00] The mandatory four-point protocol for holiday shopping protection
[00:15:00] Final thoughts: applying contract-reading diligence to every link you click

Resources

Hardware Mentioned in the Conversation
Business-class Lenovo laptops: https://www.lenovo.com/business
HP commercial-grade hardware: https://www.hp.com/business
Dell professional series: https://www.dell.com/business
Apple MacBook Pro: https://www.apple.com/macbook-pro/

Software & Cloud Services Mentioned in the Conversation
CamelCamelCamel (Amazon price tracker): https://camelcamelcamel.com
Keepa (Amazon price history browser extension): https://keepa.com
Honey (price tracking & coupon tool): https://www.joinhoney.com
Prisync (enterprise pricing solution): https://www.prisync.com
Price2Spy (enterprise pricing intelligence): https://www.price2spy.com
Malwarebytes (security software): https://www.malwarebytes.com
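The "forensic price tracking" idea discussed in the episode boils down to comparing a sale price against an item's real price history rather than the seller's claimed list price. A minimal sketch of that check, with all prices, the sample history, and the 15% threshold invented purely for illustration (in practice the history would come from a tracker such as CamelCamelCamel or Keepa):

```python
# Hypothetical sketch: applying procurement-level rigor to a "70% off" claim.
from statistics import median

def is_genuine_discount(history, sale_price, claimed_list_price, threshold=0.15):
    """Flag a deal as genuine only if the sale price is meaningfully below
    the item's typical (median) historical price, not merely below an
    inflated 'list' price."""
    typical = median(history)
    real_discount = (typical - sale_price) / typical
    advertised = (claimed_list_price - sale_price) / claimed_list_price
    return {
        "typical_price": typical,
        "advertised_discount": round(advertised, 2),
        "real_discount": round(real_discount, 2),
        "genuine": real_discount >= threshold,
    }

# An invented laptop "marked down" from $1,999 to $599 that usually sells for ~$650:
deal = is_genuine_discount(
    history=[649, 699, 629, 659, 639, 699],  # invented six-month price samples
    sale_price=599,
    claimed_list_price=1999,
)
print(deal)
# The advertised discount is 70%, but against the median historical price
# the real discount is only about 8% — the deal is flagged as not genuine.
```

The same habit the price-tracker browser extensions automate, in other words: judge the deal against the history, not the sticker.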