Law://WhatsNext
Author: Tom Rice and Alex Herrity
© Tom Rice and Alex Herrity
Description
How are leading practitioners leveraging emerging technologies and ways of working to pursue their passions and objectives, and, as a by-product, what are the implications for the future of legal practice? Let's explore this together. What to expect:
- Focused conversations with leading practitioners, technologists, and educators
- Deep dives into the intersection of law, technology, and organisational behaviour
- Practical analysis and visualisation of how AI is augmenting our potential
- Insights from adjacent industries that might inform our own
37 Episodes
🎤 This week we sit down (for our first in-person episode) with Nick West — Partner and Chief Strategy Officer at Mishcon de Reya — who has spent two decades working at the intersection of law, technology and business model innovation.

Nick's path is one of the more unusual and instructive in the industry: competition lawyer at Linklaters, strategy consultant at McKinsey, product leader at LexisNexis, Managing Director of Axiom UK, and now the person responsible for technological transformation and R&D at Mishcon. He founded MDR Lab (one of the first legal tech startup incubators) and the MDR Group (a collection of specialist consultancy businesses that sit alongside, but separate from, the core Mishcon legal practice), built one of the industry's first in-house data science teams, and has overseen the firm's AI adoption journey from early experimentation through to commercial platform deployment. Few people in the legal industry have thought as deeply — or as practically — about how law firms actually work and how they might need to change.

The conversation is wide-ranging — we cover the full arc of Nick's career, the evolution of innovation culture inside a law firm, how Mishcon adopted AI (and what they got wrong along the way), the productivity question everyone's asking, what happens when clients start sending genuinely good AI-drafted documents, and the early "signals" for where the business model of law might be heading.

---

Connect with Nick West, Partner and Chief Strategy Officer at Mishcon de Reya

---

If you enjoyed this conversation, please share it with someone or a community who you feel would benefit from listening. If you have any more time, do tell us what resonated, what didn't, and rate the show (it helps us grow the audience and get great guests like Nick)!

---

For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
🎙️ This week Tom sits down with Matt Rogerson — Global Policy Director at the Financial Times and one of the more prominent and forceful voices in the UK press and publishing industry on the question of AI companies using copyrighted content without permission or payment.

The timing could hardly be more significant. We recorded this conversation on the day the House of Lords Communications and Digital Committee published what may prove to be the most consequential UK report on AI and creative industries to date: AI, Copyright and the Creative Industries — an 85-page report drawing on testimony from Google, Meta, Microsoft, OpenAI and dozens of creative industry bodies, whose conclusions could not be clearer: the UK's copyright framework is not outdated, the problems stem from widespread unlicensed use, and the government should rule out a commercial text and data mining exception entirely.

And just one week earlier, the FT helped launch SPUR — the Standards for Publisher Usage Rights coalition — alongside the BBC, The Guardian, Sky News and The Telegraph: a coalition not just defending the status quo, but getting on the front foot to build shared technical standards and licensing frameworks so AI developers can access quality journalism through rights-cleared channels.

What provoked this conversation was a pamphlet published by Public First, a UK policy consultancy, titled "Text & Data Mining and its value to the UK economy" — which called for a broad commercial exception to UK copyright law, extending the argument to cover AI inference as well as training. Matt's reaction on LinkedIn was characteristically direct, and it got us talking.

---

During our conversation, Matt dismantles several of the core narratives being advanced by AI lobbyists — the anthropomorphisation of models to normalise unlicensed use; the claim that licensing infrastructure is too hard to build; and the idea that the UK must weaken copyright to remain competitive.
He makes a compelling case that the real opportunity lies not in capitulating to US hyperscalers, but in building sovereign AI models with transparent training data and proper licensing — pointing to the Allen Institute, a US model co-funded by the government and Nvidia, as proof that this is already happening.

Matt highlights the infrastructure already being built to support fair licensing: Microsoft's Publisher Content Marketplace, the FT's existing commercial API access, and emerging thinking from writers like Florent Daudens on what a post-browser, agentic news economy could look like. The claim that it's "too hard" for AI companies to pay for content is not just wrong — it's being actively disproved by the market.

And we close on what may be the most consequential long-term argument of all: the slop spiral. If there is no economic incentive to produce high-quality journalism — because AI companies can take it for free — the supply of reliable information degrades. AI models trained on and retrieving from an increasingly polluted information environment produce worse outputs. Trust erodes. And we drift into a world where the information we consume depends wholly on the alignment of a particular model and the commercial interests of those administering it. Matt makes the case that secure news and information supply chains could become a national security issue if this dynamic starts to accelerate.

---

If you enjoyed this conversation, please share it with someone or a community who you feel would benefit from listening. If you have any more time, do tell us what resonated, what didn't, and rate the show (it helps us grow the audience and get great guests like Matt)!

---

For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
We sit down with Rok Popov Ledinski — an independent legal AI and data consultant whose background spans high-security enterprise engineering through to advising law firms on their AI and security strategy. Our initial interest in Rok's work was sparked by his YouTube channel, where he's been producing sharp, accessible breakdowns of the real risks underpinning today's AI tools.

Within minutes, we're into a forensic dissection of Anthropic's Claude Cowork — the agentic tool pitched at non-developers that launched earlier this year. Rok walks us through the contradictions in Anthropic's own technical documentation: a tool demonstrated by its creators as a way to organise your desktop, while the same support pages advise against granting it access to sensitive local files. A tool marketed for running tasks autonomously in the background — while its activity isn't captured by audit logs. A tool whose safety guidance asks users to watch for "suspicious actions that may indicate prompt injections" — aimed at an audience that, as Rok points out, has largely never heard of prompt injections.

Rok explains, in terms accessible to non-technical listeners, how hidden instructions embedded in an innocuous document can hijack an AI agent into exfiltrating sensitive client data. His hypothetical attack vector for law firms is disarmingly simple: find lawyers on LinkedIn who are openly using Cowork, send a document to their publicly available email address containing concealed instructions, and let the agent do the rest.

But this isn't an anti-AI conversation. Rok is emphatic that these tools should be used — just not naively.
Drawing on enterprise security frameworks from companies like Cisco, he advocates for a practical middle ground: map what your AI has access to, create sanitised copies of sensitive folders, scope permissions tightly, vet your MCP servers and plugins, and understand (physically, not just contractually) how data flows through your systems.

Key Takeaways:
- The Cowork Paradox: Anthropic's own documentation reveals a tension between how Cowork is marketed (autonomous, background task execution) and how it should be used (limited permissions, no sensitive files, manual monitoring for prompt injections).
- Security attacks are now a "when", not an "if": Unlike traditional cybersecurity breaches, prompt injection attacks exploit a fundamental limitation of large language models — they can't distinguish instructions from data. Research shows success rates as high as 90% for some proprietary LLMs. Claude is among the more resistant, but not immune.
- Practical security for legal teams: Rok's actionable advice for in-house teams and law firms includes creating clean data environments separate from originals; using self-hostable workflow tools like n8n; scoping AI permissions to the minimum necessary; and conducting genuine due diligence on every plugin and MCP server before connecting it to your systems.

Key References:
- Rok's YouTube channel: where our interest in Rok's work began, and a recommended follow for anyone wanting to stay across the security dimensions of legal AI adoption
- Rok's LinkedIn: he hosts weekly live sessions every Saturday with a security expert specialising in air-gapped, offline AI deployments in regulated industries
- The Art of Modern Legal Warfare: a series on vulnerability types specific to legal AI use cases, which Rok co-authors with former guest and friend of the show Anna Guo and with Sakshi Udeshi.

If you enjoyed this conversation, please share it with someone or a community who you feel would benefit from listening. If you have any more time, do tell us what resonated, what didn't, and rate the show (it helps us grow the audience and get great guests like Rok)!
🎙️ Alex and Tom step aside for this one — handing the mic to their friend Catie Sheret (General Counsel at Cambridge University Press & Assessment), who hosts a rich three-way conversation with Oliver Patel (Head of Enterprise AI Governance at AstraZeneca) and Peter Lee (Partner at Simmons & Simmons). Three very different vantage points — converging on the same question: how do you actually make AI governance work in practice?

What begins with a definitional exercise (what is AI governance, anyway?) quickly evolves. Oliver draws a sharp line between AI ethics, responsible AI, AI governance and AI safety as related but distinct disciplines — and makes a passionate case that governance is fundamentally change management, not compliance theatre. Peter describes the "golden thread" he sees in the best organisations: corporate philosophy flowing from the boardroom right down into the tools people use every day. Catie grounds everything in context — arguing that your principles only stick when they're anchored to what your organisation actually does: content IP at Cambridge, medical ethics at AstraZeneca, and so on.

The conversation builds through the practical mechanics — use case assessment, vendor oversight, committee structures, crisis preparation — before arriving at the question everyone's wrestling with: agentic AI. Peter frames it as a mindset shift from "can we trust the output?" to "what actions can this system initiate?"
Oliver goes further: the fundamental logic of agentic AI, he argues, is to take the human out of the loop — and organisations need to confront that honestly rather than pretending otherwise.

There's a wonderful thread on human flourishing running throughout — Peter's insistence that philosophers have never been more important, Oliver's pride in AstraZeneca's "Thriving in the Age of AI" literacy programme, and a closing round of book recommendations that ranges from Richard Susskind's How to Think About AI, to Jenny Odell's How to Do Nothing (Oliver's brilliantly contrarian pick about the importance of stepping away from screens entirely), to Governing the Machine by Ray Eitel-Porter, Paul Dongha and Miriam Vogel.

It's a masterclass in how to think about governance as something that enables rather than constrains — hosted with warmth and real expertise by Catie.

If you enjoyed this episode, please share it with another friend, team or community who might also enjoy it! Let us know what resonated (by comment) and rate the show (if you haven't already)! We appreciate your time, attention and support!

For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) insights from leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis and visualisation of how AI is augmenting our potential.
The vibe coding conversation in legal has gone full culture war: one side says they've built a billion-dollar startup in 10 minutes, the other says don't bother. The truth — as usual — is far more interesting than either extreme.

🎙️ This week we sit down with Chris Bridges (Co-Founder & COO, Tacit Legal) and Matt Pollins (Co-Founder & CPO, Lupl) — two legal technologists who live in the same small town in West Sussex and who've channelled that proximity into building vibecode.law, an open-source platform where the legal community can share, discover and upvote vibe-coded legal tech projects.

The platform launched just over a week before we recorded and already had 18 projects — from a SaaS inflation calculator for contract lawyers, to a Harvey for Mongolian law, to a tool that unlocks track changes when a passive-aggressive opposing lawyer has locked them down.

During our chat, we explore:
- Why vibe coding's real value is compressing the feedback loop between idea and prototype — not replacing developers
- The structural gap: how 25 years of developer tooling (linting, testing, documentation, standards) gives engineering-focussed AI tools a head start that legal tech can't shortcut
- Why the adversarial nature of law makes standardisation fundamentally harder than in software
- vibecode.law: what it is, the projects landing on it, and the product thinking behind building a two-sided community
- Responsible vibe coding, and why we're probably 6–12 months from a data exposure incident
- The T-shaped lawyer: curiosity as the defining skill for the next generation

Connect with our guests:
- Chris Bridges — tacit.legal | author of When will legal vibe like code
- Matt Pollins — agents.law | lupl.com

Check out vibecode.law to explore or submit your own projects.

---

If you enjoyed this episode, please like, subscribe, comment, and share! For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/
🎙️ This week we sit down with Artur Serov — a Senior Commercial Counsel working in-house across corporate, commercial, and AI compliance — who has been quietly vibe coding legal tech solutions that rival features in commercial platforms.

This is a practical, how-I-did-it episode. Artur walks us through his journey from first principles — the failed early experiments, the tools that unlocked progress, and the specific steps any curious lawyer could follow to start building. Artur shares his screen during our conversation to demo a Word add-in with features he couldn't find in commercial legal tech (party-aware context, risk appetite dials, AI-powered negotiation prep), and previews a more ambitious workspace prototype where AI retains memory across an entire transaction lifecycle. Since publishing, this prototype has evolved, and you can read more about that here.

Artur is candid about what's now possible: with Claude Opus 4.5 and Gemini 3, self-built solutions can get remarkably close to enterprise-grade.
But he's equally honest about the remaining hurdles — deployment, maintenance, security — and his belief that a growing community of "vibe lawyers" will help solve them together.

---

What you might take from this conversation:
- The First Principles Path to Technical Fluency — How Artur went from zero coding experience to working prototypes, using Claude as a teacher and Google Antigravity as his development environment
- What's Missing from Commercial Legal Tech — Why context is the killer feature, and how Artur built deal-aware AI that knows who you represent, what you're negotiating, and what risks you're willing to take
- The Workspace Vision — A prototype where AI memory persists across NDAs, partnership agreements, and every document in a transaction — with your playbooks and policies embedded as reference materials
- Why Building Makes You Better at Everything Else — From vendor negotiations to IT collaboration, how technical fluency transforms your effectiveness as in-house counsel
- How to Get Started — Artur's practical advice: a Claude subscription, Google Antigravity, and the willingness to ask "how do I do this?"

---

Connect with Artur: LinkedIn | GitHub

---

If you found this episode interesting, please tell us and do share it with a friend, colleague or community who might take something from it! For more, head to lawwhatsnext.substack.com for: (i) focused conversations with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis of how AI is augmenting our potential.
We sit down with Anna Guo — a Singapore-based lawyer, startup advisor, and founder of LegalBenchmarks.ai — who has quietly built one of the most rigorous practitioner-driven evaluation frameworks for legal AI tools in the industry. Her community now spans close to 900 legal and AI professionals. Her research has produced findings that challenge industry assumptions: that legal-specific AI tools don't always outperform general-purpose models, that accuracy isn't actually the top driver of lawyer adoption, and that in some drafting tasks, AI is already matching or exceeding human reliability.

This is a watch-don't-only-listen episode. Anna shares her screen throughout — running us through a live, double-blind benchmarking exercise where we rank outputs from legal AI, general-purpose AI, and human lawyers without knowing which is which. She also demonstrates how prompt injection attacks can bypass AI guardrails using techniques as simple as low-resource languages (Vietnamese, or ASCII code?), surfacing security risks that become particularly acute as we move closer toward widespread agentic AI adoption.

What You'll Learn:
- The Three Dimensions of Tool Evaluation — Why measuring accuracy alone misses the point, and how Anna assesses output reliability, output usefulness, and platform workflow support as distinct layers
- What Actually Drives Adoption — Survey data revealing that lawyers prioritise context management and verification over raw accuracy when choosing AI tools
- Where Humans Still Win — High-judgment, context-sparse tasks requiring commercial reasoning remain firmly in human territory; routine, context-complete work is where AI excels
- Prompt Injection in Practice — Live demonstrations of how attackers can trick AI models into revealing harmful information using low-resource languages and clever framing

---

Connect with Anna: LinkedIn | LegalBenchmarks.ai

---

If you found this episode interesting, please tell us and do share it with a friend, colleague or community who might take something from it! For more, head to lawwhatsnext.substack.com for: (i) focused conversations with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis of how AI is augmenting our potential.
Season 2 is here.

In our opener, we sit down with Anson Lai — commercial counsel by day, relentless tinkerer by night — who walks us through how he built and published a document review tool as a Microsoft Word add-in that rivals offerings from legal AI startups raising hundreds of millions.

The kicker? He did it in weeks. And he's giving it away.

This isn't theoretical. Anson shares his screen, shows us the tool live, and opens the hood on what makes it work. No mystique. No black box. Just a lawyer who got tired of copy-pasting contracts into ChatGPT tabs and decided to do something about it.

What You'll Learn:
- "Vibe Coding" — How conversing with AI tools (not just instructing them) shaped better technical decisions
- "Bring Your Own Key" Architecture — Why your documents going straight to Google's API (with no middleman) actually matters
- Where the Real Moat Lives — If building software now takes hours not months, differentiation lies in the refinements — the nested lists, tables, and edge cases where most AI tools quietly fall apart

Connect with Anson: LinkedIn | GitHub (Open Source Project)

If you found this episode interesting, please like, subscribe, comment, and share! For more, head to lawwhatsnext.substack.com for:
- Focused conversations with leading practitioners, technologists, and educators
- Deep dives into the intersection of law, technology, and organisational behaviour
- Practical analysis of how AI is augmenting our potential
Welcome to Law://WhatsNext - the show where we catch up with leading practitioners (lawyers, technologists, educators and more) who are leveraging emerging technologies to pursue their passions and objectives, and as a by-product we get nerdy trying to understand the implications for the future of legal practice (and, more broadly, knowledge work). To keep up with the pace of change and developments, subscribe to this channel or to our newsletter at: https://lawwhatsnext.substack.com/

----

In this episode, we've distilled a year of extraordinary dialogue into one 20-minute highlights reel. We've spent 2025 in conversation with legal industry pioneers — the general counsels, technologists, and educators redefining how law is practised, learned, and delivered. These are some of our standout moments from a series of compelling global conversations.

What made the reel (this could honestly be a multi-part series):

Part 1: Hype vs. Reality — Is AI progress real?
Kevin Cohn (the soon-to-be CEO of Brightflag) provokes that the trough of disillusionment is coming, but that shouldn't obscure the reality that the value of the skills and expertise we used to prize so highly is dramatically eroding.

Part 2: Agency, authenticity & trust
Dana Rao (the former GC & Chief Trust Officer at Adobe) demonstrates that we can be the agents (rather than mere subjects) of positive change, and we loved learning more about the work he and his team at Adobe invested to build the Content Authenticity Initiative (to counter the ever-increasing proliferation of deepfakes).

Part 3: Leading in disruptive times
Jessica Block (EVP at Factor) used a recent read (Notes on Complexity by Neil Theise) as the lens through which she explained the importance of cultivating the right environment (over systems) for the emergent properties of transformational change to "bubble" up.
Part 4: Evaluating what's actually working
Sigge Labor (President at Legora) explained the work Legora performs to understand frontier model performance, and how they react to new developments and assess leaps in capabilities. We anticipate that in 2026 more and more legal teams and firms will invest in their evaluation capabilities, and this conversation (which accompanied the release of GPT-5 in the summer) is one to check out if you haven't already.

Part 5: The skills we might lose
Dan Hunter (Executive Dean, The Dickson Poon School of Law, King's College London) talked of the "terrifying bind" we encounter as we offload more and more cognitive work to compute: the work may get easier and more efficient, but our cognitive development doesn't replicate (in terms of resilience) the old training pathway. He has immediate concerns in the classroom and anticipates a coming gap in law firm talent pipelines.

These are just glimpses. Check out our Spotify, Apple Podcasts, or Substack pages for the full conversations. Thank you for listening, supporting, and championing the show. We wish you a happy new year — Series 2 is coming soon 👀
We're joined by Peter Duffy for our quarterly ritual of dissecting the big headlines of Peter's popular Legal Tech Trends newsletter and ruminating on their potential implications for legal service delivery. Peter returns wide-eyed and optimistic, fresh off some time in the US, where he enjoyed attending TLTF in Austin.

What gets covered:
- The eternal "legal-specific vs. frontier model" debate — With Gemini 3 dropping and capabilities proliferating into vertical spaces, Peter weighs in on whether specialised legal AI still has an edge.
- PE is coming for BigLaw — McDermott exploring MSO structures to let private equity in; 20% of UK firms eyeing PE money. We explore the uncomfortable questions: (i) does outside capital corrupt lawyer independence? (ii) does PE change the fabric of the firm and its operation?
- The vibes have shifted — A wild stat from the PwC Law Firm Survey: the share of Top-100 law firms expecting AI to boost revenue dropped from 69% (2023) to 31% (2025). Meanwhile, in-house teams are having their main character moment with a 24-point jump in AI optimism. Is this gap telling?
- Product chaos continues — Norm AI spinning up an actual law firm (!), Crosby raising $20M for Slack-native contract review, Legora's client portal coming Q1 2026, and Linklaters designating 20 "AI lawyers" to build workflows.

Listen if: you're worried your LinkedIn feed isn't giving you enough legal technology news 😂, or maybe you're curious to see what else is going on out there (beyond this platform)?

Rate, subscribe, comment, and share if you enjoyed this chat with Peter!

For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) discussions with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis and visualisation of how AI is augmenting our potential.
Hot on the heels of breaking legal LinkedIn last week, we caught up with Jamie Tso - a Hong Kong-based lawyer who's been building in public and sparking conversations across the legal community with his viral AI creations.

This is a watch (don't only listen) episode. Jamie screen-shares his way through Google AI Studio, live-coding lightweight versions of legal tech tools we all know. Jamie walks through his "SpellPage" contract editor (inspired by a novel-writing app, naturally), demonstrates real-time AI-powered redlining, and casually drops the concept of an open-source "legal AI operating system" built from first principles that could democratise access to the common technology workflows being built to support everyday legal practice. His philosophy? The barrier to entry is now so low that sophisticated AI tools "should be free, more or less."

Key moments:
- Live demo of AI-powered contract editing with natural language instructions
- Why Google AI Studio is the ultimate one-stop shop (native API keys, version control, GitHub integration, no coding required)
- The shift from chatting with AI → AI using tools → AI spinning up mini-apps on the go
- Jamie's vision for consolidating legal workflows into reusable, customisable modules

Must-read context: check out Jamie's viral posts that sparked this conversation:
- The contract editor build
- "Gemini 3 is basically AGI at this point"

This is what building in the age of AI looks like: experimental, exhilarating, unnerving and transformative. If you enjoyed this episode with Jamie, please like, subscribe, comment, and share!

For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) focused conversations with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis and visualisation of how AI is augmenting our potential.
Alex and Tom step aside for this one — no hosts, no scripts — just Andy Cooke (CLO of Perk) and Sam Ross (CLO of Remote) in conversation. What begins with nursery vomiting bugs quickly evolves into a refreshingly honest exploration of what it means to lead a legal function in disruptive technology companies.

They dissect the tension between being "bold, brief, and gone" versus staying in the room to build genuine relationships, challenge the limiting "trusted advisor" archetype, and wrestle with when precision matters less than context and authentic communication. Andy and Sam don't just theorise — they get personal about the moments that test you: deciding whether someone's lying on an expense claim, navigating board dynamics when you're the least financially fluent person in the room, and maintaining ethical standards when the stakes are existential.

But this isn't a heavy-handed meditation on professional responsibility. The conversation crackles with levity and self-awareness - from Sam's admission that he deliberately uses humour to "pierce through" hierarchy, to their shared recognition that being fallible and human is part of doing the job right. Both find genuine joy in what they do, drawing energy from learning from others and building networks that pay dividends years later.

It's a masterclass in thoughtful leadership wrapped in the warmth of two friends who clearly respect each other's craft - and aren't afraid to acknowledge when they get it wrong.

If you enjoyed this episode, please like, subscribe, comment, and share! It gives us a warm fuzzy feeling and helps make our podcast more discoverable to other podcast aficionados!
For more thought-provoking conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) focused conversations with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis and visualisation of how AI is augmenting our potential.
Creating content has never been easier. With both LLMs and world models (Sora 2, Veo 3, Marble) the fidelity of what we can produce at the tip of a prompt is getting genuinely scary. Guy Shahar is the CEO and founder of Blee, a Y Combinator-backed AI content compliance platform that helps companies review and oversee marketing materials at scale. Before founding Blee nearly four years ago, Guy led marketing operations at Adobe for five years, and he has been witnessing firsthand the explosion of AI-generated content and deliberating on the implications. We sit down with Guy in this short conversation to discuss: (1) the rising proliferation of AI generated content; (2) the cyber-like threat of deepfakes and bad actor impersonation; and (3) the new opportunities large language and world models present for some of the world's largest brands in how they generate and manage their production of compelling content.Key TakeawaysThe "Content Tsunami" is here and it's only getting bigger - Content creation has exploded with AI, fundamentally changing the speed and volume at which companies can produce marketing materials. What used to take weeks now happens in hours. Guy calls this the "content tsunami" - a relentless wave of content being generated across all digital channels. But the gap between how fast content can be created and how fast it can be safely approved is widening, creating significant risk exposure for companies and their brands.Deepfakes aren't just a detection problem - they're a trust problem - One real danger of deepfakes isn't just that bad actors can create convincing fake content - it's that they're eroding trust in everything we see online. The recent deepfake of Irish presidential candidate Catherine Connolly which went viral in Ireland, which falsely showed her withdrawing from the race just days before the election and remained live for 12 hours, demonstrates how sophisticated and damaging this content has become. 
AI compliance creates new opportunities for how teams work: While AI-generated content creates new risks, it also opens unprecedented opportunities to transform workflows and team structures. Guy promotes the potential for companies to rethink their entire "content supply chain": testing 50 or 100 versions of marketing materials instead of just two, delivering hyper-personalised content at scale, and breaking down silos between marketing, legal, GTM, and compliance teams.

Key References from Our Conversation

Catherine Connolly Deepfake Incident: An AI-generated video falsely depicting Irish presidential candidate Catherine Connolly withdrawing from the race surfaced just days before the October 2025 election and was viewed nearly 30,000 times over 12 hours before Meta removed it: a stark example of how deepfakes can threaten democratic processes and why rapid content monitoring matters.

Content Authenticity Initiative (CAI): An open standard verification system with over 900 member companies working to authenticate digital content and combat deepfakes through content credentials and metadata tracking.

Dana Rao: Adobe's former General Counsel and Chief Trust Officer, mentioned for his perspective on deepfakes and the transition from trying to detect fakes to proving authenticity. Dana appeared on an earlier episode of Law://WhatsNext, which you can access here.

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
We sit down with Kevin Cohn, Chief Customer Officer at Brightflag, who occupies one of the more unusual vantage points in legal services – the interface between corporate legal departments and their outside counsel. We're sure any AI startup would pay a premium to understand the work that goes to law firms and the value and time it takes to deliver it, right? By processing billions of dollars in legal invoices, Kevin and his team have unprecedented visibility into macro trends – from law firm partner utilisation patterns to staffing changes.

In this lively but familiar conversation (we've all known one another for a few years), Kevin reveals some emerging trends highly relevant to the technological revolution we are all experiencing. Beyond the predictable conversation about rising law firm rates, Kevin shares two interesting developments the Brightflag team are noticing:

- increased partner utilisation (which might actually be good news if total hours are decreasing?); and
- something more troubling: what Kevin diplomatically calls "not the most above board" AI-enabled billing practices, with ever more invoices showing suspicious six-minute increments.

We also talk about the evolution of skills and relationships in an era when both clients and counsel are being shaped by automation and analytics.

Kevin is also our first Law://WhatsNext guest to have an AI version of himself shipped and deployed to give Brightflag customers on-demand access to his expertise on legal operations and spend management. While Kevin Clone can handle questions about invoice review and matter management workflows with ease, we discover its limits when we request an Italian wine pairing. The Clone politely deflects: "I'm here to focus on legal operations and Brightflag." Some things simply can't be replicated by AI.
The real Kevin remains irreplaceable.

Key References

- Brightflag LinkedIn Post — Introducing Kevin Clone
- Ask Kevin Clone a Question — Brightflag

If you enjoyed this episode, please like, subscribe, comment, and share! It helps more people discover conversations like this. For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
In a twist to what has probably become our "normal" programming, this episode features just the two of us in conversation. We explore the implications of technological progress - from the shift we're contemplating from AI-infused linear workflows to fully agentic ones, to the risks and vulnerabilities baked into today's LLM architectures. Essentially, it's the kind of discussion we often have offline, brought into the open.

The following pieces ground our discussion:

From linear AI-infused workflows to fully agentic - new skills and orchestration challenges:
- Legal AI's Future Is Railroads, But Speeding Up Canals Still Makes Sense For Now by Alex Herrity
- The Problem with Agentic AI in 2025 by Sangeet Paul Choudary - the original article featuring the canals vs railroads analogy that inspired Alex's piece

Prompt Injection Attacks & AI Governance:
- The Lethal Trifecta for AI Agents by Simon Willison - defining the three dangerous elements that enable prompt injection attacks
- Prompt Injections as Far as the Eye Can See by Simon Willison - Johann Rehberger's "Month of AI Bugs" research demonstrating widespread prompt injection vulnerabilities
- I Accidentally Became a ChatGPT Surveillance Node by Juliana Jackson - the article Tom and Alex discuss revealing OpenAI's buggy infrastructure leaking private conversations
- ChatGPT Scrapes Google and Leaks Your Prompts - Quantable Analytics - a technical breakdown of the ChatGPT prompt leakage issue

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
This week we sit down with Memme Onwudiwe for a conversation that starts in a Harvard Law classroom, transitions to his building an AI company before ChatGPT was a thing, and ends up in outer space 🚀

Memme co-founded Evisort while at Harvard Law School in 2016, building AI-powered contract intelligence from the Harvard Innovation Lab years before it became mainstream. Workday acquired the company in October 2024, and Memme now serves there as an AI Evangelist. Memme returns to Harvard each spring to teach legal entrepreneurship alongside co-founder Jerry Ting, and he's a published space law scholar whose paper "Africa and the Artemis Accords" examines how emerging nations can secure their stake in the space economy.

Key References

Academic Research
- Africa and the Artemis Accords — Memme Onwudiwe & Kwame Newton, New Space (2021)

Legal Frameworks
- Artemis Accords — non-binding bilateral space exploration principles (2020, 55+ signatories)
- Outer Space Treaty — foundational UN space law treaty (1967)
- Moon Agreement — "common heritage" framework (1979, 18 signatories)

Organizations
- Harvard Innovation Labs — where Evisort was founded
- CLOC — Corporate Legal Operations Consortium (6,300+ members)
- Space Beach Law Lab — annual space law conference, Feb 24-26, 2026, Long Beach

Corporate
- Workday–Evisort Acquisition — ~$310M, closed Oct 2024

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com.
Nicole Braddick needs no introduction - but if you had to rush one for the purposes of publishing a podcast 👀 you might say she's the Global Head of Innovation at Factor Law, following the February 2025 acquisition of her company, Theory & Principle, where she served as CEO and Founder.

A former trial lawyer who transitioned into legal tech 15 years ago, Nicole has been one of the industry's most persistent advocates for bringing modern design and development practices to legal technology. Her team has worked with leading law firms, legal tech companies, corporate legal departments, non-profits, and public sector organisations to build custom solutions focused on user experience, transforming an industry that, when she started, was "purely functional" and "engineering-led" into one where good design is finally recognised as essential.

We get into all of that and more during our discussion, and lean in hard on Nicole's system-wide view of what's happening at present.

Key Takeaways

Nicole argues that the calculation around build versus buy has fundamentally changed with generative AI. Corporate legal departments should consider getting enterprise accounts with providers like Anthropic or OpenAI, and should be building their muscles for developing internal customised solutions rather than defaulting to SaaS products.

The proliferation of chatbots in law was appropriate when everyone was experimenting with generative AI, but Nicole believes the industry has overcorrected. Chat interfaces place enormous cognitive load on users, who must craft effective prompts, whereas traditional point-and-click UIs guide users through structured workflows. Nicole sees the future lying in hybrid experiences.

While the AI industry races toward autonomous agents, Nicole sounds a cautionary note for legal applications.
The entire value proposition of agents is "getting rid of control", but lawyers have to wrestle with their ethical obligations and duties to control, to check, and to approve. Nicole sees this as a fascinating design challenge: where previous UX best practice focused on removing friction to create seamless experiences, her team is now actively considering where to strategically add friction and interruption points. The goal is to prevent lawyers from blindly clicking "yes, yes, yes" while avoiding so much friction that they abandon the tool.

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
We sit down with Stephan Breidenbach, co-founder of the Rulemapping Group and a German scholar who's been quietly revolutionising how we think about law, technology, and democratic governance since the early 2000s.

What started as a teaching tool to help law students visualise complex legal reasoning has evolved into something far more ambitious: a comprehensive system for transforming laws into executable code that maintains human oversight while dramatically improving access to justice.

Stephan's present work spans three critical areas: decision automation (turning legal rules into fast, transparent systems), rule-based AI (supporting human lawyers with explainable reasoning), and law as code (drafting legislation that's both human- and machine-readable from day one).

Some of our highlights from the conversation:

The Transparency Imperative: "I would never trust an LLM with a legal process because it's confabulating," Stephan declares, highlighting why the Rulemapping approach prioritises explainable AI over black-box solutions.
Their system lets human decision-makers see exactly how the AI reached its conclusions – a "zoom in, zoom out" process that mirrors how lawyers naturally think.

Democracy-First Technology: Unlike Silicon Valley's "move fast and break things" mentality, Stephan advocates for keeping humans in the loop even when AI becomes more accurate: "I think it's very important for trust in the legal system and therefore in a democratic system that there are human beings, even if they make worse decisions."

Access to Justice at Scale: Through real-world deployments like processing 500,000 diesel emission scandal cases and serving as Europe's first certified Digital Services Act dispute resolution body, Rulemapping demonstrates how thoughtful automation can make legal systems accessible to everyone, not just those who can afford lawyers.

We also explore the behavioural risks of over-relying on automated systems, the potential for "law as code" to improve democratic participation, and Stephan's vision of embedded law that serves citizens rather than bureaucracy.

If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
We catch up with Ben Martin, the former Director of Privacy at Trustpilot and author of "GDPR for Startups," who's currently living his best life somewhere in the Estonian wilderness with a camper van, fishing rod, and blessed freedom from subject access requests.

Having built privacy programs at high-growth companies like Trustpilot, Ovo Energy, and King Digital Entertainment, Ben brings a refreshingly practical perspective to privacy law that goes way beyond compliance theatre. From his sabbatical perch in the Nordics, he reflects on everything from why GDPR hasn't quite delivered its promised outcomes to how privacy lawyers are uniquely positioned to lead AI governance.

What We Cover:

- The Sabbatical Chronicles: Ben's epic Nordic adventure, and why stepping away from work sometimes gives you the clearest perspective on it
- Privacy Program Building: moving from compliance theatre to business enablement, and why good privacy programs start with genuine curiosity about products
- GDPR Reality Check: why the regulation might not yet have delivered its intended outcomes, and the types of privacy lawyers and approaches Ben sees in practice
- AI Governance Evolution: how privacy professionals are naturally stepping into AI oversight roles and what new skills they need to develop
- Technical Literacy: the importance of understanding what your business actually builds, and Ben's practical approach to learning complex technical concepts

Key References:

- GDPR for Startups - Ben's practical guide to building privacy programs in high-growth companies
- Field Fisher Privacy Newsletter - a legal developments summary that Ben recommends for staying current
- Hard Fork Podcast - Ben's go-to for broad tech and AI developments
- Lovable - the AI coding platform Ben's been experimenting with to build his habit tracker (and recruit his girlfriend as user number one)

If you found this episode interesting, please like, subscribe, comment, and share!
For more thought-provoking conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
This week we sat down with Dan Hunter, Executive Dean of the Dickson Poon School of Law at King's College London and serial legal tech entrepreneur. Dan's journey spans academia across three continents, four successful startups (including his current venture GraceView), and decades of research on the cognitive science of legal reasoning. As both an educator training the next generation of lawyers and an entrepreneur building AI-powered legal solutions, he offers a unique dual perspective on the transformation underway across knowledge work.

Key Takeaways

1. The Learning Paradox: AI Makes Us Feel Smarter While Making Us Dumber
Students using large language models consistently perform better on assignments and believe they're learning more, but when the AI is removed, they've retained virtually nothing. This creates a dangerous illusion of competence (sycophantic models propagate this!) that law schools and firms must address through new assessment methods and training approaches.

2. We're Heading Toward a "Barbell" Legal Profession
Traditional pyramid law firm structures will collapse as AI automates much of the work. Dan believes the future involves senior lawyers managing client relationships at the top, AI agents handling routine tasks in the middle, and "legal engineers" swarming around validating AI outputs and steering the models.

3. Entry-Level Legal Jobs Are Already Disappearing
We discuss the recent Stanford research "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence" by Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, Stanford Digital Economy Lab (2025) - the landmark study using ADP payroll data showing a 13% employment decline for young workers in AI-exposed occupations.

Interested in more?

If you found this episode interesting, please like, subscribe to the show, comment, and share!
For more thought-provoking content at the intersection of law and technology, head to our Law://WhatsNext home for:

- Focused conversations with leading practitioners, technologists, and educators
- Deep dives into the intersection of law, technology, and organisational behaviour
- Practical analysis and visualisation of how AI is augmenting our potential























