DiscoverLaw://WhatsNext
Law://WhatsNext
Claim Ownership

Law://WhatsNext

Author: Tom Rice and Alex Herrity

Subscribed: 1Played: 27
Share

Description

How are leading practitioners leveraging emerging technologies and ways of working to pursue their passion and objectives, and as a by product what are the implications for the future of legal practice? Let’s explore this together. What to expect:

- Focused conversations with leading practitioners; technologists and educators
- Deep dives into the intersection of law, technology, and organisational behaviour
- Practical analysis and visualisation of how AI is augmenting our potential
- Insights from adjacent industries that might inform our own
33 Episodes
Reverse
The vibe coding conversation in legal has gone full culture war: one side says they've built a billion-dollar startup in 10 minutes, the other says don't bother. The truth — as usual — is far more interesting than either extreme.🎙️This week we sit down with Chris Bridges (Co-Founder & COO, Tacit Legal) and Matt Pollins (Co-Founder & CPO, Lupl) — two legal technologists who live in the same small town in West Sussex and who've channelled that proximity into building vibecode.law, an open-source platform where the legal community can share, discover and upvote vibe-coded legal tech projects.The platform launched just over a week before we recorded and already had 18 projects — from a SaaS inflation calculator for contract lawyers to a Harvey for Mongolian law to a tool that unlocks track changes when a passive-aggressive opposing lawyer has locked them down.During our chat, we explore:Why vibe coding's real value is compressing the feedback loop between idea and prototype — not replacing developers The structural gap: how 25 years of developer tooling (linting, testing, documentation, standards) gives engineering focussed AI tools a head start that legal tech can't shortcut Why the adversarial nature of law makes standardisation fundamentally harder than in software vibecode.law: what it is, the projects landing on it, and the product thinking behind building a two-sided community Responsible vibe coding and why we're probably 6–12 months from a data exposure incident The T-shaped lawyer: curiosity as the defining skill for the next generationConnect with our guests:Chris Bridges — tacit.legal | author of When will legal vibe like codeMatt Pollins — agents.law | lupl.comCheck out vibecode.law to explore or submit your own projects.---If you enjoyed this episode, please like, subscribe, comment, and share! For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/
🎙️ This week we sit down with Artur Serov — a Senior Commercial Counsel working in-house across corporate, commercial, and AI compliance — who has been quietly vibe coding legal tech solutions that rival features in commercial platforms.  This is a practical, how-I-did-it episode. Artur walks us through his journey from first principles — the failed early experiments, the tools that unlocked progress, and the specific steps any curious lawyer could follow to start building.  Artur shares his screen during our conversation to demo a Word add-in with features he couldn't find in commercial legal tech (party-aware context, risk appetite dials, AI-powered negotiation prep), and previews a more ambitious workspace prototype where AI retains memory across an entire transaction lifecycle.  Since publishing this prototype has evolved, and you can read more about that here.  Artur is candid about what's now possible: with Claude Opus 4.5 and Gemini 3, self-built solutions can get remarkably close to enterprise-grade. But he's equally honest about the remaining hurdles — deployment, maintenance, security — and his belief that a growing community of "vibe lawyers" will help solve them together.---What you might take from this conversation:The First Principles Path to Technical Fluency — How Artur went from zero coding experience to working prototypes, using Claude as a teacher and Google Antigravity as his development environmentWhat's Missing from Commercial Legal Tech — Why context is the killer feature, and how Artur built deal-aware AI that knows who you represent, what you're negotiating, and what risks you're willing to takeThe Workspace Vision — A prototype where AI memory persists across NDAs, partnership agreements, and every document in a transaction — with your playbooks and policies embedded as reference materialsWhy Building Makes You Better at Everything Else — From vendor negotiations to IT collaboration, how technical fluency transforms your effectiveness as in-house counselHow to Get Started — Artur's practical advice: a Claude subscription, Google Antigravity, and the willingness to ask "how do I do this?"---Connect with Artur: LinkedIn | Github---If you found this episode interesting, please tell us and do share it with a friend, colleague or community who might take something from it! For more, head to⁠ lawwhatsnext.substack.com⁠ for: (i) Focused conversations with leading practitioners, technologists, and educators; (ii) Deep dives into the intersection of law, technology, and organisational behaviour; and (iii) Practical analysis of how AI is augmenting our potential.
We sit down with Anna Guo — a Singapore-based lawyer, startup advisor, and founder of LegalBenchmarks.ai — who has quietly built one of the most rigorous practitioner-driven evaluation frameworks for legal AI tools in the industry. Her community now spans close to 900 legal and AI professionals. Her research has produced findings that challenge industry assumptions: that legal-specific AI tools don't always outperform general-purpose models, that accuracy isn't actually the top driver of lawyer adoption, and that in some drafting tasks, AI is already matching or exceeding human reliability.This is a watch-don't-only-listen episode. Anna shares her screen throughout — running us through a live, double-blind benchmarking exercise where we rank outputs from legal AI, general-purpose AI, and human lawyers without knowing which is which. She also demonstrates how prompt injection attacks can bypass AI guardrails using techniques as simple as low-resource languages (Vietnamese or ASCII code?), surfacing security risks that become particularly acute as we move closer toward widespread agentic AI adoption.What You'll Learn:The Three Dimensions of Tool Evaluation — Why measuring accuracy alone misses the point, and how Anna assesses output reliability, output usefulness, and platform workflow support as distinct layersWhat Actually Drives Adoption — Survey data revealing that lawyers prioritise context management and verification over raw accuracy when choosing AI toolsWhere Humans Still Win — High-judgment, context-sparse tasks requiring commercial reasoning remain firmly in human territory; routine, context-complete work is where AI excelsPrompt Injection in Practice — Live demonstrations of how attackers can trick AI models into revealing harmful information using low-resource languages and clever framing---Connect with Anna: LinkedIn | LegalBenchmarks.ai---If you found this episode interesting, please tell us and do share it with a friend, colleague or community who might take something from it! For more, head to lawwhatsnext.substack.com for: (i) Focused conversations with leading practitioners, technologists, and educators; (ii) Deep dives into the intersection of law, technology, and organisational behaviour; and (iii) Practical analysis of how AI is augmenting our potential.
Season 2 is here.In our opener, we sit down with Anson Lai — commercial counsel by day, relentless tinkerer by night — who walks us through how he built and published a document review tool as a Microsoft Word add-in that rivals offerings from legal AI startups raising hundreds of millions.The kicker? He did it in weeks. And he's giving it away.This isn't theoretical. Anson shares his screen, shows us the tool live, and opens the hood on what makes it work. No mystique. No black box. Just a lawyer who got tired of copy-pasting contracts into ChatGPT tabs and decided to do something about it.What You'll Learn:"Vibe Coding" — How conversing with AI tools (not just instructing them) shaped better technical decisions"Bring Your Own Key" Architecture — Why your documents going straight to Google's API (with no middleman) actually mattersWhere the Real Moat Lives — If building software now takes hours not months, differentiation lies in the refinements — the nested lists, tables, and edge cases where most AI tools quietly fall apartConnect with Anson: LinkedIn | GitHub (Open Source Project)If you found this episode interesting, please like, subscribe, comment, and share! For more, head to lawwhatsnext.substack.com for:Focused conversations with leading practitioners, technologists, and educatorsDeep dives into the intersection of law, technology, and organisational behaviourPractical analysis of how AI is augmenting our potential
Welcome to Law://WhatsNext - the show where we catch up with leading practitioners (lawyers; technologists; educators and more) who are leveraging emerging technologies to pursue their passion and objectives, and as a by product we get nerdy trying to understand the implications for the future of legal practice (and more broadly, knowledge work). To keep up with the pace of change and developments subscribe to this channel or to our newsletter at: https://lawwhatsnext.substack.com/----In this episode, we've distilled a year of extraordinary dialogue into one 20-minute highlights reel. We've spent 2025 in conversation with legal industry pioneers — the general counsels, technologists, and educators redefining how law is practised, learned, and delivered. These are some of our standout moments from a series of compelling global conversations. What made the reel (this could honestly be a multi-part series):Part 1: Hype vs. Reality — Is AI progress real?Kevin Cohn (the soon to be CEO of Brightflag) provokes that the trough of disillusionment is coming but that shouldn't blight the reality that the value in the skills and expertise we used to highly prize are dramatically eroding Part 2: Agency, authenticity & trustDana Rao (the former GC & Chief Trust Officer at Adobe) demonstrates that we can be the agents (rather than mere subjects) of positive change, and we loved learning more about the work he and his team at Adobe invested to build the Content Authenticity Initiative (to counter the ever increasing proliferation of deepfakes)Part 3: Leading in disruptive timesJessica Block (EVP at Factor) used a recent read (Notes on Complexity by Neil Theise) as the lens through which she explained the importance of cultivating the right environment (over systems) for the emergent properties of transformational change to "bubble" up. Part 4: Evaluating what's actually workingSigge Labor (President at Legora) explained for us the work that Legora performs to understand frontier model performance and how they react to new developments and assess leaps in capabilities. We anticipate that in 2026 more and more legal teams and firms will invest in their evaluation capabilities, and this conversation (that accompanied the release of GPT5 in the summer) is one to check out if you haven't already. Part 5: The skills we might loseDan Hunter (Executive Dean, The Dickson Poon School of Law, King's College London) talked of the "terrifying bind" we encounter as we offload more and more cognitive work to compute - the work may get easier and more efficient but our cognitive development doesn't replicate (in terms of resilience) the old training training pathway. He has immediate concerns in the classroom and anticipates a coming gap in law firm talent pipelines.These are just glimpses. Check out our Spotify, Apple Podcasts, or Substack pages for the full conversations. Thank you for listening, supporting, and championing the show. We wish you a happy new year — Series 2 is coming soon 👀
We're joined by Peter Duffy for our quarterly ritual of dissecting the big headlines of Peter's popular Legal Tech Trends newsletter and ruminating on their potential implications for legal service delivery.  Peter returns wide eyed and optimistic fresh off some time in the US, where he enjoyed attending TLTF in Austin.What gets covered:The eternal "legal-specific vs. frontier model" debate — With Gemini 3 dropping and capabilities proliferating into vertical spaces, Peter weighs in on whether specialised legal AI still has an edge.PE is coming for BigLaw — McDermott exploring MSO structures to let private equity in; 20% of UK firms eyeing PE money; we explore the uncomfortable questions: (i) does outside capital corrupt lawyer independence? (ii) does PE change the fabric of the firm and its operation?  The vibes have shifted — Wild stat emanating from the PWC Law Firm Survey - Top-100 law firms expecting AI to boost revenue dropped from 69% (2023) to 31% (2025). Meanwhile, in-house teams are having their main character moment with a 24-point jump in AI optimism. Is this gap telling? Product chaos continues — Norm AI spinning up an actual law firm (!), Crosby raising $20M for Slack-native contract review, Legora's client portal coming Q1 2026, and Linklaters designating 20 "AI lawyers" to build workflows.Listen if: You’re worried your LinkedIn feed isn’t giving you enough legal technology news 😂  OR maybe you’re curious to experiment to see what else is going on out there (beyond this platform)? Rate, subscribe, comment, and share if you enjoyed this chat with Peter!For more conversations at the intersection of law and technology, head to⁠ https://lawwhatsnext.substack.com/⁠ for: (i) discussions with leading practitioners, technologists, and educators; (ii) Deep dives into the intersection of law, technology, and organisational behaviour; and (iii) Practical analysis and visualisation of how AI is augmenting our potential.
Hot off the heels of breaking legal LinkedIn last week, we caught up with Jamie Tso - a Hong Kong-based lawyer who's been building-in-public and sparking conversations across the legal community with his viral AI creations. This is a watch (don't only listen) episode. Jamie screen-shares his way through Google AI Studio, live-coding lightweight versions of legal tech tools we all know. Jamie walks through his "SpellPage" contract editor (inspired by a novel-writing app, naturally), demonstrates real-time AI-powered redlining, and casually drops the concept of an open-source "legal AI operating system" built from first-principles that could democratise access to common technology workflows we are building to support common practices across legal. His philosophy? The barrier to entry is now so low that sophisticated AI tools "should be free, more or less."Key moments:Live demo of AI-powered contract editing with natural language instructionsWhy Google AI Studio is the ultimate one-stop shop (native API keys, version control, GitHub integration, no coding required)The shift from chatting with AI → AI using tools → AI spinning up mini-apps on the goJamie's vision for consolidating legal workflows into reusable, customisable modulesMust-read context: Check out Jamie's viral posts that sparked this conversation:The contract editor build"Gemini 3 is basically AGI at this point"This is what building in the age of AI looks like - experimental, exhilarating, unnerving and transformative.  If you enjoyed this episode with Jamie, please like, subscribe, comment, and share!For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) Focused conversations with leading practitioners, technologists, and educators; (ii) Deep dives into the intersection of law, technology, and organisational behaviour; and (iii) Practical analysis and visualisation of how AI is augmenting our potential.
Alex and Tom step aside for this one—no hosts, no scripts—just Andy Cooke (CLO of Perk) and Sam Ross (CLO of Remote) in conversation. What begins with nursery vomiting bugs quickly evolves into a refreshingly honest exploration of what it means to lead a legal function in disruptive technology companies. They dissect the tension between being "bold, brief, and gone" versus staying in the room to build genuine relationships, challenge the limiting "trusted advisor" archetype, and wrestle with when precision matters less than context and authentic communication. Andy and Sam don't just theorise—they get personal about the moments that test you: deciding whether someone's lying on an expense claim, navigating board dynamics when you're the least financially fluent person in the room, and maintaining ethical standards when the stakes are existential. But this isn't a heavy-handed meditation on professional responsibility. The conversation crackles with levity and self-awareness - from Sam's admission that he deliberately uses humour to "pierce through" hierarchy, to their shared recognition that being fallible and human is part of doing the job right. Both find genuine joy in what they do, drawing energy from learning from others and building networks that pay dividends years later. It's a masterclass in thoughtful leadership wrapped in the warmth of two friends who clearly respect each other's craft - and aren't afraid to acknowledge when they get it wrong. If you enjoyed this episode, please like, subscribe, comment, and share! It gives us a warm fuzzy feeling and helps make our podcast more discoverable to other podcast aficionados! For more thought-provoking conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) Focused conversations with leading practitioners, technologists, and educators; and (ii) Deep dives into the intersection of law, technology, and organisational behaviour; and (iii) Practical analysis and visualisation of how AI is augmenting our potential.
Creating content has never been easier. With both LLMs and world models (Sora 2, Veo 3, Marble) the fidelity of what we can produce at the tip of a prompt is getting genuinely scary.  Guy Shahar is the CEO and founder of Blee, a Y Combinator-backed AI content compliance platform that helps companies review and oversee marketing materials at scale. Before founding Blee nearly four years ago, Guy led marketing operations at Adobe for five years, and he has been witnessing firsthand the explosion of AI-generated content and deliberating on the implications. We sit down with Guy in this short conversation to discuss: (1) the rising proliferation of AI generated content; (2) the cyber-like threat of deepfakes and bad actor impersonation; and (3) the new opportunities large language and world models present for some of the world's largest brands in how they generate and manage their production of compelling content.Key TakeawaysThe "Content Tsunami" is here and it's only getting bigger - Content creation has exploded with AI, fundamentally changing the speed and volume at which companies can produce marketing materials. What used to take weeks now happens in hours. Guy calls this the "content tsunami" - a relentless wave of content being generated across all digital channels. But the gap between how fast content can be created and how fast it can be safely approved is widening, creating significant risk exposure for companies and their brands.Deepfakes aren't just a detection problem - they're a trust problem - One real danger of deepfakes isn't just that bad actors can create convincing fake content - it's that they're eroding trust in everything we see online. The recent deepfake of Irish presidential candidate Catherine Connolly which went viral in Ireland, which falsely showed her withdrawing from the race just days before the election and remained live for 12 hours, demonstrates how sophisticated and damaging this content has become. AI compliance creates new opportunities for how teams work - While AI-generated content creates new risks, it also opens unprecedented opportunities to transform workflows and team structures. Guy promotes the potential for companies to rethink their entire "content supply chain" - testing 50 or 100 versions of marketing materials instead of just two, delivering hyper-personalised content at scale, and breaking down silos between marketing, legal, GTM and compliance teams. Key References from Our ConversationCatherine Connolly Deepfake Incident: An AI-generated video falsely depicting Irish presidential candidate Catherine Connolly withdrawing from the race surfaced just days before the October 2025 election, viewed nearly 30,000 times over 12 hours before Meta removed it - a stark example of how deepfakes can threaten democratic processes and why rapid content monitoring matters.Content Authenticity Initiative (CAI): an open standard verification system with over 900 member companies working to authenticate digital content and combat deepfakes through content credentials and metadata tracking.Dana Rao: Adobe's former General Counsel and Chief Trust Officer, is mentioned for his perspective on deepfakes and the transition from trying to detect fakes to proving authenticity - Dana appeared on an earlier episode of Law://WhatsNext which you can access here. If you found this episode interesting, please like, subscribe, comment, and share!For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) Focused conversations with leading practitioners, technologists, and educators; (ii) Deep dives into the intersection of law, technology, and organisational behaviour; and (iii) Practical analysis and visualisation of how AI is augmenting our potential.
We sit down with Kevin Cohn, Chief Customer Officer at Brightflag, who occupies one of the most unique vantage points in legal services – the interface between corporate legal departments and their outside counsel. We’re sure any AI start up would pay a premium to understand the work that is going to law firms and the value and time it takes to deliver that, right? By processing billions of dollars in legal invoices, Kevin and his team have unprecedented visibility to spot macro trends – from law firm partner utilisation patterns to staffing changes. In this lively but familiar conversation (we’ve each all known one another for a few years) Kevin reveals some emerging trends very relevant to the technological revolution we are all experiencing. Beyond the predictable conversation about rising law firm rates, Kevin shares two interesting developments the Brightflag team are noticing: increased partner utilisation (which might actually be good news if total hours are decreasing?)something more troubling – what Kevin diplomatically calls "not the most above board" AI-enabled billing practices, with ever more invoices showing suspicious six-minute increments.We also talk about the evolution of skills and relationships in an era when both clients and counsel are being shaped by automation and analytics.Kevin is also our first Law://WhatsNext guest to have an AI version of himself shipped and deployed to give Brightflag customers on-demand access to his expertise on legal operations and spend management. While Kevin Clone can handle questions about invoice review and matter management workflows with ease, we discover its limits when we request an Italian wine pairing. The Clone politely deflects: "I'm here to focus on legal operations and Brightflag." Some things simply can't be replicated by AI. The real Kevin remains irreplaceable.Key ReferencesBrightflag LinkedIn Post — Introducing Kevin CloneAsk Kevin Clone a Question — BrightflagIf you enjoyed this episode, please like, subscribe, comment, and share! It helps more people discover conversations like this.  For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) Focused conversations with leading practitioners, technologists, and educators; (ii) Deep dives into the intersection of law, technology, and organisational behaviour; and (iii) Practical analysis and visualisation of how AI is augmenting our potential.
In a twist to what has probably become our “normal” programming, this episode features just the two of us in conversation. We explore the implications of technological progress - from the shift we’re contemplating from AI-infused linear workflows to fully agentic ones, to the risks and vulnerabilities baked into today’s LLM architectures. Essentially, it’s the kind of discussion we often have offline, brought into the open.The following pieces ground our discussion:From linear AI-infused workflows to fully agentic - new skills and orchestration challengesLegal AI’s Future Is Railroads, But Speeding Up Canals Still Makes Sense For Now by Alex Herrity  The Problem with Agentic AI in 2025 by Sangeet Paul Choudary - The original article featuring the canals vs railroads analogy that inspired Alex's piecePrompt Injection Attacks & AI Governance:The Lethal Trifecta for AI Agents by Simon Willison - defining the three dangerous elements that enable prompt injection attacksPrompt Injections as Far as the Eye Can See by Simon Willison - Johann Rehberger's "Month of AI Bugs" research demonstrating widespread prompt injection vulnerabilitiesI Accidentally Became a ChatGPT Surveillance Node by Juliana Jackson - The article Tom and Alex discuss revealing OpenAI's buggy infrastructure leaking private conversationsChatGPT Scrapes Google and Leaks Your Prompts - Quantable Analytics - Technical breakdown of the ChatGPT prompt leakage issueIf you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) Focused conversations with leading practitioners, technologists, and educators; (ii) Deep dives into the intersection of law, technology, and organisational behaviour; and (iii) Practical analysis and visualisation of how AI is augmenting our potential
This week we sit down with Memme Onwudiwe for a conversation that starts in a Harvard Law classroom - transitions to his building an AI company before ChatGPT was a thing - and ends up in outer space 🚀Memme co-founded Evisort while at Harvard Law School in 2016, building AI-powered contract intelligence from the Harvard Innovation Lab years before it became mainstream. Workday acquired the company in October 2024, where Memme now serves as an AI Evangelist. Memme returns to Harvard each spring to teach legal entrepreneurship alongside co-founder Jerry Ting, and he’s a published space law scholar whose paper “Africa and the Artemis Accords” examines how emerging nations can secure their stake in the space economy.Key ReferencesAcademic ResearchAfrica and the Artemis Accords — Memme Onwudiwe & Kwame Newton, New Space (2021)Legal FrameworksArtemis Accords — Non-binding bilateral space exploration principles (2020, 55+ signatories)Outer Space Treaty — Foundational UN space law treaty (1967)Moon Agreement — “Common heritage” framework (1979, 18 signatories)OrganizationsHarvard Innovation Labs — Where Evisort was foundedCLOC — Corporate Legal Operations Consortium (6,300+ members)Space Beach Law Lab — Annual space law conference, Feb 24-26, 2026, Long BeachCorporateWorkday-Evisort Acquisition — ~$310M, closed Oct 2024If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com.
Nicole Braddick needs no introduction - but if you had to rush one for the purposes of publishing a podcast 👀 you might say she’s the Global Head of Innovation at Factor Law, following the February 2025 acquisition of her company, Theory & Principle, where she served as CEO and Founder. A former trial lawyer who transitioned into legal tech 15 years ago, Nicole has been one of the industry's most persistent advocates for bringing modern design and development practices to legal technology. Her team has worked with leading law firms, legal tech companies, corporate legal departments, non-profits and public sector organisations to build custom solutions focused on user experience - transforming an industry that, when she started, was "purely functional" and "engineering-led" into one where good design is finally recognised as essential.We get into all of that and more during our discussion, and lean in hard for Nicole’s system wide view and perspective on what’s happening at present.. Key TakeawaysNicole advocates that the calculation around build versus buy has fundamentally changed with generative AI. She argues that corporate legal departments should consider getting enterprise accounts with providers like Anthropic or OpenAI and should be building their muscles for developing internal customised solutions rather than defaulting to SaaS products. The proliferation of chatbots in law was appropriate when everyone was experimenting with generative AI, but Nicole believes the industry has overcorrected. Chat interfaces place enormous cognitive load on users who must craft effective prompts, whereas traditional point-and-click UIs make things easier by guiding users through structured workflows. Nicole sees the future as lying in hybrid experiences.While the AI industry races toward autonomous agents, Nicole sounds a cautionary note for legal applications. The entire value proposition of agents is "getting rid of control"- but lawyers have to wrestle with their ethical obligations and duties to control, to check, and to approve. Nicole sees this as a fascinating design challenge: where previous UX best practices focused on removing friction to create seamless experiences, Nicole and her team are actively considering where they must now strategically add friction and interruption points, believing the goal is to prevent lawyers from blindly clicking "yes, yes, yes" while avoiding so much friction that they abandon the tool. If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for:Focused conversations with leading practitioners, technologists, and educatorsDeep dives into the intersection of law, technology, and organisational behaviourPractical analysis and visualisation of how AI is augmenting our potential
We sit down with Stephan Breidenbach, co-founder of the Rulemapping Group and a German scholar who's been quietly revolutionising how we think about law, technology, and democratic governance since the early 2000s.What started as a teaching tool to help law students visualise complex legal reasoning has evolved into something far more ambitious: a comprehensive system for transforming laws into executable code that maintains human oversight while dramatically improving access to justice.Stephan's present work spans three critical areas: decision automation (turning legal rules into fast, transparent systems), rule-based AI (supporting human lawyers with explainable reasoning), and law as code (drafting legislation that's both human and machine-readable from day one).Some of our highlights from the conversation:The Transparency Imperative: "I would never trust an LLM with a legal process because it's confabulating" Stephan declares, highlighting why the Rulemapping approach prioritises explainable AI over black-box solutions. Their system lets human decision-makers see exactly how the AI reached its conclusions – a "zoom in, zoom out" process that mirrors how lawyers naturally think.Democracy-First Technology: Unlike Silicon Valley's "move fast and break things" mentality, Stephan advocates for keeping humans in the loop even when AI becomes more accurate: "I think it's very important for trust in the legal system and therefore in a democratic system that there are human beings, even if they make worse decisions."Access to Justice at Scale: Through real-world deployments like processing 500,000 diesel emission scandal cases and serving as Europe's first certified Digital Services Act dispute resolution body, Rulemapping demonstrates how thoughtful automation can make legal systems accessible to everyone, not just those who can afford lawyers.We also explore the behavioural risks of over-relying on automated systems, the potential for "law as code" to improve democratic participation, and Stephan's vision of embedded law that serves citizens rather than bureaucracy.If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for more of the same.
We catch up with Ben Martin, the former Director of Privacy at Trustpilot and author of "GDPR for Startups," who's currently living his best life somewhere in the Estonian wilderness with a camper van, fishing rod, and blessed freedom from subject access requests. Having built privacy programs at high-growth companies like Trustpilot, Ovo Energy, and King Digital Entertainment, Ben brings a refreshingly practical perspective to privacy law that goes way beyond compliance theatre.From his sabbatical perch in the Nordics, he reflects on everything from why GDPR hasn't quite delivered its promised outcomes to how privacy lawyers are uniquely positioned to lead AI governance.What We Cover:The Sabbatical Chronicles: Ben's epic Nordic adventure and why stepping away from work sometimes gives you the clearest perspective on itPrivacy Program Building: Moving from compliance theatre to business enablement, and why good privacy programs start with genuine curiosity about productsGDPR Reality Check: Why the regulation might not have quite yet delivered its intended outcomes and the types of privacy lawyers and approaches Ben sees in practiceAI Governance Evolution: How privacy professionals are naturally stepping into AI oversight roles and what new skills they need to developTechnical Literacy: The importance of understanding what your business actually builds and Ben's practical approach to learning complex technical conceptsKey References:GDPR for Startups - Ben's practical guide to building privacy programs in high-growth companiesField Fisher Privacy Newsletter - Legal developments summary that Ben recommends for staying currentHard Fork Podcast - Ben's go-to for broad tech and AI developmentsLovable - The AI coding platform Ben's been experimenting with to build his habit tracker (and recruit his girlfriend as user number one)If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
This week we sat down with Dan Hunter, Executive Dean of the Dickson Poon School of Law at King's College London and serial legal tech entrepreneur. Dan's journey spans academia across three continents, four successful startups (including his current venture GraceView), and decades of research on the cognitive science of legal reasoning. As both an educator training the next generation of lawyers and an entrepreneur building AI-powered legal solutions, he offers a unique dual perspective on the transformation underway across knowledge work.Key Takeaways1. The Learning Paradox: AI Makes Us Feel Smarter While Making Us DumberStudents using large language models consistently perform better on assignments and believe they're learning more - but when the AI is removed, they've retained virtually nothing. This creates a dangerous illusion of competence (sycophantic models propagate this!) that law schools and firms must address through new assessment methods and training approaches.2. We're Heading Toward a "Barbell" Legal ProfessionTraditional pyramid law firm structures will collapse as AI automates much of the work. Dan believes the future involves senior lawyers managing client relationships at the top, AI agents handling routine tasks in the middle, and "legal engineers" swarming around validating AI outputs and steering the models.3. Entry-Level Legal Jobs Are Already DisappearingWe discuss the recent Stanford research "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence" by Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, Stanford Digital Economy Lab (2025) - The landmark study using ADP payroll data showing 13% employment decline for young workers in AI-exposed occupations.Interested in more?If you found this episode interesting, please like, subscribe to the show, comment, and share! For more thought-provoking content at the intersection of law and technology, head to our Law://WhatsNext home for:Focused conversations with leading practitioners, technologists, and educatorsDeep dives into the intersection of law, technology, and organisational behaviourPractical analysis and visualisation of how AI is augmenting our potential
We have fun sitting down with Dana Rao (the former General Counsel and Chief Trust Officer at Adobe) - where we cover the implications of AI progress on: regulatory frameworks and geopolitics; copyright law; deepfakes - including content proliferation and authenticity; fair use and Dana’s take on the current class action lawsuits in the US; and Dana’s proposals for a new impressionistic right for creators to stave off the economic harms of their work being imitated. The conversation provided us with a fascinating insight into life at Adobe at the moment the performance of these generative models really began to take-off, and it was clear to us that Dana and his team played a pivotal role in shaping not only what kind of products Adobe went on to develop but how they would be distributed and consumed by their users!This episode draws on Dana's extensive experience at the intersection of technology, law and policy. Here are the key references and cases we discussed:Legal Cases:Andy Warhol Foundation for Visual Arts, Inc. v. Goldsmith, 598 U.S. 508 (2023) -  The Supreme Court case that Dana argues will have an influence in the outcome of AI fair use battles (which are focussed on economic competition between uses)Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc., No. - 1:20-CV-613-SB (D. Del. Feb. 11, 2025) - The "Westlaw case" Dana mentioned where the judge initially ruled for the AI company but changed his mind after better understanding the technologyDana's Policy Work:Senate Judiciary Committee Testimony (July 12, 2023) - Dana's appearance before the Senate Subcommittee on Intellectual Property hearing titled "Artificial Intelligence and Intellectual Property – Part II: Copyright"Adobe's Proposed Anti-Impersonation Law - Dana's legislative proposal for federal protection against AI-powered style imitationContent Authenticity Standards:Content Authenticity Initiative (CAI) - Adobe-founded initiative with over 5,000 members working to establish content provenance standardsCoalition for Content Provenance and Authenticity (C2PA) - The formal standards organization co-founded by Adobe, Microsoft, Intel, Arm, BBC, and Truepic under the Linux FoundationC2PA Implementation in Google Pixel Phones - Recent adoption of content authenticity standards in consumer devicesIf you found this episode of Law://WhatsNext interesting, please rate, subscribe, comment, and share!
Rapid dispatch: we pulled in Sigge Labor (CTO) and Jacob Johnsson (Legal Engineer) from Legora - one of the fastest-growing AI companies in the world (and one of the few with early access to GPT-5) for a chat about OpenAI's recent model release.They share what it’s already unlocking for legal reasoning, why their “battle evals” put GPT-5 ahead 80%+ of the time across a host of legal tasks, and how its new steerability could reshape the way lawyers (and the tools they use) interact with the model (including through Legora).This is part two and concludes our GPT-5 launch mini-series - snappy, unpolished, and recorded while the paint’s still wet. If you like these hot-off-the-press deep dives, tell us (and more importantly… tell the algorithm by: rating, reviewing, and telling your friends).
Emergency drop: we grabbed Jake Jones (CPO & Co-Founder, Flank) for a quick-fire reaction to OpenAI’s ChatGPT-5 launch. We cover his day-one impressions, what it means for legal products (including Flank), and the downstream implications for how legal work gets done. A short detour from our usual programming—did you enjoy this rapid-response format? If yes, please like, rate, and share to help Law://WhatsNext reach more people.
The Future Lawyer

The Future Lawyer

2025-07-2949:03

In this compelling episode of Law://WhatsNext, hosts Tom & Alex dive into the transformative shifts underway in legal education and junior lawyer development. Joined by three visionary voices - Lucie Allen (Managing Director, Barbri), Rob Elvin (Partner, Squire Patton Boggs), and Sophie Correia (Trainee Solicitor, TravelPerk) - the discussion explores provocative ideas reshaping what it means to be a lawyer.Do Lawyers Even Need to Know the Law?Sophie Correia challenges the traditional emphasis on memorisation and technical rules in legal education. Reflecting on her real-world experiences at a tech scale-up, Sophie argues that success hinges more on human skills such as communication, empathy, and trust-building, rather than recalling obscure statutes.The Flawed Incentives of Legal TrainingRob Elvin sheds light on systemic issues stemming from the billable hour model, which prioritises short-term profitability over effective mentoring. He advocates for a groundbreaking solution: linking career progression directly to the quality of trainee supervision, potentially transforming mentorship from a luxury into an essential career catalyst.The AI DisconnectLucie Allen identifies a critical gap in legal education - the absence of meaningful engagement with AI and technology. Despite these tools reshaping the profession, current frameworks like the SQE neglect to equip trainees adequately for technological realities, posing a substantial risk to their future readiness.Three Ideas to Transform Legal Education:Continuous Learning as the New Norm: Education doesn't stop at qualification. Lucie emphasises the necessity of lifelong learning, driven by relentless curiosity and adaptation to change.Human Skills Set Lawyers Apart: Sophie highlights the enduring value of human-centric capabilities—understanding people, navigating complexity, and ethical reasoning—as indispensable traits lawyers must cultivate.Systemic Change through Collective Responsibility: Rob, Lucie, and Sophie underline the importance of personal agency and collaborative effort in driving substantial reform across education, training, and regulatory frameworks.A Hopeful Path ForwardUltimately, the podcast champions a future in which tomorrow’s lawyers blend ethical judgment, technological proficiency, and interpersonal insight, prompting listeners to reconsider not whether lawyers need to know the law, but rather what precisely they need to know—and how to prepare them best for the evolving landscape.Join us for an inspiring conversation that challenges conventional wisdom and points toward an empowered, adaptable, and human-centred future for the legal profession.
loading
Comments 
loading