Trust at Scale: Nam Nguyen on How TruthSystems is Building the Framework for Safe AI in Law
Description
Artificial intelligence has moved fast, but trust has not kept pace. In this episode, Nam Nguyen, co-founder and COO of TruthSystems.ai, joins Greg Lambert and Marlene Gebauer to unpack what it means to build “trust infrastructure” for AI in law. Nguyen’s background is unusually cross-wired—linguistics, computer science, and applied AI research at Stanford Law—giving him a clear view of both the language and logic behind responsible machine reasoning. From his early work in Vietnam to collaborations at Stanford with Dr. Megan Ma, Nguyen has focused on a central question: who ensures that the systems shaping legal work remain safe, compliant, and accountable?
Nguyen explains that TruthSystems emerged from this question as a company focused on operationalizing trust, not theorizing about it. Rather than publishing white papers on AI ethics, his team builds the guardrails law firms need now. Their platform, Charter, acts as a governance layer that can monitor, restrict, and guide AI use across firm environments in real time. Whether a lawyer is drafting in ChatGPT, experimenting with CoCounsel, or testing Copilot, Charter helps firms enforce both client restrictions and internal policies before a breach or misstep occurs. It’s an attempt to turn trust from a static policy on a SharePoint site into a living, automated practice.
A core principle of Nguyen’s work is that AI should be both the subject and the infrastructure of governance. In other words, AI deserves oversight but is also uniquely suited to implement it. Because large language models excel at interpreting text and managing unstructured data, they can help detect compliance or ethical risks as they happen. TruthSystems’ vision is to make governance continuous and adaptive, embedding it directly into lawyers’ daily workflows. The aim is not to slow innovation, but to make it sustainable and auditable.
The conversation also tackles the myth of “hallucination-free” systems. Nguyen is candid about the limitations of retrieval-augmented generation, noting that both retrieval and generation introduce their own failure modes. He argues that most models have been trained to sound confident rather than be accurate, penalizing expressions of uncertainty. TruthSystems takes the opposite approach, favoring smaller, predictable models that reward contradiction-spotting and verification. His critique offers a reminder that speed and safety in AI rarely coexist by accident—they must be engineered together.
Finally, Nguyen discusses TruthSystems’ recent $4 million seed round, led by Gradient Ventures and Lightspeed, which will fund the expansion of their real-time visibility tools and firm partnerships. He envisions a future where firms treat governance not as red tape but as a differentiator, using data on AI use to assure clients and regulators alike. As he puts it, compliance will no longer be the blocker to innovation—it will be the proof of trust at scale.
Listen on mobile platforms: Apple Podcasts | Spotify | YouTube
[Special Thanks to Legal Technology Hub for sponsoring this episode.]
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca
Transcript: