Explaining Eval Engineering | Galileo's Vikram Chatterji

Update: 2025-12-19
Description

You've heard of evaluations—but eval engineering is the difference between AI that ships and AI that's stuck in prototype.

Most teams still treat evals like unit tests: write them once, check a box, move on. But when you're deploying agents that make real decisions, touch real customers, and cost real money, those one-time tests don't cut it. The companies actually shipping production AI at scale have figured out something different—they've turned evaluations into infrastructure, into IP, into the layer where domain expertise becomes executable governance.

Vikram Chatterji, CEO and Co-founder of Galileo, returns to Chain of Thought to break down eval engineering: what it is, why it is becoming a dedicated discipline, and what it takes to make it work in practice. Vikram explains why generic evals are plateauing, how continuous learning loops drive accuracy, and why he predicts "eval engineer" will become as common a role as "prompt engineer" once was.

In this conversation, Conor and Vikram explore:

  • Why treating evals as infrastructure—not checkboxes—separates production AI from prototypes
  • The plateau problem: why generic LLM-as-a-judge metrics can't break 90% accuracy
  • How continuous human feedback loops improve eval precision over time
  • The emerging "eval engineer" role and what the job actually looks like
  • Why 60-70% of AI engineers' time is already spent on evals
  • What multi-agent systems mean for the future of evaluation
  • Vikram's framework for baking trust AND control into agentic applications

Plus: Conor shares news about his move to Modular and what it means for Chain of Thought going forward.

Chapters:
00:00 – Introduction: Why Evals Are Becoming IP
01:37 – What Is Eval Engineering?
04:24 – The Eval Engineering Course for Developers
05:24 – Generic Evals Are Plateauing
08:21 – Continuous Learning and Human Feedback
11:01 – Human Feedback Loops and Eval Calibration
13:37 – The Emerging Eval Engineer Role
16:15 – What Production AI Teams Actually Spend Time On
18:52 – Customer Impact and Lessons Learned
24:28 – Multi-Agent Systems and the Future of Evals
30:27 – MCP, A2A Protocols, and Agent Authentication
33:23 – The Eval Engineer Role: Product-Minded + Technical
34:53 – Final Thoughts: Trust, Control, and What's Next

Connect with Conor Bronsdon:
Substack – https://conorbronsdon.substack.com/
LinkedIn – https://www.linkedin.com/in/conorbronsdon/
X (Twitter) – https://x.com/ConorBronsdon

Learn more about Eval Engineering: https://galileo.ai/evalengineering

Connect with Vikram Chatterji:
LinkedIn – https://www.linkedin.com/in/vikram-chatterji/
