
Benchtalks

Author: Snorkel AI


Description

Benchtalks is Snorkel AI's podcast series at the intersection of AI evaluation, data quality, and real-world impact. Hosted by the Snorkel team, each episode brings together researchers, practitioners, and leaders to dig into the questions that matter most as AI benchmarks grow more sophisticated, dynamic, and reflective of the complexity found in real-world deployments.

We explore the full stack of what it takes to build AI that actually works — from the design of rigorous, open benchmarks that close the gap between what we measure and what we encounter in production, to the expert-in-the-loop data creation and curation pipelines that make reliable evaluation possible. Along the way, we get into reinforcement learning, reward modeling, and the evolving science of data quality that underpins it all.


Whether you're building agents that operate over long horizons, crafting rubrics that go beyond pass/fail, or trying to understand what "good" looks like for a multi-artifact deliverable — this is the conversation for you.


New episodes drop regularly on YouTube and wherever you get your podcasts. Follow us @SnorkelAI (LinkedIn, X, YouTube) to stay current as the field moves fast. 

1 Episode
In this inaugural episode of Benchtalks, Snorkel AI co-founder Vincent Chen sits down with Alex Shaw, MTS at Laude Institute and co-creator of Terminal-Bench, to unpack what the rapid hill-climbing on TB2 reveals about the state of AI agent evaluation — and where the field needs to go. This interview covers:
- Why TB2 went from 20–30% during development to 75–80% at the frontier today
- The bet on the terminal as the right abstraction for general computer use
- How Harbor became a benchmark fa...