Barrchives
Author: Barr Yaron
© Barr Yaron
Description
Founders are trying to figure out how to build their companies (and cultures) around AI. But the specifics — like how much (if at all) to invest in research, how to build the right team, and how to evaluate whether their systems work as intended—are still up in the air. Join Barrchives as we speak with the founder-CEOs who are pushing the boundaries of AI, and share their decisions and their stories. Hosted by Barr Yaron, Partner at Amplify Partners.
23 Episodes
Temporal’s co-founders join Barr Yaron and Lenny Pruss to unpack how durable execution became the backbone of modern distributed apps and why it’s a perfect fit for AI agents. Samar and Max trace the path from Amazon SWF to Uber’s Cadence to founding Temporal, dig into developer experience choices, hard lessons with Cassandra, and what “code that can’t crash” really means in practice. This episode also covers open source strategy, multi-agent orchestration, Nexus RPC, how startups and enterprises are adopting Temporal, and what scaling the company taught them as leaders. If you ship backend systems, build AI agents, or care about reliability at scale, this one’s for you.

This episode is broken down into the following chapters:
00:00–00:06 — Origins: Samar and Max, Barr and Lenny, why this convo
00:06–00:14 — Early systems: Amazon SWF → Azure Durable Task; DX lessons
00:14–00:23 — Uber years: replacing Kafka, “Jeremy,” birthing Cadence, open-sourcing from day one
00:23–00:31 — Durable execution, explained; code-first over DSLs; SDK ergonomics (Go/Java)
00:31–00:36 — Hard tech war stories: Cassandra, queues on Cassandra, multi-region replication
00:36–00:45 — AI agents ≈ dynamic workflows; why Temporal fits agents and tools
00:45–00:59 — Roadmap: streaming, large payloads, data workflows, Nexus RPC for long-running calls
00:59–01:13 — Adoption & GTM: digital natives, fintech, startups; greenfield AI vs brownfield; “Temporal is overkill?”
01:13–01:22 — Case study: OpenAI Codex on Temporal; internal dogfooding
01:22–01:40 — Scaling the org vs. scaling the tech; remote shift, hiring; mission & next five years

Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
Factory co-founder and CEO Matan Grinberg joins Barr Yaron to talk about the future of agent-driven development, why enterprise migrations are the perfect wedge for AI adoption, and how software engineering is moving toward a world where humans orchestrate instead of implement. They dive into Factory’s origin story, the challenges of building AI systems for large organizations, and what the world might look like when millions of “droids” (AI agents) collaborate on software. Along the way, Matan shares surprising use cases, lessons from working with enterprises, and how his personal journey—from physics to burritos to building Factory—has shaped his leadership.

This episode is broken down into the following chapters:
00:00 – Intro and welcome
01:06 – Founding Factory: from ChatGPT experiments to AI engineers in every tab
04:05 – Early vision: autonomy for software engineering
06:14 – Why focus on the enterprise vs. indie developers
08:29 – Behavior change and technical challenges in large orgs
10:25 – Using painful migrations as a wedge for adoption
12:20 – The paradigm shift to agent-driven development
15:59 – Ubiquity: making droids available across IDEs, Slack, Jira, and more
17:16 – Why droids need the same context as human engineers
20:15 – Memory, configurability, and organizational learning
23:05 – How many droids? Specialization vs. general purpose agents
25:34 – Bespoke vs. common workflows across enterprises
27:06 – The hardest droid to build: coding itself
28:26 – Testing, costs, and scaling agentic workflows
30:29 – Why observability is essential for trustworthy agents
31:28 – Surprising use cases: PM adoption and GDPR audits
34:02 – Who Factory is building for: PMs, juniors, seniors, and beyond
36:09 – Systems thinking as the core engineering skill
38:09 – Building for enterprise trust: guardrails and governance
40:35 – What’s missing at the model layer today
42:43 – Migrations as a go-to wedge in go-to-market
43:53 – The thought experiment: what if 1M engineers collaborated?
46:07 – Scaling agent orgs: structure, monitoring, and observability
48:46 – Why everything must be recorded for droids to succeed
50:11 – Recruiting people obsessed with software development
51:37 – Burritos, routines, and how Matan has changed as a leader
53:41 – From coffee to Celsius, and why team culture matters most
54:20 – Closing thoughts: the future when agents are truly ubiquitous

Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
Datadog CEO Olivier Pomel joins Barr Yaron and Sunil Dhaliwal to discuss the evolution of observability, the role of AI inside Datadog, and how the future of software development will be shaped by agents, voice interfaces, and new approaches to monitoring. From Datadog’s origins in the cloud shift to its latest AI-driven products like Bits AI, Olivier shares how the company is building for the future while staying deeply connected to its customers’ real problems.

Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
In this episode of Barrchives, Barr Yaron sits down with Simon Eskildsen, co-founder and CEO of turbopuffer, to explore how he went from infrastructure challenges at Shopify to launching a groundbreaking vector database company. Simon shares his journey from recognizing the inefficiencies of traditional vector storage solutions to creating TurboPuffer, a revolutionary database designed specifically for AI-driven applications. He details key moments of insight—from working with startups struggling with prohibitive storage costs, to realizing the untapped potential of affordable object storage combined with modern vector indexing techniques.

This episode is broken down into the following chapters:
00:00 – Intro: Simon Eskildsen, Founder of TurboPuffer
00:26 – The “aha” moment: Simon’s transition from Shopify and startup consulting to founding TurboPuffer
03:13 – Turning “strings into things”: The power of vector search
05:51 – Why vector databases? Economic drivers and technology shifts
07:35 – Building TurboPuffer V1: Key architecture choices and early trade-offs
10:44 – Challenges of indexing: Evaluating exhaustive search, HNSW, and clustering
17:23 – Finding product-market fit with Cursor: TurboPuffer’s first major customer
20:05 – Defining TurboPuffer’s ideal customer profile and market positioning
23:43 – Gaining conviction: When Simon knew TurboPuffer would scale
25:39 – TurboPuffer V2: Architectural evolution and incremental indexing improvements
32:12 – How AI-native workloads fundamentally change database design
35:41 – Key trade-offs in TurboPuffer’s database architecture (accuracy, latency, and cost)
38:07 – Ensuring vector database accuracy: Production vs. academic benchmarks
41:03 – Deciding when TurboPuffer was ready for General Availability (GA)
42:27 – The future of vector search and storage needs for AI agents
45:03 – Building customer-centric engineering teams at TurboPuffer
47:12 – Common storage hygiene mistakes (or opportunities) in AI companies
49:42 – Simon’s personal growth as a leader since founding TurboPuffer

Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
AI won’t fix healthcare unless it starts with the conversation. In this episode, Zachary Lipton—Chief Technology & Science Officer at Abridge and Raj Reddy Associate Professor of Machine Learning at Carnegie Mellon University—joins Barr Yaron for a deep, technical, and emotional dive into how AI can truly transform clinical care. From building a world-class ambient documentation system to tackling speech recognition in 28 languages, Zack shares what it takes to engineer trust into AI when the stakes are patient lives, not just clicks.

We cover:
- Why general-purpose models fail in clinical settings
- How Abridge designs for accuracy, context, and trust
- The tension between personalization and evaluation
- Why ambient AI might be the most promising foundation for fixing healthcare

This is one of the most in-depth looks at what it actually takes to build production-grade AI in medicine.

This episode is broken down into the following chapters:
00:00 – Intro
00:34 – What Abridge actually does (hint: it’s not just notes)
01:09 – Why documentation is killing the healthcare experience
03:05 – How we got to the current burnout crisis
04:16 – The key insight: healthcare is a conversation
07:33 – Building a digital scribe: the original vision for Abridge
09:15 – Why off-the-shelf models don’t cut it in clinical speech
11:36 – 28 languages, noisy ERs, and overlapping conversations
13:20 – Predicting what enters the medical lexicon next
14:21 – How Abridge adapts models for edge-case medical speech
15:18 – Beyond transcripts: the complexity of clinical note generation
17:10 – Foundation models are tools, not solutions
18:06 – The “Ship of Theseus” strategy of model orchestration
20:32 – Style transfer for doctors, patients, and payers
20:54 – Metrics: ASR evaluation vs. documentation quality
23:43 – Stratifying ASR performance by setting, language, and jargon
24:50 – Why eval is so hard when there’s no “gold note”
25:45 – The tension between personalization and general eval
28:05 – Lessons from machine translation: building robust eval pipelines
30:32 – Abridge’s “look at the f*cking data” (LFD) internal review
33:54 – Blinded clinical eval with linked evidence and audio
36:50 – Why human fallibility is just as real as AI hallucination
38:21 – What kind of CTO Zack actually is
40:32 – Why AI product development is its own discipline
42:44 – AI innovation now lies in the product-data-model loop
44:25 – Closing the loop: how design drives modeling
45:25 – How Abridge hires researchers who care about product
47:29 – The mission filter: if you’d be equally happy at Microsoft, go
49:35 – What’s next: the AI layer for healthcare, not point solutions
52:57 – Closing thoughts

Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
What if deep learning isn’t the future of AI—but just part of it? In this episode, Ori Goshen, Co-founder and Co-CEO at AI21 Labs, shares why his team set out to build reliable, deterministic AI systems—long before ChatGPT made language models mainstream. We explore the launch of Wordtune, the development of Jamba, and the release of Maestro—AI21’s orchestration engine for enterprise agentic workflows. Ori opens up about what it takes to move beyond probabilistic systems, build trust with global enterprises, and balance research and product in one of the most competitive AI markets in the world.

If you want a masterclass in enterprise AI, model training, architecture tradeoffs, and scaling innovation out of Israel—this is it.

🔔 Subscribe for deep dives with the people shaping the future of AI.

This episode is broken down into the following chapters:
00:00 – Intro
00:47 – Why AI21 started with “deep learning is necessary but not sufficient”
02:34 – Building reliable AI systems from day one
03:46 – The risk of neural-symbolic hybrids and early bets on NLP
05:40 – Why Wordtune became the first product
08:14 – From B2C success to a pivot back into enterprise
09:43 – What AI21 learned from Wordtune for enterprise AI
11:15 – Defining “product algo fit”
12:27 – Training models before it was cool: Jurassic, Jamba, and beyond
13:38 – How to hire model-training engineers with no playbook
14:53 – Recruiting systems talent: what to look for
16:29 – How to orient your models around real enterprise needs
17:10 – Why Jamba was designed for long-context enterprise use cases
19:52 – What’s special about the Mamba + Transformer hybrid architecture
22:46 – Experimentation, ablations, and finding the right architecture
25:27 – Bringing Jamba to market: what enterprises actually care about
29:26 – The state of enterprise AI readiness in 2023 → 2025
31:41 – The biggest challenge: evaluation systems
32:10 – What most teams get wrong about evals
33:45 – Architecting reliable, non-deterministic systems
34:53 – What is Maestro and why build it now?
36:02 – Replacing “prompt and pray” with AI for AI systems
38:43 – Building interpretable and explicit agentic systems
41:09 – Balancing control and flexibility in orchestration
43:36 – What enterprise AI might actually look like in 5 years
47:03 – Why Israel is a global powerhouse for AI
49:44 – How Ori has evolved as a leader under extreme volatility
52:26 – Staying true to your mission through chaos

Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
What does it take to reimagine the browser—one of the most commoditized technologies in the world—for the enterprise? In this episode, Ofer Ben Noon, founder of Talon and now part of Palo Alto Networks, shares the wild journey from exploring digital health to building the world’s first enterprise-grade secure browser.

We dig into:
- Why the browser became the new security perimeter
- How Talon raised a $26M seed and scaled fast
- What it takes to compile Chromium daily (and why it’s so hard)
- Why Precision AI is essential to secure AI usage in the enterprise
- And how generative AI, SaaS sprawl, and autonomous agents are reshaping enterprise risk in real time

If you care about AI x cybersecurity, endpoint security, or enterprise infrastructure—this is a deep, real, and tactical look behind the curtain.

This episode is broken down into the following chapters:
00:00 – Intro
01:05 – Why Ofer originally wanted to build in digital health
02:15 – The pandemic shift to SaaS, hybrid work, and browser-first
04:44 – Why Chromium was the perfect technical unlock
05:27 – The insane complexity of compiling Chromium
07:10 – What makes an enterprise browser different from a consumer browser
09:36 – Browser isolation, web security, and file security
10:50 – Why Talon needed a massive seed round from day one
11:53 – What an MVP looked like for Talon
14:08 – Early skepticism from CISOs and how Talon earned trust
16:50 – Discovering new enterprise use cases over time
17:11 – How AI and Precision AI power Talon’s security engine
19:21 – Why Ofer chose to sell to Palo Alto Networks
21:06 – Petabytes of data, 30B+ attacks blocked daily
23:44 – The risks of LLMs and generative AI in the browser
24:24 – What Talon sees when users interact with AI tools
25:05 – The #1 risk: privacy and user error
26:43 – Why AI use must be governed like any other SaaS
27:22 – How Talon built secure enterprise access to ChatGPT
28:05 – Mapping 1,000+ GenAI tools and classifying risk
29:43 – Real-time blocking, DLP, and prompt visibility
31:25 – Why user mistakes are accelerating in the age of agents
32:04 – How autonomous AI agents amplify risk across the enterprise
33:55 – The browser as the new control layer for users and AI
36:57 – What AI is unlocking in cybersecurity orgs
39:36 – Why data volume will determine which security companies win
40:28 – Ofer’s leadership philosophy and staying grounded post-acquisition
42:40 – Closing reflections

Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
Vanta helped create the automated security and compliance category—and now, they’re redefining it with AI. In this episode, Christina Cacioppo (CEO & Co-Founder) and Iccha Sethi (VP of Engineering) join Barr Yaron to go deep on how AI is transforming the way Vanta builds products, evaluates models, and helps companies earn and demonstrate trust.

They cover:
- Why compliance is the perfect playground for AI
- How Vanta balances reliability, explainability, and scale
- What it takes to build golden datasets in high-stakes domains
- The real-world AI infrastructure behind Vanta AI

If you care about real AI product development—not just hype—this is a masterclass in doing it right.

🔔 Subscribe for more deep dives with leading AI builders and thinkers.

This episode is broken down into the following chapters:
00:00 – Intro
01:06 – Christina’s early entrepreneurial roots (Beanie Babies & all)
02:51 – From venture to founder: why Christina started Vanta
04:00 – What Vanta actually does
05:32 – Iccha on why she joined as VP of Engineering
07:09 – When Vanta started leaning into AI
08:33 – AI’s growing role in Vanta’s product roadmap
09:52 – How AI powers questionnaire automation
12:25 – Using LLMs to map policy docs to cloud configs
13:27 – Building trust: human-in-the-loop and explainability
16:03 – Vanta’s evaluation system for AI features
18:17 – How golden datasets are constructed (and maintained)
20:59 – Feedback loops: online eval from user behavior
22:43 – How model feedback informs product updates
23:38 – What Vanta wants from foundation models (but isn’t getting yet)
24:32 – Retrieval: how Vanta processes customer documents
27:13 – The hardest technical challenges in AI integration
29:41 – Internal adoption: how non-technical teams are using AI too
31:52 – Vanta’s centralized AI team & how other teams plug in
33:27 – Internal education: building AI intuition org-wide
34:31 – From prototype to production: experimentation culture
36:41 – Customer sentiment around AI in compliance workflows
38:22 – Enterprise buyers & the AI “kill switch”
39:06 – Personalized experiences as the future of trust
40:21 – How enterprises are approaching AI risk assessments
41:50 – What excites Iccha and Christina about the future of AI at Vanta

Subscribe to the Barrchives newsletter: https://www.barrchives.com
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
What if AI could talk back instantly—and naturally? In this episode, Karan Goel, Co-founder & CEO of Cartesia, joins Barr Yaron to unpack the future of voice AI, state space models (SSMs), and why audio is the next frontier in AI. Karan shares the founding story behind Cartesia, explains how alternate architectures like Mamba enable ultra-efficient, low-latency inference, and walks through how his team is building the fastest text-to-speech model in the world—while obsessing over every millisecond.

Whether you’re into model architectures, AI infrastructure, or the future of voice interfaces, this episode delivers technical depth, startup lessons, and a roadmap for what’s coming next.

This episode is broken down into the following chapters:
00:00 – Intro
01:06 – Karan’s journey from CMU PhD to startup founder
03:56 – Why Cartesia is built around state space models
06:49 – What makes SSMs different from transformers
09:14 – Why compression matters for long-running AI systems
11:13 – What data types SSMs are best (and worst) for
13:39 – Scaling SSMs: What’s possible and what’s missing
15:31 – Hardware, GPUs & why SSMs work well on existing infra
18:46 – Landing on audio: Cartesia’s first core modality
22:38 – Navigating the model vs. market debate in AI startups
26:36 – How Cartesia built Sonic, their ultra-low latency TTS model
28:17 – Why latency is the #1 challenge in voice AI
30:46 – Tricks vs. model-first thinking: Baking it into the model
34:01 – How Cartesia balances fast execution with deep research
36:26 – Building with part-time academic co-founders
38:13 – Yes, every employee gets a personal Yoshi
40:02 – Where voice AI is being adopted first (telephony + beyond)
42:24 – Multilingual modeling & the long tail of language
45:02 – Voice as a new computing interface
46:26 – Why voice notes are the future (and Barr’s hot take)
49:56 – How Cartesia evaluates its models
52:44 – How Karan has grown as a founder and leader

Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
AI is changing the game for marketing, but how do businesses truly integrate AI into their workflows? In this episode, Kashish Gupta, Co-founder & Co-CEO of Hightouch, joins Barr Yaron to break down the rise of Composable CDPs, the role of AI decisioning, and the future of automated marketing.

They explore:
- How Hightouch pivoted into the Composable CDP space
- Why marketers struggle with data democratization and AI adoption
- How reinforcement learning (RL) agents are optimizing marketing campaigns
- What AI can (and can’t) replace in modern marketing teams
- The biggest technical challenges in building AI decisioning systems

If you’re a marketer, data leader, or AI enthusiast, this episode is packed with insights into the future of AI-powered marketing.

This episode is broken down into the following chapters:
00:00 – Welcome & Introduction
01:09 – The Hightouch founding story & early pivots
02:28 – Why Composable CDPs are the future of data-driven marketing
03:42 – Overcoming market education challenges
05:08 – The manual way of marketing vs. AI-powered decisioning
09:21 – How AI decisioning works inside Hightouch
12:24 – Why reinforcement learning (RL) is the right AI model for marketing
13:45 – How Hightouch sets up reward functions for AI agents
16:04 – How much data do you actually need for AI-driven marketing?
17:27 – The biggest misconceptions about AI and marketing
21:00 – Building an AI team at Hightouch & hiring strategy
24:27 – The biggest technical challenges of AI decisioning
27:10 – How customers interact with AI-driven marketing models
29:40 – What’s next for AI decisioning & autonomous marketing agents?
34:26 – The future of AI & CDPs – where is the industry headed?
37:42 – What remains uniquely human in marketing?
41:23 – Advice for marketers considering AI adoption
44:16 – The future of AI agents working together

Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
This episode consists of the following chapters:
00:00 - Introduction to Jesse Zhang and Decagon
02:33 - Why customer support emerged as a clear use case for AI
05:00 - The importance of discovery and understanding customer value
08:20 - The Decagon product architecture: core AI agent, routing, and human assistance
11:01 - How enterprise logic is integrated into the AI agent
15:45 - Shared frameworks across different customers and industries
17:12 - How AI agents are changing organizational planning
19:59 - Automatically identifying knowledge gaps to improve resolution rates
22:57 - Handling routing across different modalities (text and voice)
26:09 - The continued importance of humans in customer support
30:17 - The evolving role of human agents: supervising, QA, and logic building
36:57 - Value-based pricing tied to the work AI performs
39:17 - How sophisticated buyers evaluate AI customer support solutions

Subscribe to the Barrchives newsletter: www.barrchives.com
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
Regal co-founders Alex Levin and Rebecca Greene see a future where AI doesn't just assist human agents in contact centers - it replaces them entirely at the front lines. In this episode of Barrchives, we discuss how they're building voice AI technology, the unique challenges of working with audio models, and what the future of customer service looks like.
Most of what you hear about AI right now is in text, but Mikey Shulman (co-founder and CEO of Suno) would tell you that audio is a much more interesting medium to work with. How do you use AI to generate music? What makes audio data uniquely difficult to parse? And how do you build audio models that cater to unique, subjective human preferences on music? Suno is building a future where anyone can make great music. In this episode of Barrchives, I sat down with Mikey (who, like his co-founder, is a musician) to talk about how they do what they do, from why they chose a transformer-based architecture to how they test new models when outputs are so subjective.
AI and data teams, in a sense, do the same thing: make decisions based on data. So how do you build AI that *helps* data teams do their best work?

Hex was one of the first companies in its space to embrace language models and build code generation features into its data workspace. In this episode of Barrchives, I went deep with Hex’s co-founder and CEO, Barry McCardel, about Hex’s journey toward becoming an AI company.
When it comes to building cutting-edge technology companies, Erik Bernhardsson's journey with Modal exemplifies the mix of passion, technical ingenuity, and adaptability required to succeed. From tackling infrastructure challenges in the data and AI space to navigating the nuanced dynamics of scaling a business, Modal’s story provides a masterclass in modern tech entrepreneurship.
Building a company isn’t for the faint of heart. It’s messy, nonlinear, and full of surprises. Jennifer Smith, CEO of Scribe, knows this better than most. In a recent episode of Barrchives, she shared the hard-earned lessons that helped her navigate the twists and turns of finding product-market fit.
From trusting her instincts when others doubted her to rethinking how humans and AI can work together, Jennifer’s journey offers powerful insights for anyone trying to build something transformative.
From his early work in physics and mathematical simulation to leading Vision Pro development at Apple, Amit Jain has consistently pushed the boundaries of how computers process and understand our world. Now as the founder and CEO of Luma AI, he's tackling an even bigger challenge: building what he calls a "universal imagination engine."
In this episode of Barrchives, Amit shares his perspective on why unified AI models outperform specialized ones, how visual data transforms AI reasoning, and what it takes to build infrastructure capable of training on millions of videos. But underlying these technical insights is a broader vision about technology's role in human progress.
Naveen Rao discovered artificial intelligence through science fiction in middle school, devouring the works of Asimov while his peers in Kentucky were doing whatever kids in Kentucky normally did. Growing up in a family of doctors, he chose a less conventional path, following his early interests in programming and circuit building.
After establishing himself as an engineer, he stepped away to pursue a neuroscience PhD at Brown, finishing in record time. This combination of engineering expertise and biological understanding shaped his approach to AI hardware development at his companies Nervana (acquired by Intel) and MosaicML (acquired by Databricks).
In this conversation, the VP of AI at Databricks breaks down what's actually needed for meaningful progress in machine reasoning (hint: it's not just bigger models), and why deep tech development needs a different playbook than what we're used to.
Tristan Handy’s decision to found dbt Labs meant stepping away from steady consulting revenue to build a product. With support from an open-source community and interest from enterprise clients, dbt quickly found a growing user base. By 2019, it had become a go-to tool for data professionals looking to streamline workflows and prepare data for AI. Today, dbt Labs serves over 50,000 teams globally, with dbt Cloud helping organizations tackle modern analytics and AI needs through data transformation, observability, and orchestration.
In this episode of Barrchives, Tristan discusses how dbt is adapting to support the evolving demands of today’s data ecosystems. He shares insights on how data teams can move beyond manual, repetitive tasks to create environments where data becomes a valuable, collaborative asset. From AI’s potential as an analytical “thought partner” to the emerging standards reshaping data access, Tristan explores the shifts making data infrastructure more adaptable and effective.
Cristóbal Valenzuela, CEO and co-founder of Runway, is using AI to open up new pathways of expressing creativity.
Cristóbal founded Runway with the idea that AI could expand the boundaries of what is possible in storytelling. This vision transformed the company from a model marketplace into a unique toolkit for artists, filmmakers, and creators that brings their visions to life.




