The Neuron: AI Explained
Author: The Neuron
© The Neuron
Description
The Neuron covers the latest AI developments, trends and research, hosted by Grant Harvey and Corey Noles. Digestible, informative and authoritative takes on AI that get you up to speed and help you become an authority in your own circles. Available every Tuesday on all podcasting platforms and YouTube.
Subscribe to our newsletter: https://www.theneurondaily.com/subscribe
75 Episodes
Steve Brown's house burned down in a wildfire—and accidentally saved his life. When doctors missed his aggressive blood cancer for over a year, Steve built a swarm of AI agents that diagnosed it in minutes and helped design his treatment. Now he's turning that breakthrough into CureWise, a precision oncology platform helping cancer patients become better advocates. We explore agentic medicine, AI safety in healthcare, and how swarms of specialized AI agents are changing cancer care from diagnosis to treatment selection.
🔗 Get on the CureWise waitlist: https://curewise.com
📧 Subscribe to The Neuron newsletter: https://theneuron.ai
Most businesses don't buy their AI services directly from OpenAI or Google—they buy them through a massive, invisible distribution network called "the channel." Victoria Durgin and Katie Bavoso of Channel Insider join Corey and Grant to explain how this hidden industry works, why AI is shaking it up unlike anything before, and what it means for businesses trying to adopt AI in 2026.
Subscribe to The Neuron newsletter: https://theneuron.ai
Channel Insider: https://channelinsider.com
Nick Heiner leads RL environment development at Surge AI, the bootstrapped company that hit $1.2B in revenue training models for OpenAI, Anthropic, Meta, and Google. In this episode, we break down reinforcement learning environments—the secret training grounds where AI agents learn to actually do work. Nick shares why even the best models fail 40% of real workplace tasks, what happened when 200 Wall Street experts graded GPT-5 and Claude, and his prediction that a $1B company with one human employee could exist by 2030.
A special thank you to our sponsor for this video: Dell AI Factory with NVIDIA. Learn more at https://dell.com/yourwaytoai
Resources:
• Surge AI research – Hierarchy of Agentic Capabilities: https://arxiv.org/abs/2601.09032
• Surge AI blog: https://surgehq.ai/blog
• Nick's Sonnet 4.5 review: https://surgehq.ai/blog/sonnet-4-5-product-take
• Nick's Substack: https://nickheiner.substack.com/
• SurgeHQ's EnterpriseBench: https://surgehq.ai/blog/enterprisebench-corecraft
• Nick's hilarious Gemini 3.1 review: https://nickheiner.substack.com/p/gemini-31-pro-not-leading-edge-also
• Hemingway-bench AI writing leaderboard: https://surgehq.ai/blog/hemingway-bench-ai-writing-leaderboard
• LMArena is a cancer on AI: https://surgehq.ai/blog/lmarena-is-a-plague-on-ai
• Dell AI Factory with NVIDIA: https://dell.com/yourwaytoai
Subscribe to The Neuron newsletter: https://theneuron.ai
Proton—the company behind the world's largest encrypted email service, with 100M+ users—just launched Lumo, a privacy-first AI assistant. We sit down with Eamonn Maguire, who leads Proton's ML team and built Lumo from the ground up. Eamonn has a PhD from Oxford and a postdoc at CERN, and he breaks down how Lumo's encryption actually works, why Big Tech's business model prevents them from building private AI, the real privacy threats hiding inside viral AI trends like Ghibli-fication, and whether AI agents are safe to connect to your bank account. Listeners will learn how encrypted AI handles your data differently, what open-source models power Lumo, and why "set-and-forget" agents are still more hype than reality.
🔗 Links & resources:
Lumo by Proton: https://lumo.proton.me
Proton: https://proton.me
Lumo 1.3 (Projects): https://proton.me/blog/lumo-1-3
Lumo for Business: https://proton.me/blog/lumo-business
Proton Sheets: https://proton.me/blog/sheets-proton-drive
CLI for Proton Pass: https://proton.me/blog/proton-pass-cli
Reserve your child's email: https://proton.me/mail/born-private/email
Subscribe to The Neuron newsletter: https://theneuron.ai
Carta CMO Nicole Baer joins Corey and Grant to break down the real state of startups in 2026. With half of all venture funding now flowing to AI-native companies and seed deals at a six-year low, the startup playbook has fundamentally changed. Nicole shares Carta's data on solo founders, the new billion-dollar timeline, why the Bay Area's grip is tighter than ever, and how AI is reshaping everything from marketing to fund administration.
Carta State of Startups 2025 report: https://carta.com/blog/state-of-startups-2025/
Carta Data & Insights (free): https://carta.com/data/
Subscribe to The Neuron newsletter: https://theneuron.ai
Recorded live at NVIDIA GTC 2026 in San Jose, Corey sits down with returning guest Kari Briski—VP of Generative AI Software for Enterprise at NVIDIA—to unpack their biggest open-source model yet: Nemotron 3 Super. Kari breaks down why a 120B-parameter model runs as fast as a 12B one, how multi-agent systems are going from science fiction to production, and why Jensen Huang is calling this "a new operating system." We also dig into NVIDIA's work on Open Claw security, the 35x explosion in open-model token generation, and where omni-modal AI is heading next.
Subscribe to The Neuron newsletter: https://theneuron.ai
Relevant links:
NVIDIA Build (try Nemotron): https://build.nvidia.com
Nemotron on Hugging Face: https://huggingface.co/nvidia
OpenRouter: https://openrouter.ai
Kari's previous Neuron episode (Oct 2025): https://youtu.be/p0INn_w7TYo
Scientific discovery has always been slow. Until now.
In this episode, we sit down with Dr. Qichao Hu, CEO of SES AI, to reveal how his team is using AI agents to turn an 8-year research cycle into a 2-week sprint. By combining autonomous "wet labs" with advanced AI models, they are solving one of the hardest physics problems in tech: the battery bottleneck.
We dive deep into how the "Molecular Universe" project isn't just about EV batteries—it's about unlocking power for data centers, robotics, and AR glasses. If you want to see a concrete example of AI agents working in the physical world to solve material science constraints, don't miss this conversation.
🔗 Learn more about SES AI: https://www.ses.ai/
🔗 Follow the Molecular Universe project: https://molecular-universe.com/about
Subscribe for more interviews with the people building AI's next wave.
For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
In this episode, we sit down with Yaron Inger, co-founder of Lightricks and LTX, to explore the future of open-source AI video.
LTX-2 is currently the #1-ranked open-source audio & video model on Hugging Face, with over 4.5 million downloads in just two months. But what makes it different? It runs locally. It can be fine-tuned on your own IP. It integrates into real video workflows. And it might change how filmmaking, education, and creative work evolve in the AI era.
We talk about:
• Why open models are catching up to Big Tech
• How smaller models are getting better through distillation
• Running AI video on consumer GPUs
• Infinite, autoregressive video generation
• AI teachers that change environments in real time
• Whether AI will replace filmmakers — or empower them
If you care about the future of creativity, open AI, or the economics of filmmaking, this one is worth your time.
Check out LTX: https://ltx.io
LTX-2 on Hugging Face: https://huggingface.co/Lightricks/LTX-2.3
LTX Desktop repo: https://github.com/Lightricks/LTX-Desk
For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
You've probably used Canva—but you probably haven't seen what it can do with AI. In this episode of The Neuron, we sit down with Danny Wu, Head of AI Products at Canva, to explore how the platform went from a simple design tool to a full-blown "Creative Operating System" powered by AI—serving 230+ million users every month.
Danny walks us through how Canva's MCP server lets you create fully editable designs from inside ChatGPT, Claude, and Microsoft Copilot, why their new Canva Design Model is fundamentally different from typical AI image generators (hint: layers), and why, 24 billion AI tool uses later, the most surprising use cases are ones they never anticipated.
We also get Danny's take on whether AI will homogenize all design, his advice for freelancers who don't want to get replaced, and a live demo of Canva's AI design generation in action.
You'll learn:
• How MCP powers Canva inside ChatGPT, Claude, and Copilot
• What the Canva Design Model understands that GPT-4 doesn't
• Why editable layers (not flat images) are the real AI design breakthrough
• Danny's advice for freelancers to become irreplaceable in an AI world
• How Canva uses AI internally on tens of millions of lines of code
• Why AI assistants are becoming "the new SEO" for user acquisition
Try Canva AI at https://canva.com/ai
Special thanks to the sponsor of this video, Cohesity: https://www.cohesity.com/ResilienceEverywhere/?utm_source=brand-ta-podcast&utm_medium=direct-publisher&utm_campaign=fy26-q2-01-amer-us-digital-awarewbpg-brd-genbr&utm_content=podcast
For more practical, grounded conversations on AI and emerging tech, subscribe to The Neuron newsletter at https://theneuron.ai.
AI data centers are going to double their power consumption by 2030—so where's all that energy coming from? One answer is fusion, the same process that powers the sun.
In this episode of The Neuron, we're joined by Brandon Sorbom, Chief Science Officer and Co-founder of Commonwealth Fusion Systems, to explore how his company is racing to build the world's first commercial fusion power plant—and how AI is helping them get there faster.
Brandon explains why fusion has been "30 years away" for decades, what changed with high-temperature superconducting magnets, and why fusion is fundamentally safer than fission (hint: fusion is "default off"). We dive into CFS's collaborations with Google DeepMind and NVIDIA, what it takes to wrangle 10,000 unique parts, and when we might actually see fusion on the grid.
You'll learn:
• What fusion actually is (and why it's not nuclear fission)
• Why high-temperature superconducting magnets changed everything
• How AI is accelerating plasma control and simulation
• The safety profile that makes fusion regulated like an MRI, not a reactor
• When CFS expects to hit Q > 1 (net energy) and beyond
To learn more about Commonwealth Fusion Systems, visit https://cfs.energy.
For more practical, grounded conversations on AI and emerging tech, subscribe to The Neuron newsletter at https://theneuron.ai.
Diffusion models changed how we generate images and video—now they're coming for text.
In this episode, we sit down with Stefano Ermon, Stanford computer science professor and founder of Inception Labs, to unpack how diffusion works for language, why it can generate in parallel (instead of token-by-token), and what that means for latency, cost, and real-time AI products.
We talk through:
• The simplest mental model for diffusion: generate a full draft, then refine it by "fixing mistakes"
• Why today's autoregressive LLM inference is often memory-bound—and why diffusion can shift it toward a more GPU-friendly compute profile
• Where Mercury wins today (IDEs, voice/real-time agents, customer support, EdTech—anywhere humans can't wait)
• What changes (and what doesn't) for long context and architecture choices
• The real-world way to evaluate models in production: offline evals + the gold-standard A/B test
Stefano also shares what's next on Mercury's roadmap—especially around stronger planning and reasoning for agentic use cases.
Try Mercury + learn more: inceptionlabs.ai
For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
Customer service is one of the industries most impacted by AI — but what if AI alone isn't the answer?
In this episode of The Neuron Podcast, Grant Harvey and Corey Noles sit down with Matt Price, Founder & CEO of Crescendo, to explore how AI and humans working together can outperform automation alone. After spending 13+ years at Zendesk, Matt is now building an AI-native customer experience platform that automates up to 90% of tickets with 99.8% accuracy — without sacrificing empathy, trust, or outcomes.
We cover:
• Why LLMs are the biggest shift in customer service since the telephone
• Why bolting AI onto old CX workflows fails
• How Crescendo's multimodal AI can chat, talk, see images, and control devices in one conversation
• Real-world examples (like smart sprinkler troubleshooting via voice + vision + APIs)
• Why Crescendo combines AI agents with forward-deployed human experts
• How outcome-based pricing aligns incentives around real customer satisfaction
• How AI is reshaping (not eliminating) customer service jobs
• Why "deflection" is the wrong mindset for CX — and what replaces it
• What customer support roles look like in an AI-native future
This is a deep dive into the next generation of customer experience, where AI handles scale and speed — and humans deliver judgment, empathy, and innovation.
Subscribe for weekly conversations with the builders shaping the future of AI and work.
Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
Taylor Mullen, Principal Engineer at Google and creator of Gemini CLI, reveals how his team ships 100-150 features and bug fixes every week—using Gemini CLI to build itself. In this first in-depth interview about Gemini CLI's origin story, we explore why command-line AI agents are having a "terminal renaissance," how Taylor manages swarms of parallel AI agents, and the techniques (like the viral "Ralph Wiggum" method) that separate 10x engineers from 100x engineers. Whether you're a developer or AI-curious, you'll learn practical strategies for using AI coding tools more effectively.
🔗 Links:
• Gemini CLI: https://geminicli.com
• GitHub: https://github.com/google-gemini/gemini-cli
• Subscribe to The Neuron newsletter: https://theneuron.ai
Modern AI has been dominated by one idea: predict the next token. But what if intelligence doesn't have to work that way?
In this episode of The Neuron, we're joined by Eve Bodnia, Founder and CEO of Logical Intelligence, to explore energy-based models (EBMs)—a radically different approach to AI reasoning that doesn't rely on language, tokens, or next-word prediction.
With a background in theoretical physics and quantum information, Eve explains how EBMs operate over an energy landscape, allowing models to reason about many possible solutions at once rather than guessing sequentially. We discuss why this matters for tasks like spatial reasoning, planning, robotics, and safety-critical systems—and where large language models begin to show their limits.
You'll learn:
• What energy-based models are (in plain English)
• Why token-free architectures change how AI reasons
• How EBMs reduce hallucinations through constraints and verification
• Why EBMs and LLMs may work best together, not in competition
• What this approach reveals about the future of AI systems
To learn more about Eve's work, visit https://logicalintelligence.com.
For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
In this special episode, we go hands-on with three cutting-edge AI tools from Google Labs. First, Jaclyn Konzelman (Director of Product Management) demos Mixboard, an AI-powered concepting board that transforms ideas into visual presentations using Nano Banana Pro. Then, Thomas Iljic (Senior Director of Product Management) shows us Flow, Google's AI filmmaking tool that lets you create, edit, and animate video clips with unprecedented control. Finally, Megan Li (Senior Product Manager) walks us through Opal, a no-code AI app builder that lets anyone create custom AI workflows and mini-apps using natural language.
Subscribe to The Neuron newsletter: https://theneuron.ai
Links:
Mixboard: https://mixboard.google.com
Flow: https://flow.google
Opal: https://opal.google
Google Labs: https://labs.google
Autonomous coding agents are moving from demos to real production workflows. In this episode, Factory AI co-founder and CTO Eno Reyes explains what "Droids" really are—fully autonomous agents that can take tickets, modify real codebases, run tests, and work inside existing dev workflows.
We dig into Factory's context compression research (which outperformed both OpenAI and Anthropic), what makes a codebase "agent-ready," and why Stanford research found that the only predictor of AI success was codebase quality—not adoption rates or token usage.
Whether you're a developer curious about autonomous coding tools or just want to understand where AI engineering is headed, this episode is packed with practical insights.
🔗 Try Factory AI: https://factory.ai
📰 Subscribe to The Neuron newsletter: https://theneuron.ai
📖 Resources mentioned:
• Factory's compression research: https://factory.ai/news/evaluating-compression
AI reasoning models don't just give answers — they plan, deliberate, and sometimes try to cheat.
In this episode of The Neuron, we're joined by Bowen Baker, Research Scientist at OpenAI, to explore whether we can monitor AI reasoning before things go wrong — and why that transparency may not last forever.
Bowen walks us through real examples of AI reward hacking, explains why monitoring chain-of-thought is often more effective than checking outputs, and introduces the idea of a "monitorability tax" — trading raw performance for safety and transparency.
We also cover:
• Why smaller models thinking longer can be safer than bigger models
• How AI systems learn to hide misbehavior
• Why suppressing "bad thoughts" can backfire
• The limits of chain-of-thought monitoring
• Bowen's personal view on open-source AI and safety risks
If you care about how AI actually works — and what could go wrong — this conversation is essential.
Resources:
• Evaluating chain-of-thought monitorability | OpenAI: https://openai.com/index/evaluating-chain-of-thought-monitorability/
• Understanding neural networks through sparse circuits | OpenAI: https://openai.com/index/understanding-neural-networks-through-sparse-circuits/
• OpenAI's alignment blog: https://alignment.openai.com/
👉 Subscribe for more interviews with the people building AI
👉 Join the newsletter at https://theneuron.ai
Everyone is rushing to build AI agents — but most companies are setting themselves up for failure.
In this episode of The Neuron, Darin Patterson, VP of Market Strategy at Make, explains why agentic AI only works if your automation foundation is solid first. We break down when to use deterministic workflows vs. AI agents, how to avoid fragile automation sprawl, and why visibility into your entire automation landscape is now mission-critical.
You'll see real examples of building agents in Make, how the Model Context Protocol (MCP) fits into modern workflows, and why orchestration — not hype — is the real unlock for scaling AI safely inside organizations.
Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
IBM just released Granite 4.0, a new family of open language models designed to be fast, memory-efficient, and enterprise-ready — and it represents a very different philosophy from today's frontier AI race.
In this episode of The Neuron, IBM Research's David Cox joins us to unpack why IBM treats AI models as tools rather than entities, how hybrid architectures dramatically reduce memory and cost, and why openness, transparency, and external audits matter more than ever for real-world deployment.
We dive into long-context efficiency, agent safety, LoRA adapters, on-device AI, voice interfaces, and why the future of AI may look a lot more boring — in the best possible way.
If you're building AI systems for production, agents, or enterprise workflows, this conversation is required listening.
Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
Imagine an AI that doesn't just output answers — it remembers, adapts, and reasons over time like a living system. In this episode of The Neuron, Corey Noles and Grant Harvey sit down with Zuzanna Stamirowska, CEO & Cofounder of Pathway, to break down the world's first post-Transformer frontier model: BDH — the Dragon Hatchling architecture.
Zuzanna explains why current language models are stuck in a "Groundhog Day" loop — waking up with no memory — and how Pathway's architecture introduces true temporal reasoning and continual learning. We explore:
• Why Transformers lack real memory and time awareness
• How BDH uses brain-like neurons, synapses, and emergent structure
• How models can "get bored," adapt, and strengthen connections
• Why Pathway sees reasoning — not language — as the core of intelligence
• How BDH enables infinite context, live learning, and interpretability
• Why gluing two trained models together actually works in BDH
• The path to AGI through generalization, not scaling
• Real-world early adopters (Formula 1, NATO, French Postal Service)
• Safety, reversibility, checkpointing, and building predictable behavior
• Why this architecture could power the next era of scientific innovation
From brain-inspired message passing to emergent neural structures that literally appear during training, this is one of the most ambitious rethinks of AI architecture since Transformers themselves.
If you want a window into what comes after LLMs, this interview is essential.
Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
A bit disappointing, this episode; the title is not representative of what they talk about. They simply narrate building a game using vibe coding. They don't talk about how to use AI to learn to code, other than telling people to pay for a subscription because the model will be better. Hm.
Musk left OpenAI because Tesla was creating its own AI and HE didn't want to cause a conflict of interest? According to the board, founders, and others close to OpenAI, Musk wanted to become CEO and roll the company into Tesla, which no one else wanted. Seeing that Musk has pulled similar moves in the past (how many people know the names of the two actual founders of Tesla, whom Musk fired after replacing the former CEO with himself?), I don't blame them. Truth is important, and relative.