The Neuron: AI Explained


Author: The Neuron


Description

The Neuron covers the latest AI developments, trends and research, hosted by Grant Harvey and Corey Noles. Digestible, informative and authoritative takes on AI that get you up to speed and help you become an authority in your own circles. Available every Tuesday on all podcasting platforms and YouTube.

Subscribe to our newsletter: https://www.theneurondaily.com/subscribe
65 Episodes
Steve Brown's house burned down in a wildfire—and accidentally saved his life. When doctors missed his aggressive blood cancer for over a year, Steve built a swarm of AI agents that diagnosed it in minutes and helped design his treatment. Now he's turning that breakthrough into CureWise, a precision oncology platform helping cancer patients become better advocates. We explore agentic medicine, AI safety in healthcare, and how swarms of specialized AI agents are changing cancer care from diagnosis to treatment selection.

🔗 Get on the CureWise waitlist: https://curewise.com
📧 Subscribe to The Neuron newsletter: https://theneuron.ai
AI data centers are going to double their power consumption by 2030—so where's all that energy coming from? One answer is fusion, the same process that powers the sun.

In this episode of The Neuron, we're joined by Brandon Sorbom, Chief Science Officer and Co-founder of Commonwealth Fusion Systems, to explore how his company is racing to build the world's first commercial fusion power plant—and how AI is helping them get there faster.

Brandon explains why fusion has been "30 years away" for decades, what changed with high-temperature superconducting magnets, and why fusion is fundamentally safer than fission (hint: fusion is "default off"). We dive into CFS's collaborations with Google DeepMind and NVIDIA, what it takes to wrangle 10,000 unique parts, and when we might actually see fusion on the grid.

You'll learn:
• What fusion actually is (and why it's not nuclear fission)
• Why high-temperature superconducting magnets changed everything
• How AI is accelerating plasma control and simulation
• The safety profile that makes fusion regulated like an MRI, not a reactor
• When CFS expects to hit Q > 1 (net energy) and beyond

To learn more about Commonwealth Fusion Systems, visit https://cfs.energy.

For more practical, grounded conversations on AI and emerging tech, subscribe to The Neuron newsletter at https://theneuron.ai
Diffusion models changed how we generate images and video—now they're coming for text.

In this episode, we sit down with Stefano Ermon, Stanford computer science professor and founder of Inception Labs, to unpack how diffusion works for language, why it can generate in parallel (instead of token-by-token), and what that means for latency, cost, and real-time AI products.

We talk through:
• The simplest mental model for diffusion: generate a full draft, then refine it by "fixing mistakes"
• Why today's autoregressive LLM inference is often memory-bound—and why diffusion can shift it toward a more GPU-friendly compute profile
• Where Mercury wins today (IDEs, voice/real-time agents, customer support, EdTech—anywhere humans can't wait)
• What changes (and what doesn't) for long context and architecture choices
• The real-world way to evaluate models in production: offline evals + the gold-standard A/B test

Stefano also shares what's next on Mercury's roadmap—especially around stronger planning and reasoning for agentic use cases.

Try Mercury + learn more: inceptionlabs.ai

For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
Customer service is one of the industries most impacted by AI — but what if AI alone isn't the answer?

In this episode of The Neuron Podcast, Grant Harvey and Corey Noles sit down with Matt Price, Founder & CEO of Crescendo, to explore how AI and humans working together can outperform automation alone. After spending 13+ years at Zendesk, Matt is now building an AI-native customer experience platform that automates up to 90% of tickets with 99.8% accuracy — without sacrificing empathy, trust, or outcomes.

We cover:
• Why LLMs are the biggest shift in customer service since the telephone
• Why bolting AI onto old CX workflows fails
• How Crescendo's multimodal AI can chat, talk, see images, and control devices in one conversation
• Real-world examples (like smart sprinkler troubleshooting via voice + vision + APIs)
• Why Crescendo combines AI agents with forward-deployed human experts
• How outcome-based pricing aligns incentives around real customer satisfaction
• How AI is reshaping (not eliminating) customer service jobs
• Why "deflection" is the wrong mindset for CX — and what replaces it
• What customer support roles look like in an AI-native future

This is a deep dive into the next generation of customer experience, where AI handles scale and speed — and humans deliver judgment, empathy, and innovation.

Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
Taylor Mullen, Principal Engineer at Google and creator of Gemini CLI, reveals how his team ships 100-150 features and bug fixes every week—using Gemini CLI to build itself. In this first in-depth interview about Gemini CLI's origin story, we explore why command-line AI agents are having a "terminal renaissance," how Taylor manages swarms of parallel AI agents, and the techniques (like the viral "Ralph Wiggum" method) that separate 10x engineers from 100x engineers. Whether you're a developer or AI-curious, you'll learn practical strategies for using AI coding tools more effectively.

🔗 Links:
• Gemini CLI: https://geminicli.com
• GitHub: https://github.com/google-gemini/gemini-cli
• Subscribe to The Neuron newsletter: https://theneuron.ai
Modern AI has been dominated by one idea: predict the next token. But what if intelligence doesn't have to work that way?

In this episode of The Neuron, we're joined by Eve Bodnia, Founder and CEO of Logical Intelligence, to explore energy-based models (EBMs)—a radically different approach to AI reasoning that doesn't rely on language, tokens, or next-word prediction.

With a background in theoretical physics and quantum information, Eve explains how EBMs operate over an energy landscape, allowing models to reason about many possible solutions at once rather than guessing sequentially. We discuss why this matters for tasks like spatial reasoning, planning, robotics, and safety-critical systems—and where large language models begin to show their limits.

You'll learn:
• What energy-based models are (in plain English)
• Why token-free architectures change how AI reasons
• How EBMs reduce hallucinations through constraints and verification
• Why EBMs and LLMs may work best together, not in competition
• What this approach reveals about the future of AI systems

To learn more about Eve's work, visit https://logicalintelligence.com.

For more practical, grounded conversations on AI systems that actually work, subscribe to The Neuron newsletter at https://theneuron.ai.
In this special episode, we go hands-on with three cutting-edge AI tools from Google Labs. First, Jaclyn Konzelman (Director of Product Management) demos Mixboard, an AI-powered concepting board that transforms ideas into visual presentations using Nano Banana Pro. Then, Thomas Iljic (Senior Director of Product Management) shows us Flow, Google's AI filmmaking tool that lets you create, edit, and animate video clips with unprecedented control. Finally, Megan Li (Senior Product Manager) walks us through Opal, a no-code AI app builder that lets anyone create custom AI workflows and mini-apps using natural language.

Subscribe to The Neuron newsletter: https://theneuron.ai

Links:
Mixboard: https://mixboard.google.com
Flow: https://flow.google
Opal: https://opal.google
Google Labs: https://labs.google
Autonomous coding agents are moving from demos to real production workflows. In this episode, Factory AI co-founder and CTO Eno Reyes explains what "Droids" really are—fully autonomous agents that can take tickets, modify real codebases, run tests, and work inside existing dev workflows.

We dig into Factory's context compression research (which outperformed both OpenAI and Anthropic), what makes a codebase "agent-ready," and why Stanford research found that the ONLY predictor of AI success was codebase quality—not adoption rates or token usage.

Whether you're a developer curious about autonomous coding tools or just want to understand where AI engineering is headed, this episode is packed with practical insights.

🔗 Try Factory AI: https://factory.ai
📰 Subscribe to The Neuron newsletter: https://theneuron.ai
📖 Resources mentioned:
• Factory's compression research: https://factory.ai/news/evaluating-compression
AI reasoning models don't just give answers — they plan, deliberate, and sometimes try to cheat.

In this episode of The Neuron, we're joined by Bowen Baker, Research Scientist at OpenAI, to explore whether we can monitor AI reasoning before things go wrong — and why that transparency may not last forever.

Bowen walks us through real examples of AI reward hacking, explains why monitoring chain-of-thought is often more effective than checking outputs, and introduces the idea of a "monitorability tax" — trading raw performance for safety and transparency.

We also cover:
• Why smaller models thinking longer can be safer than bigger models
• How AI systems learn to hide misbehavior
• Why suppressing "bad thoughts" can backfire
• The limits of chain-of-thought monitoring
• Bowen's personal view on open-source AI and safety risks

If you care about how AI actually works — and what could go wrong — this conversation is essential.

Resources:
• Evaluating chain-of-thought monitorability | OpenAI: https://openai.com/index/evaluating-chain-of-thought-monitorability/
• Understanding neural networks through sparse circuits | OpenAI: https://openai.com/index/understanding-neural-networks-through-sparse-circuits/
• OpenAI's alignment blog: https://alignment.openai.com/

👉 Subscribe for more interviews with the people building AI
👉 Join the newsletter at https://theneuron.ai
Everyone is rushing to build AI agents — but most companies are setting themselves up for failure.

In this episode of The Neuron, Darin Patterson, VP of Market Strategy at Make, explains why agentic AI only works if your automation foundation is solid first. We break down when to use deterministic workflows vs AI agents, how to avoid fragile automation sprawl, and why visibility into your entire automation landscape is now mission-critical.

You'll see real examples of building agents in Make, how Model Context Protocol (MCP) fits into modern workflows, and why orchestration — not hype — is the real unlock for scaling AI safely inside organizations.

Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
IBM just released Granite 4.0, a new family of open language models designed to be fast, memory-efficient, and enterprise-ready — and it represents a very different philosophy from today's frontier AI race.

In this episode of The Neuron, IBM Research's David Cox joins us to unpack why IBM treats AI models as tools rather than entities, how hybrid architectures dramatically reduce memory and cost, and why openness, transparency, and external audits matter more than ever for real-world deployment.

We dive into long-context efficiency, agent safety, LoRA adapters, on-device AI, voice interfaces, and why the future of AI may look a lot more boring — in the best possible way.

If you're building AI systems for production, agents, or enterprise workflows, this conversation is required listening.

Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
Imagine an AI that doesn't just output answers — it remembers, adapts, and reasons over time like a living system. In this episode of The Neuron, Corey Noles and Grant Harvey sit down with Zuzanna Stamirowska, CEO & Cofounder of Pathway, to break down the world's first post-Transformer frontier model: BDH — the Dragon Hatchling architecture.

Zuzanna explains why current language models are stuck in a "Groundhog Day" loop — waking up with no memory — and how Pathway's architecture introduces true temporal reasoning and continual learning. We explore:
• Why Transformers lack real memory and time awareness
• How BDH uses brain-like neurons, synapses, and emergent structure
• How models can "get bored," adapt, and strengthen connections
• Why Pathway sees reasoning — not language — as the core of intelligence
• How BDH enables infinite context, live learning, and interpretability
• Why gluing two trained models together actually works in BDH
• The path to AGI through generalization, not scaling
• Real-world early adopters (Formula 1, NATO, French Postal Service)
• Safety, reversibility, checkpointing, and building predictable behavior
• Why this architecture could power the next era of scientific innovation

From brain-inspired message passing to emergent neural structures that literally appear during training, this is one of the most ambitious rethinks of AI architecture since Transformers themselves.

If you want a window into what comes after LLMs, this interview is essential.

Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
Carina Hong dropped out of Stanford's PhD program to build "mathematical superintelligence" — and just raised $64M to do it. In this episode, we explore what that actually means: an AI that doesn't just solve math problems but discovers new theorems, proves them formally, and gets smarter with each iteration. Carina explains how her team solved a 130-year-old problem about Lyapunov functions, disproved a 30-year-old graph theory conjecture, and why math is the secret "bedrock" for everything from chip design to quant trading to coding agents. We also discuss the fascinating connections between neuroscience, AI, and mathematics.

Learn more about Axiom: https://axiommath.ai/
Subscribe to The Neuron newsletter: https://theneuron.ai
Nick Talken started a 3D printing materials company in a trailer lab in his co-founder's backyard, sold it to a 145-year-old German chemical giant, then spun out an AI platform that's now transforming R&D for Fortune 100 companies. Albert Invent's foundational AI model—trained on 15 million molecular structures—is helping scientists at companies like Kenvue (maker of Tylenol, Neutrogena, and Listerine) compress projects from 3 months to 2 days. We dig into how enterprises train bespoke AI models on proprietary data, why you can't just use ChatGPT for chemistry, and what becomes possible when AI can "think like a chemist."

Subscribe to The Neuron newsletter: https://theneuron.ai
Albert Invent website: https://www.albertinvent.com
Kenvue partnership announcement: https://www.businesswire.com/news/home/20251014240355/en/
Most enterprise knowledge is trapped in meetings—and then lost forever. Otter.ai CEO Sam Liang explains how his company turned meeting transcription into a $100M+ revenue business by solving a problem most companies don't even realize they have.

In this episode, we cover:
- Why meetings are your company's most expensive activity (and how to measure ROI on them)
- Building a "meeting-centric knowledge base" that captures voice data other systems miss
- How Otter organizes enterprise knowledge like Slack—but for spoken conversations
- Real-time sales coaching that feeds reps answers during customer calls
- AI avatars that attend meetings on your behalf (and ask questions for you)
- The technical challenges of understanding dialects, tone, and context in voice AI
- How one financial company used Otter to onboard new clients instantly with full conversation history
- Privacy vs. utility: designing permission systems for meeting data
- The future of active AI agents that contribute to meetings, not just transcribe them

Sam previously worked on the blue dot location platform for Google Maps and now runs a company that's transcribed over 1 billion meetings.
If you're thinking about how AI can actually improve enterprise workflows (not just automate busywork), this conversation is packed with specific, tactical insights.

A special thank you to this episode's sponsor, SAS: https://www.sas.com/en/whitepapers/how-aiot-is-reshaping-industrial-efficiency-security-and-decision-making.html?utm_source=other&utm_medium=cpm&utm_campaign=-global

Resources mentioned:
• Otter.ai $100M ARR announcement: https://otter.ai/blog/otter-ai-breaks-100m-arr-barrier-and-transforms-business-meetings-launching-industry-first-ai-meeting-agent-suite
• HIPAA compliance: https://otter.ai/blog/otter-ai-achieves-hipaa-compliance
• Otter.ai: https://otter.ai

Subscribe to The Neuron newsletter: https://theneuron.ai

➤ CHAPTERS
0:00 - Introduction & Sam's Background
1:16 - From Meeting Notes to Enterprise Knowledge
04:48 - Building a Meeting-Centric Knowledge Base
06:14 - Why Meetings Are Your Most Expensive Activity
05:40 - Solving Information Silos with AI
07:56 - A Message from our Sponsor SAS
9:11 - Meeting Transcriptions Alone Aren't the Answer
17:34 - Leader Dashboards & AI Workflows
18:49 - AI Avatars: Send Your Digital Self to Meetings
21:45 - Active AI Agents That Talk Back
23:13 - Privacy, Permissions & Corporate Culture
26:08 - Technical Challenges: Understanding Context & Tone
34:37 - Privacy vs. Utility Trade-offs
37:25 - The Future of Meetings in 2027
39:27 - Competing with Microsoft & Google
43:02 - How Otter Generated Over $1 Billion in Customer ROI
46:05 - What Excites & Concerns Sam About AI
49:09 - Security Risks of AI Avatars
49:50 - Final Thoughts on the Future of AI at Work

Hosted by: Corey Noles and Grant Harvey
Guest: Sam Liang, Co-founder & CEO, Otter.ai
Published by: Manique Santos
Edited by: Kush Felisilda
In this episode, we sit down with Pavan Davuluri, Corporate Vice President of Microsoft's Windows + Devices business, to explore how Windows is evolving into an AI-native platform. Pavan leads the team responsible for strategy, design, and delivery of Windows products across the full stack - from silicon and devices to platform, OS, apps, experiences, security, and cloud. With 23 years at Microsoft, he's driven the creation of the Surface line and now oversees how hardware and software fuse together with AI at the center. We explore how Copilot is being deeply integrated into Windows, the engineering shifts required to make Windows a more proactive and intelligent platform, and how Microsoft balances powerful automation with user control. From Surface design standards influencing the broader ecosystem to supporting OEM partners in the AI PC era, Pavan reveals the principles guiding Windows' transformation and what the computing experience will look like in the next five years.

Subscribe to The Neuron newsletter: https://theneuron.ai
Microsoft Surface: https://www.microsoft.com/surface
Windows AI features: https://www.microsoft.com/windows/ai-features
While everyone obsesses over which AI model is smartest, a quiet revolution is happening in the infrastructure layer underneath. Modular just raised $250M at a $1.6B valuation to solve a problem most people don't know exists: AI is locked into expensive, vendor-specific hardware ecosystems. Tim Davis, Co-Founder & President of Modular, joins us to explain why his company is building the "hypervisor for AI"—making it possible to write code once and run it on any GPU, from NVIDIA to AMD to Apple Silicon. We dive into why this matters for businesses, what the Android analogy really means, how companies are seeing 70-80% cost reductions, and whether we're even on the right path to superintelligence.

Subscribe to The Neuron newsletter: https://theneuron.ai
Try Modular: https://modular.com
Getting Started Guide: https://modular.com/get-started
In this episode, we sit down with Scott Guthrie, EVP of Microsoft's Cloud + AI Group, to explore the architecture behind Azure's AI Superfactory. Scott oversees Microsoft's hyperscale cloud computing solutions including Azure, generative AI platforms, and next-generation infrastructure. We dive into Microsoft's strategic approach to AI datacenter buildout, the innovative Fairwater architecture with its 120,000+ fiber miles of AI WAN backbone, and how Microsoft is balancing performance, sustainability, and cost at planet-scale. From dense GPU clusters drawing 140kW per rack to closed-loop liquid cooling systems, Scott reveals the engineering trade-offs behind infrastructure that powers frontier AI models with trillions of parameters. Whether you're an enterprise leader planning AI adoption or a developer curious about cloud architecture, you'll leave understanding how Microsoft is executing on next-gen infrastructure that transforms global challenges into opportunities.

Subscribe to The Neuron newsletter: https://theneuron.ai
Retool CEO David Hsu reveals that 48% of non-engineers are now shipping software. We explore how AI is democratizing software development, why engineers might stop coding internal apps within 18-24 months, and what this means for the future of work. David shares insights from Retool's survey of 10,000+ companies, Retool's new AppGen program, and how "tomorrow's developers" are using AI to build real production applications on enterprise data.

Subscribe to The Neuron newsletter: https://theneuron.ai
Learn more about Retool: https://retool.com
Computers can see and hear, but they've never been able to smell—until now. In this episode, we sit down with Alex Wiltschko, Founder & CEO of Osmo, to explore how his company is using AI to digitize scent. Alex walks us through how they "teleported" the smell of a fresh plum across their lab, created the world's first AI-designed fragrance molecules, and built Osmo Studio—a platform that lets anyone design custom fragrances in one week instead of two years. We discuss the read/map/write framework for digitizing smell, why scent is tied directly to memory and emotion, and how this technology could eventually detect diseases like cancer and Parkinson's earlier than any current diagnostic. Plus: what does the Museum of Pop Culture smell like, and can AI really create a fragrance from a Bon Iver song?

Links:
Osmo: https://www.osmo.ai
Scent Teleportation Update: https://www.osmo.ai/blog/update-scent-teleportation-we-did-it
Osmo Studio: https://osmostudios.ai/

Check out the sponsor of this video, Flora: https://dub.florafauna.ai/neuron
Subscribe to The Neuron newsletter: https://theneuron.ai

Hosted by: Corey Noles and Grant Harvey
Guest: Alex Wiltschko (Founder & CEO, Osmo)
Published by: Manique Santos
Edited by: Kush Felisilda
Comments (1)

Bert Fegg

Musk left OpenAI because Tesla was creating its own AI and HE didn't want to cause a conflict of interest??!? According to the board, founders and others close to OpenAI, Musk wanted to become CEO and roll the company into Tesla, which no one else wanted. Seeing that Musk has pulled similar moves in the past (how many people know the names of the two actual founders of Tesla, whom Musk fired after replacing the former CEO with himself?), I don't blame them. Truth is important & relative.

Apr 29th