The Neuron: AI Explained


Author: The Neuron

Subscribed: 549 | Played: 4,467

Description

The Neuron covers the latest AI developments, trends and research, hosted by Grant Harvey and Corey Noles. Digestible, informative and authoritative takes on AI that get you up to speed and help you become an authority in your own circles. Available every Tuesday on all podcasting platforms and YouTube.

Subscribe to our newsletter: https://www.theneurondaily.com/subscribe
58 Episodes
Steve Brown's house burned down in a wildfire—and accidentally saved his life. When doctors missed his aggressive blood cancer for over a year, Steve built a swarm of AI agents that diagnosed it in minutes and helped design his treatment. Now he's turning that breakthrough into CureWise, a precision oncology platform helping cancer patients become better advocates. We explore agentic medicine, AI safety in healthcare, and how swarms of specialized AI agents are changing cancer care from diagnosis to treatment selection.
🔗 Get on the CureWise waitlist: https://curewise.com
📧 Subscribe to The Neuron newsletter: https://theneuron.ai
In this special episode, we go hands-on with three cutting-edge AI tools from Google Labs. First, Jaclyn Konzelman (Director of Product Management) demos Mixboard, an AI-powered concepting board that transforms ideas into visual presentations using Nano Banana Pro. Then, Thomas Iljic (Senior Director of Product Management) shows us Flow, Google's AI filmmaking tool that lets you create, edit, and animate video clips with unprecedented control. Finally, Megan Li (Senior Product Manager) walks us through Opal, a no-code AI app builder that lets anyone create custom AI workflows and mini-apps using natural language.
Subscribe to The Neuron newsletter: https://theneuron.ai
Links:
Mixboard: https://mixboard.google.com
Flow: https://flow.google
Opal: https://opal.google
Google Labs: https://labs.google
Autonomous coding agents are moving from demos to real production workflows. In this episode, Factory AI co-founder and CTO Eno Reyes explains what "Droids" really are—fully autonomous agents that can take tickets, modify real codebases, run tests, and work inside existing dev workflows.
We dig into Factory's context compression research (which outperformed both OpenAI and Anthropic), what makes a codebase "agent-ready," and why Stanford research found that the ONLY predictor of AI success was codebase quality—not adoption rates or token usage.
Whether you're a developer curious about autonomous coding tools or just want to understand where AI engineering is headed, this episode is packed with practical insights.
🔗 Try Factory AI: https://factory.ai
📰 Subscribe to The Neuron newsletter: https://theneuron.ai
📖 Resources mentioned:
• Factory's compression research: https://factory.ai/news/evaluating-compression
AI reasoning models don’t just give answers — they plan, deliberate, and sometimes try to cheat.
In this episode of The Neuron, we’re joined by Bowen Baker, Research Scientist at OpenAI, to explore whether we can monitor AI reasoning before things go wrong — and why that transparency may not last forever.
Bowen walks us through real examples of AI reward hacking, explains why monitoring chain-of-thought is often more effective than checking outputs, and introduces the idea of a “monitorability tax” — trading raw performance for safety and transparency.
We also cover:
• Why smaller models thinking longer can be safer than bigger models
• How AI systems learn to hide misbehavior
• Why suppressing “bad thoughts” can backfire
• The limits of chain-of-thought monitoring
• Bowen’s personal view on open-source AI and safety risks
If you care about how AI actually works — and what could go wrong — this conversation is essential.
Resources:
Evaluating chain-of-thought monitorability | OpenAI: https://openai.com/index/evaluating-chain-of-thought-monitorability/
Understanding neural networks through sparse circuits | OpenAI: https://openai.com/index/understanding-neural-networks-through-sparse-circuits/
OpenAI's alignment blog: https://alignment.openai.com/
👉 Subscribe for more interviews with the people building AI
👉 Join the newsletter at https://theneuron.ai
Everyone is rushing to build AI agents — but most companies are setting themselves up for failure.
In this episode of The Neuron, Darin Patterson, VP of Market Strategy at Make, explains why agentic AI only works if your automation foundation is solid first. We break down when to use deterministic workflows vs AI agents, how to avoid fragile automation sprawl, and why visibility into your entire automation landscape is now mission-critical.
You’ll see real examples of building agents in Make, how Model Context Protocol (MCP) fits into modern workflows, and why orchestration — not hype — is the real unlock for scaling AI safely inside organizations.
Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
IBM just released Granite 4.0, a new family of open language models designed to be fast, memory-efficient, and enterprise-ready — and it represents a very different philosophy from today’s frontier AI race.
In this episode of The Neuron, IBM Research’s David Cox joins us to unpack why IBM treats AI models as tools rather than entities, how hybrid architectures dramatically reduce memory and cost, and why openness, transparency, and external audits matter more than ever for real-world deployment.
We dive into long-context efficiency, agent safety, LoRA adapters, on-device AI, voice interfaces, and why the future of AI may look a lot more boring — in the best possible way.
If you’re building AI systems for production, agents, or enterprise workflows, this conversation is required listening.
Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
Imagine an AI that doesn’t just output answers — it remembers, adapts, and reasons over time like a living system. In this episode of The Neuron, Corey Noles and Grant Harvey sit down with Zuzanna Stamirowska, CEO & Cofounder of Pathway, to break down the world’s first post-Transformer frontier model: BDH — the Dragon Hatchling architecture.
Zuzanna explains why current language models are stuck in a “Groundhog Day” loop — waking up with no memory — and how Pathway’s architecture introduces true temporal reasoning and continual learning. We explore:
• Why Transformers lack real memory and time awareness
• How BDH uses brain-like neurons, synapses, and emergent structure
• How models can “get bored,” adapt, and strengthen connections
• Why Pathway sees reasoning — not language — as the core of intelligence
• How BDH enables infinite context, live learning, and interpretability
• Why gluing two trained models together actually works in BDH
• The path to AGI through generalization, not scaling
• Real-world early adopters (Formula 1, NATO, French Postal Service)
• Safety, reversibility, checkpointing, and building predictable behavior
• Why this architecture could power the next era of scientific innovation
From brain-inspired message passing to emergent neural structures that literally appear during training, this is one of the most ambitious rethinks of AI architecture since Transformers themselves.
If you want a window into what comes after LLMs, this interview is essential.
Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
Carina Hong dropped out of Stanford's PhD program to build "mathematical superintelligence" — and just raised $64M to do it. In this episode, we explore what that actually means: an AI that doesn't just solve math problems but discovers new theorems, proves them formally, and gets smarter with each iteration. Carina explains how her team solved a 130-year-old problem about Lyapunov functions, disproved a 30-year-old graph theory conjecture, and why math is the secret "bedrock" for everything from chip design to quant trading to coding agents. We also discuss the fascinating connections between neuroscience, AI, and mathematics.
Learn more about Axiom: https://axiommath.ai/
Subscribe to The Neuron newsletter: https://theneuron.ai
Nick Talken started a 3D printing materials company in a trailer lab in his co-founder's backyard, sold it to a 145-year-old German chemical giant, then spun out an AI platform that's now transforming R&D for Fortune 100 companies. Albert Invent's foundational AI model—trained on 15 million molecular structures—is helping scientists at companies like Kenvue (maker of Tylenol, Neutrogena, and Listerine) compress projects from 3 months to 2 days. We dig into how enterprises train bespoke AI models on proprietary data, why you can't just use ChatGPT for chemistry, and what becomes possible when AI can "think like a chemist."
Subscribe to The Neuron newsletter: https://theneuron.ai
Albert Invent website: https://www.albertinvent.com
Kenvue partnership announcement: https://www.businesswire.com/news/home/20251014240355/en/
Most enterprise knowledge is trapped in meetings—and then lost forever. Otter.ai CEO Sam Liang explains how his company turned meeting transcription into a $100M+ revenue business by solving a problem most companies don't even realize they have.
In this episode, we cover:
- Why meetings are your company's most expensive activity (and how to measure ROI on them)
- Building a "meeting-centric knowledge base" that captures voice data other systems miss
- How Otter organizes enterprise knowledge like Slack—but for spoken conversations
- Real-time sales coaching that feeds reps answers during customer calls
- AI avatars that attend meetings on your behalf (and ask questions for you)
- The technical challenges of understanding dialects, tone, and context in voice AI
- How one financial company used Otter to onboard new clients instantly with full conversation history
- Privacy vs. utility: designing permission systems for meeting data
- The future of active AI agents that contribute to meetings, not just transcribe them
Sam previously worked on the blue dot location platform for Google Maps and now runs a company that's transcribed over 1 billion meetings. If you're thinking about how AI can actually improve enterprise workflows (not just automate busywork), this conversation is packed with specific, tactical insights.
A special thank you to this episode's sponsor, SAS: https://www.sas.com/en/whitepapers/how-aiot-is-reshaping-industrial-efficiency-security-and-decision-making.html?utm_source=other&utm_medium=cpm&utm_campaign=-global
Resources mentioned:
• Otter.ai $100M ARR announcement: https://otter.ai/blog/otter-ai-breaks-100m-arr-barrier-and-transforms-business-meetings-launching-industry-first-ai-meeting-agent-suite
• HIPAA compliance: https://otter.ai/blog/otter-ai-achieves-hipaa-compliance
• Otter.ai: https://otter.ai
Subscribe to The Neuron newsletter: https://theneuron.ai
➤ CHAPTERS
0:00 - Introduction & Sam's Background
1:16 - From Meeting Notes to Enterprise Knowledge
04:48 - Building a Meeting-Centric Knowledge Base
06:14 - Why Meetings Are Your Most Expensive Activity
05:40 - Solving Information Silos with AI
07:56 - A Message from our Sponsor SAS
9:11 - Meeting Transcriptions Alone Aren't the Answer
17:34 - Leader Dashboards & AI Workflows
18:49 - AI Avatars: Send Your Digital Self to Meetings
21:45 - Active AI Agents That Talk Back
23:13 - Privacy, Permissions & Corporate Culture
26:08 - Technical Challenges: Understanding Context & Tone
34:37 - Privacy vs. Utility Trade-offs
37:25 - The Future of Meetings in 2027
39:27 - Competing with Microsoft & Google
43:02 - How Otter Generated Over $1 Billion in Customer ROI
46:05 - What Excites & Concerns Sam About AI
49:09 - Security Risks of AI Avatars
49:50 - Final Thoughts on the Future of AI at Work
Hosted by: Corey Noles and Grant Harvey
Guest: Sam Liang, Co-founder & CEO, Otter.ai
Published by: Manique Santos
Edited by: Kush Felisilda
In this episode, we sit down with Pavan Davuluri, Corporate Vice President of Microsoft's Windows + Devices business, to explore how Windows is evolving into an AI-native platform. Pavan leads the team responsible for strategy, design, and delivery of Windows products across the full stack - from silicon and devices to platform, OS, apps, experiences, security, and cloud. With 23 years at Microsoft, he's driven the creation of the Surface line and now oversees how hardware and software fuse together with AI at the center. We explore how Copilot is being deeply integrated into Windows, the engineering shifts required to make Windows a more proactive and intelligent platform, and how Microsoft balances powerful automation with user control. From Surface design standards influencing the broader ecosystem to supporting OEM partners in the AI PC era, Pavan reveals the principles guiding Windows' transformation and what the computing experience will look like in the next five years.
Subscribe to The Neuron newsletter: https://theneuron.ai
Microsoft Surface: https://www.microsoft.com/surface
Windows AI features: https://www.microsoft.com/windows/ai-features
While everyone obsesses over which AI model is smartest, a quiet revolution is happening in the infrastructure layer underneath. Modular just raised $250M at a $1.6B valuation to solve a problem most people don't know exists: AI is locked into expensive, vendor-specific hardware ecosystems. Tim Davis, Co-Founder & President of Modular, joins us to explain why his company is building the "hypervisor for AI"—making it possible to write code once and run it on any GPU, from NVIDIA to AMD to Apple Silicon. We dive into why this matters for businesses, what the Android analogy really means, how companies are seeing 70-80% cost reductions, and whether we're even on the right path to superintelligence.
Subscribe to The Neuron newsletter: https://theneuron.ai
Try Modular: https://modular.com
Getting Started Guide: https://modular.com/get-started
In this episode, we sit down with Scott Guthrie, EVP of Microsoft's Cloud + AI Group, to explore the architecture behind Azure's AI Superfactory. Scott oversees Microsoft's hyperscale cloud computing solutions including Azure, generative AI platforms, and next-generation infrastructure. We dive into Microsoft's strategic approach to AI datacenter buildout, the innovative Fairwater architecture with its 120,000+ fiber miles of AI WAN backbone, and how Microsoft is balancing performance, sustainability, and cost at planet-scale. From dense GPU clusters drawing 140kW per rack to closed-loop liquid cooling systems, Scott reveals the engineering trade-offs behind infrastructure that powers frontier AI models with trillions of parameters. Whether you're an enterprise leader planning AI adoption or a developer curious about cloud architecture, you'll leave understanding how Microsoft is executing on next-gen infrastructure that transforms global challenges into opportunities.
Subscribe to The Neuron newsletter: https://theneuron.ai
Retool CEO David Hsu reveals that 48% of non-engineers are now shipping software. We explore how AI is democratizing software development, why engineers might stop coding internal apps within 18-24 months, and what this means for the future of work. David shares insights from Retool's survey of 10,000+ companies, Retool’s new AppGen program, and how "tomorrow's developers" are using AI to build real production applications on enterprise data.
Subscribe to The Neuron newsletter: https://theneuron.ai
Learn more about Retool: https://retool.com
Computers can see and hear, but they've never been able to smell—until now. In this episode, we sit down with Alex Wiltschko, Founder & CEO of Osmo, to explore how his company is using AI to digitize scent. Alex walks us through how they "teleported" the smell of a fresh plum across their lab, created the world's first AI-designed fragrance molecules, and built Osmo Studio—a platform that lets anyone design custom fragrances in one week instead of two years. We discuss the read/map/write framework for digitizing smell, why scent is tied directly to memory and emotion, and how this technology could eventually detect diseases like cancer and Parkinson's earlier than any current diagnostic. Plus: what does the Museum of Pop Culture smell like, and can AI really create a fragrance from a Bon Iver song?
Links:
Osmo: https://www.osmo.ai
Scent Teleportation Update: https://www.osmo.ai/blog/update-scent-teleportation-we-did-it
Osmo Studio: https://osmostudios.ai/
Check out the sponsor of this video, Flora: https://dub.florafauna.ai/neuron
Subscribe to The Neuron newsletter: https://theneuron.ai
Hosted by: Corey Noles and Grant Harvey
Guest: Alex Wiltschko (Founder & CEO, Osmo)
Published by: Manique Santos
Edited by: Kush Felisilda
Behind every AI response, there's an invisible army of humans who trained it. In this episode, we talk with Caspar Eliot from Invisible Technologies - the company that's trained 80% of the world's top AI models. We explore how models actually learn, why data quality matters more than quantity, what enterprises get wrong about AI deployment, and whether AI will really automate everyone's jobs. Caspar shares insights from working with frontier labs, reveals the surprising skills that make great AI trainers (hint: League of Legends helps), and explains why the future needs more humans, not fewer.
Subscribe to The Neuron newsletter: https://theneuron.ai
Learn more about Invisible Technologies: https://invisibletech.ai
Ever wondered who's actually teaching ChatGPT and Claude how to think? Meet Caspar Eliot from Invisible Technologies - the company behind 80% of the world's top AI model training. In this eye-opening conversation, we uncover the massive human workforce behind "artificial" intelligence, why your League of Legends skills might land you an AI job, and the shocking mistakes enterprises make when deploying AI.
We discuss:
• How AI models really learn (hint: it's not just scraping the internet)
• Why data quality beats data quantity every time
• The Charlotte Hornets' revolutionary AI scouting system
• Whether robots will actually take your job (spoiler: probably not)
• The $14.8 billion Scale AI valuation and what it means
• Why Marc Andreessen thinks VCs won't be automated
Plus: Caspar reveals the #1 mistake companies make with AI deployment and why "AI-ifying" your current process is doomed to fail.
Subscribe to The Neuron newsletter: https://theneuron.ai
Connect with Caspar on LinkedIn: https://uk.linkedin.com/in/caspar-eliot-46b9a55a
Learn more about Invisible Technologies: https://invisibletech.ai?utm_source=neuron&utm_medium=podcast
Please check out the sponsor of this video, Warp.dev: https://warp.dev
So who is Invisible Technologies? In four words: they make AI work. Their platform cleans, labels, and structures company data so it’s ready for AI. It adapts models to each business and adds human expertise when needed — the same approach used to improve models for over 80% of the world’s top AI companies, including Microsoft, AWS, and Cohere.
Their successes span industries from supply chain automation for Swiss Gear, to AI-enabled naval simulations with SAIC, and validating NBA draft picks for the Charlotte Hornets. And get this: Invisible has been profitable for over half a decade, was ranked #2 fastest-growing AI company in 2024, and recently raised $100M to advance its platform technology.
Check them out at Invisible Technologies: https://invisibletech.ai?utm_source=neuron&utm_medium=podcast
From Adobe Max 2025 in Los Angeles, Corey and Grant sit down with Ely Greenfield, Adobe's Chief Technology Officer, to explore the philosophy behind Adobe's practical AI strategy. Discover why the crowd went wild over AI renaming layers, how Adobe thinks about "additive not subtractive" AI, and where creative tools are heading next. Ely shares Adobe's vision for making AI a creative partner that enhances rather than replaces human artistry, and explains why the best AI features are often the most boring ones.
Topics covered include: the Photoshop AI Assistant, Harmonize for instant compositing, auto-masking in Premiere Pro, the Express conversational workflow, and Adobe's unique approach to balancing automation with creative control.
Read our Adobe Max coverage:
• Adobe Reinvents Creative Suite with AI
• Day 2 Keynote Recap
• NVIDIA's Beyond-GPUs Strategy
This episode was made possible by our sponsor, Clutch: https://clutch.co/resources/how-smbs-see-ai-crawlers?source=theneuron&utm_medium=referral&utm_campaign=newsletter_10-14-2025
Related resources:
• Adobe Max 2025 announcements: https://www.theneuron.ai/explainer-articles/adobe-goes-all-in-on-ai-max-2025-unleashes-creative-ai-arsenal-across-every-tool
• Day 2 Keynote and Sneaks recap: https://www.theneuron.ai/explainer-articles/adobe-max-day-2-the-storyteller-is-still-king-but-ai-is-their-new-superpower
• Check out Adobe Firefly: https://firefly.adobe.com/
• Project Graph demo: https://www.youtube.com/live/wQza2t9Qs64?t=10409s
Make sure to check out Clutch's new report on AI crawling for SMBs! https://clutch.co/resources/how-smbs-see-ai-crawlers?source=theneuron&utm_medium=referral&utm_campaign=newsletter_10-14-2025
Subscribe to The Neuron newsletter for daily AI news: https://theneuron.ai
Original article: https://www.theneuron.ai/explainer-articles/adobe-goes-all-in-on-ai-max-2025-unleashes-creative-ai-arsenal-across-every-tool
AI is changing what we need from our computers—but does that mean you need an "AI PC"? Corey and Grant sit down with Logan Lawler, who leads Dell Pro Max AI solutions at Dell Technologies, to decode what matters (and what doesn't) when buying or upgrading your next computer. From CPUs and GPUs to memory, NPUs, and traps to avoid, this episode is your practical roadmap for staying future-ready through the next five years of AI-powered work.
Dell Pro Max Workstations: https://www.dell.com/en-us/plcp/lp/dell-pro-max-pcs
LM Studio LIVE tutorial: https://www.youtube.com/watch?v=Ai3sBeBdA1Y
Kiwix Wikipedia Download: https://en.wikipedia.org/wiki/Kiwix
One Trainer: https://github.com/Nerogar/OneTrainer
Jawset Postshot: https://www.jawset.com/
Subscribe to The Neuron newsletter: https://theneuron.ai
Check out the Reshaping Workflows Podcast: https://reshaping-workflows.simplecast.com/
Learn how to use NVIDIA's Nemotron open-source AI models with VP Kari Briski. We cover what Nemotron is, minimum hardware specs, the difference between Nano/Super/Ultra tiers, when to choose local vs cloud AI, and practical deployment patterns for businesses. Perfect for anyone wanting to run powerful AI locally with full control and privacy.
Resources mentioned:
NVIDIA Nemotron Models: https://www.nvidia.com/en-us/ai-data-science/foundation-models/nemotron/
Start prototyping for free: https://build.nvidia.com/explore/discover
Subscribe to The Neuron newsletter: https://theneuron.ai
Watch more AI interviews: https://www.youtube.com/@TheNeuronAI
Comments (1)

Bert Fegg

Musk left OpenAI because Tesla was creating its own AI and HE didn't want to cause a conflict of interest??!? According to the board, founders, and others close to OpenAI, Musk wanted to become CEO and roll the company into Tesla, which no one else wanted. Seeing that Musk has pulled similar moves in the past (how many people know the names of the two actual founders of Tesla, whom Musk fired after replacing the former CEO with himself?), I don't blame them. Truth is important & relative.

Apr 29th