The Neuron: AI Explained

Author: The Neuron

Subscribed: 533 · Played: 3,330

Description

The Neuron covers the latest AI developments, trends and research, hosted by Grant Harvey and Corey Noles. Digestible, informative and authoritative takes on AI that get you up to speed and help you become an authority in your own circles. Available every Tuesday on all podcasting platforms and YouTube.

Subscribe to our newsletter: https://www.theneurondaily.com/subscribe
37 Episodes
Steve Brown's house burned down in a wildfire—and accidentally saved his life. When doctors missed his aggressive blood cancer for over a year, Steve built a swarm of AI agents that diagnosed it in minutes and helped design his treatment. Now he's turning that breakthrough into CureWise, a precision oncology platform helping cancer patients become better advocates. We explore agentic medicine, AI safety in healthcare, and how swarms of specialized AI agents are changing cancer care from diagnosis to treatment selection.

🔗 Get on the CureWise waitlist: https://curewise.com
📧 Subscribe to The Neuron newsletter: https://theneuron.ai
Everyone's talking about the AI datacenter boom right now. Billion dollar deals here, hundred billion dollar deals there. Well, why do data centers matter? It turns out, AI inference (actually calling the AI and running it) is the hidden bottleneck slowing down every AI application you use (and new stuff yet to be released). In this episode, Kwasi Ankomah from SambaNova Systems explains why running AI models efficiently matters more than you think, how their revolutionary chip architecture delivers 700+ tokens per second, and why AI agents are about to make this problem 10x worse.

💡 This episode is sponsored by Gladia's Solaria - the speech-to-text API built for real-world voice AI. With sub-270ms latency, 100+ languages supported, and 94% accuracy even in noisy environments, it's the backbone powering voice agents that actually work. Learn more at gladia.io/solaria

🔗 Key Links:
• SambaNova Cloud: https://cloud.sambanova.ai
• Check out Solaria speech-to-text API: https://www.gladia.io/solaria
• Subscribe to The Neuron newsletter: https://theneuron.ai

🎯 What You'll Learn:
• Why inference speed matters more than model size
• How SambaNova runs massive models on 90% less power
• Why AI agents use 10-20x more tokens
• The best open source models right now
• What to watch for in AI infrastructure

➤ CHAPTERS
0:00 - Intro
2:14 - What is AI Inference?
3:19 - Why Inference is the Real Challenge
9:18 - A message from our sponsor, Gladia Solaria
10:16 - The 95% ROI Problem Discussion
13:47 - SambaNova's Revolutionary Chip Architecture
15:19 - Running DeepSeek's 670B Parameter Models
18:11 - Developer Experience & Platform
21:26 - AI Agents and the Token Explosion
24:33 - Model Swapping and Cost Optimization
31:30 - Energy Efficiency: 10kW vs 100kW
36:13 - Future of AI Models: Bigger vs Smaller
39:24 - Best Open Source Models Right Now
46:01 - AI Infrastructure: The Next 12 Months
47:09 - Agents as Infrastructure
50:28 - Human-in-the-Loop and Trust
52:55 - Closing and Resources

Article written by: Grant Harvey
Hosted by: Corey Noles and Grant Harvey
Guest: Kwasi Ankomah
Published by: Manique Santos
Edited by: Adrian Vallinan
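To make the inference-speed argument concrete, here is a rough back-of-envelope sketch in Python of how total latency scales once an agent chains many model calls. Only the 700 tokens-per-second throughput figure comes from the episode; the token counts below are illustrative assumptions, not numbers from SambaNova.

# Rough illustration of why tokens/second dominates agent latency.
# Token counts are assumed for illustration; only the 700 tok/s figure
# reflects the throughput claimed in the episode.
def workflow_seconds(total_tokens: int, tokens_per_second: float) -> float:
    return total_tokens / tokens_per_second

single_prompt = 2_000          # one chat-style request
agent_workflow = 2_000 * 15    # agents chain many calls, ~10-20x more tokens

for tps in (50, 100, 700):
    print(f"{tps:>4} tok/s: single = {workflow_seconds(single_prompt, tps):6.1f}s, "
          f"agent = {workflow_seconds(agent_workflow, tps):7.1f}s")

At 50 tok/s the hypothetical agent workflow takes ten minutes; at 700 tok/s it finishes in well under a minute, which is the gap the episode is getting at.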
In this episode, we're joined by Ahmed El-Kishky, research lead at OpenAI, to discuss their historic victory at the International Collegiate Programming Contest (ICPC), where their AI system solved all 12 problems, beating every human team in the world finals. We dive into how they combined GPT-5 with experimental reasoning models, the dramatic last-minute solve, and what this means for the future of programming and AI-assisted science. Ahmed shares behind-the-scenes stories from Azerbaijan, explains how AI learns to test its own code, and discusses OpenAI's path from this win to automating scientific discovery over months and years.

Subscribe to The Neuron: https://theneuron.ai
WisprFlow: https://wisprflow.ai/neuron
OpenAI: https://openai.com
Microsoft AI CEO Mustafa Suleyman (co-founder of DeepMind) joins The Neuron to discuss his provocative essay on "Seemingly Conscious AI" and why machines that mimic consciousness pose unprecedented risks - even when they're not actually alive. We explore how 700 million people are already using AI as life coaches, Microsoft's massive $208B revenue strategy for AI, and exclusive features like Copilot Vision that can see everything you see in real-time.

Key topics:
• Why AI consciousness is an illusion - and why that's dangerous
• Microsoft's 2 gigawatt datacenter expansion (2.5x Seattle's power usage)
• MAI-1 Preview breaking into the top 10 models globally
• The future of AI browsers and autonomous agents
• Why granting AI rights could threaten humanity

Subscribe to The Neuron newsletter (580,000+ readers): https://theneuron.ai

Resources mentioned:
• Mustafa's essay "Seemingly Conscious AI Is Coming": https://mustafa-suleyman.ai/seemingly...
• Try Copilot Vision: https://copilot.microsoft.com
• Microsoft Edge AI features: https://www.microsoft.com/en-us/edge
• MAI-1 Preview models: https://microsoft.ai/news/two-new-in-...

Special thanks to today's sponsor, Wispr Flow: https://wisprflow.ai/neuron
Illia Polosukhin, co-author of Attention Is All You Need and co-founder of NEAR Protocol, believes today's centralized AI ecosystem is broken. In this episode, he explains why User-Owned AI is the path forward — making systems private, verifiable, and aligned with users rather than corporations. We explore confidential computing, interoperable AI agents, and what a more sustainable AI future might really look like.

Subscribe to The Neuron newsletter: https://www.theneurondaily.com
Thomson Reuters just launched Deep Research—an AI system that doesn't just search legal databases, but plans and strategizes like an experienced attorney. In this episode, we explore how one of the world's largest legal research companies is using AI agents to transform how lawyers work, the challenges of building AI for high-stakes legal decisions, and what this means for the future of knowledge work. CTO Joel Hron shares insights from testing with 1,200+ customers, tackling hallucination risks in legal settings, and building professional-grade AI systems.

Resources mentioned:
• Thomson Reuters Deep Research: https://www.prnewswire.com/news-releases/thomson-reuters-launches-cocounsel-legal-transforming-legal-work-with-agentic-ai-and-deep-research-302521761.html
• Westlaw & KeyCite: https://legal.thomsonreuters.com/en/products/westlaw/keycite
• Claude Code for development: https://www.anthropic.com/claude-code
• LinkedIn: Joel Hron
• Thomson Reuters Medium blog: https://medium.com/tr-labs-ml-engineering-blog

Subscribe to The Neuron newsletter: https://theneuron.ai
Today we go deeper on Google's AI stack with Logan Kilpatrick: what AI Studio is great at, how it fits with Firebase/Colab/Gemini CLI/Jules, and where "thinking" models make sense. We cover real-world workflows—from game prototyping and screen-share assistance to legal/privacy basics and on-device micro-apps. Logan shares his insights on vibe coding, the future of AI development, and Google's open-source strategy with Gemma models.

Resources mentioned:
• Google AI Studio: https://aistudio.google.com/
• Gemini CLI: https://github.com/google-gemini/gemini-cli
• Kaggle Game Arena: https://www.kaggle.com/competitions
• Google Firebase: https://firebase.google.com/
• Gemma models: https://ai.google.dev/gemma

Subscribe to The Neuron newsletter: https://theneuron.ai
What does it take to steer a 3,500-person company into the age of generative AI? ZoomInfo founder and CEO Henry Schuck joins us to unpack the company's journey from data powerhouse to AI-first GTM platform, the cultural shifts that enabled it, and the hard-won lessons any leader can borrow. We explore how they reduced teams from 26 to 2 people using AI agents, why 2/3 of employees now use AI daily, and the critical role of data infrastructure in AI success.

Subscribe to The Neuron newsletter: https://theneuron.ai
Learn more about ZoomInfo: https://www.zoominfo.com
What does "AI governance" really entail, and why does it matter right now? Credo AI founder Navrina Singh joins The Neuron to unpack risk buckets, Model Trust Scores, and the regulatory zig-zag between the EU and the U.S.—so you can move fast without crashing the car. We dive into open source safety, agent governance, and test OpenAI's brand new open source model live.Learn more about AI governance: https://credo.ai/resourcesSubscribe to The Neuron newsletter: https://theneuron.ai
A lot of people aren't sure whether they should just chat with an AI model, craft a structured prompt, spin up a project, or unleash a full-blown agent. In this episode, we break down the differences between these approaches and share a practical decision-making framework. We'll show how simple prompts excel for quick, isolated tasks, why structured prompts improve clarity and focus, when a project (workflow) is better for predictable, repeatable processes, and where autonomous agents shine for dynamic, open-ended problems. Along the way we'll demo real examples, share tips for avoiding unnecessary complexity, and help listeners decide which tool fits their use case.

Subscribe to The Neuron newsletter: https://theneuron.ai
In this hands-on episode, Corey and Grant attempt to build three different AI apps in one hour using Google AI Studio - with zero coding experience required. They create an Inbox Zero email organizer, a meme generator that roasts their photos, and a spontaneous adventure planner with interactive maps. Watch as they navigate errors, discover workarounds, and prove that anyone can build functional AI apps without being a developer.

Subscribe to The Neuron newsletter: https://theneuron.ai
Original article: https://www.theneuron.ai/newsletter/googles-ai-makes-you-apps
Read more: https://www.theneuron.ai/explainer-articles/how-to-build-an-ai-agent-part-one-testing-googles-firebase-studio-ai-agent-builder
Google AI Studio: https://aistudio.google.com
NoCodeMBA Tutorial: https://youtu.be/ANth52yyr9U?si=L9iT1-eYgB8nOfrg

Alternative builders mentioned:
- Lovable: https://lovable.dev
- Claude Artifacts: https://claude.ai
- V0 by Vercel: https://v0.dev
- Bolt.new: https://bolt.new
OpenAI just dropped ChatGPT Agent, and we test it LIVE for the first time. Watch as we put this "true" AI agent through its paces - from finding the perfect Gibson Les Paul to building competitive intelligence reports. We explore what makes this different from Zapier-style automation, demonstrate real-world use cases, and discuss why this might be the beginning of America's first super app. Plus: can it actually convince Corey's wife he needs a new guitar?

Subscribe to The Neuron newsletter: https://theneuron.ai
Original article: https://www.theneuron.ai/newsletter/openais-new-agent-is-here
Read more:
https://www.theneuron.ai/explainer-articles/how-to-build-your-own-ai-agent-without-being-a-pro-coder
https://www.theneuron.ai/newsletter/operator-book-me-some-clients
ChatGPT Agent: https://openai.com/index/introducing-chatgpt-agent/
N8N: https://n8n.io/
Headlines scream that AI is "breaking the classroom," but is the story that simple? In this episode we explore the real cracks in today's education system, how AI sometimes widens them, and—more importantly—how the same technology could personalize learning, free teachers to teach, and shift schools from rote memorization to true mastery. We discuss the UCLA "CheatGPT" controversy, MIT's brain study, Alpha School's 2-hour learning model, and OpenAI's new $10M teacher training initiative.

Subscribe to The Neuron newsletter: https://theneuron.ai
WTF is going on with AI and education: https://www.theneuron.ai/explainer-articles/wtf-is-going-on-with-ai-and-education
One Useful Thing (Ethan Mollick), Post-apocalyptic education: https://www.oneusefulthing.org/p/post-apocalyptic-education
MIT study: https://www.media.mit.edu/publications/your-brain-on-chatgpt/
Ethan Mollick again, "Against brain damage": https://www.oneusefulthing.org/p/against-brain-damage
OpenAI working with teachers union: https://openai.com/global-affairs/aft/
Make It Stick (book): https://www.makeitstick.com/
Will AI turbocharge our output—or erode our standards in the rush to automate? In this episode, strategist Andreas Welsch (ex-SAP, author of The AI Leadership Handbook) joins Corey Noles and Grant Harvey to weigh the promise of higher productivity against the peril of slipping quality. Expect plain-language insights on agentic AI, governance that scales, and the human skills and metrics that reveal whether AI is lifting the bar—or lowering it.

Guest: Andreas Welsch
LinkedIn: https://www.linkedin.com/in/andreasmwelsch
AI Leadership Handbook: https://www.aileadershiphandbook.com/order
What's the BUZZ? Podcast: https://www.intelligence-briefing.com/podcast
The AI MEMO Newsletter: https://www.intelligence-briefing.com/newsletter
Work with Andreas (AI strategy, workshops, training): https://www.intelligence-briefing.com
OWASP Top 10 for LLMs: https://owasp.org/www-project-top-10-for-large-language-model-applications/
There are no new ideas in AI, only new datasets: https://blog.jxmo.io/p/there-are-no-new-ideas-in-ai-only
N8N to start automating your own tasks (not a promo; this is just the best tool for the job): https://n8n.io/ or https://n8n.io/workflows for template workflows to try
The Neuron Newsletter: https://www.theneuron.ai
In Ep 3 we explore DeepSeek's open-source R-series models that claim GPT-4-level performance at a fraction of the cost. We unpack whether you can realistically run DeepSeek on a laptop, where it beats (and lags) OpenAI, and the serious security implications of using Chinese AI services. Listeners will learn the economics, hardware realities, and safe alternatives for using these powerful open-source models.

How to pick the best AI for what you actually need: https://www.theneuron.ai/newsletter/how-to-pick-the-best-ai-model-for-what-you-actually-need
Artificial Analysis to compare top AI models: https://artificialanalysis.ai/
Previous coverage of DeepSeek:
https://www.theneuron.ai/newsletter/deepseek-returns
https://www.theneuron.ai/newsletter/10-wild-deepseek-demos
https://www.theneuron.ai/explainer-articles/deepseek-r2-could-crush-ai-economics-with-97-lower-costs-than-gpt-4
U.S. military allegations against DeepSeek: https://www.reuters.com/world/china/deepseek-aids-chinas-military-evaded-export-controls-us-official-says-2025-06-23/
ChatGPT data privacy concerns: https://www.theneuron.ai/explainer-articles/your-chatgpt-logs-are-no-longer-private-and-everyones-freaking-out
OpenAI's response to NYT lawsuit demands: https://openai.com/index/response-to-nyt-data-demands/

How to run open source models:
Go to Hugging Face for the models: https://huggingface.co/
Use Ollama or LM Studio (our recommendation) to run the model locally: https://ollama.com/ and https://lmstudio.ai/
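For listeners who want to try the local route described above, here's a minimal sketch of querying an Ollama-served model from Python via Ollama's local REST API on its default port. It assumes you've installed Ollama and already pulled a DeepSeek R1 variant; the deepseek-r1 model tag and the prompt are illustrative, not from the episode, so check the Ollama library for current tags.

# Minimal sketch: query a locally running DeepSeek model through Ollama's REST API.
# Assumes Ollama is installed and you've already run `ollama pull deepseek-r1`
# (the model tag is an example; see https://ollama.com/library for current tags).
import requests

def ask_local_model(prompt: str, model: str = "deepseek-r1") -> str:
    # Ollama serves a local HTTP API on port 11434 by default.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize the trade-offs of running an open-source LLM locally."))

LM Studio exposes a similar local, OpenAI-compatible server, so the same pattern works there with a different base URL.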
In Ep 2 we ask: "Panic or Progress? Reading Between the Lines of AI Safety Tests." We unpack the recent Claude Opus 4 "blackmail" test result, OpenAI's new transparency pledge, and why safety evaluations sometimes sound scarier than they are. Listeners will leave with a clear framework for interpreting headline-grabbing safety reports—and practical advice on when to worry, when to wait, and how to separate red flags from red herrings.
In this special hands-on episode, Corey Noles and Grant Harvey dive into OpenAI's Sora 2 - the AI video platform that's part TikTok, part meme generator, and 100% chaos. Watch as they navigate the new social media-style interface, create ridiculous videos featuring Sam Altman at a Berlin techno rave filled with clowns, and discover why Sam has become the "Tom from MySpace" of AI-generated content.

The hosts explore Sora 2's key features, including the viral "cameo" system that lets you loan your likeness to other creators, the remix functionality, and the surprisingly robust prompt editing capabilities. They demonstrate the platform's strengths (incredibly fast generation, social features, creative possibilities) and weaknesses (no timeline editor for scrubbing through footage, occasional voice mismatches, server delays during peak times).

Key takeaways include practical prompting tips for better results, how to set up and optimize your cameo preferences, and why being descriptive in your prompts makes all the difference. Grant and Corey also discuss the broader implications: Is this OpenAI's answer to TikTok? How does this fit into the AI landscape where every major player now has a social platform? And most importantly - why is everyone making Sam Altman breakdance?

Whether you're AI-curious or a seasoned prompt engineer, you'll learn how to navigate Sora 2's interface, avoid common pitfalls, and maybe even create your own viral AI video. Plus, find out why Corey's "realistic physique was not okay on Sora" and why he had to optimize his cameo settings with ChatGPT's help.

➤ CHAPTERS
0:00 - Introduction: What is Sora 2
1:03 - Sam Altman is the Tom from MySpace of AI
1:57 - Mobile App Tour & Social Features
3:42 - Remix Feature: Editing Sam's Bedtime
4:12 - The Secret to Better Prompting
6:40 - Profile Features & Your Drafts
8:44 - Understanding Cameos
10:40 - How to Set Up Your Cameo
13:00 - Optimizing Cameo Preferences with ChatGPT
15:05 - Live Demo of Creating a Video
18:25 - Using the Edit Feature
20:09 - First Video Results
23:32 - Fixing a Bad Video
26:49 - Finding & Following People
30:33 - Exploring Trending Videos
32:50 - Why OpenAI Built a Social Platform
35:34 - Training Data Implications
38:00 - Voice Input and Pro Prompting Tips
40:02 - The First AI-Native Social Media
45:43 - Final Thoughts

Resources:
- Sora 2 launch: https://openai.com/index/sora-2/
- Download the app: https://apps.apple.com/us/app/sora-by-openai/id6744034028
- Sora app on the web: https://sora.chatgpt.com/explore

P.S.: First comment gets an invite code. Grant has 4 at the moment :)
Will AI really erase half of all white-collar jobs, as Anthropic CEO Dario Amodei warns? We unpack the numbers, the hype, and the hidden opportunities, then hand the mic to Microsoft's Alexia Cambon for fresh research on how to thrive in an AI-saturated workday. Listeners will learn how roles are shifting, which skills stay scarce, and concrete moves to keep their careers future-proof.
After a long wait, Apple is finally in the game with AI. They're launching Apple Intelligence with macOS Sequoia and iOS 18. Pete breaks down some top features and how our devices will change moving forward.

Transcripts: https://www.theneuron.ai/podcast
Subscribe to the best newsletter on AI: https://theneurondaily.com
Listen to The Neuron: https://lnk.to/theneuron
Watch The Neuron on YouTube: https://youtube.com/@theneuronai
The US government is opening up antitrust inquiries into the likes of Nvidia, OpenAI and Microsoft. Who's leading the charge, and what could they be looking at?

Transcripts: https://www.theneuron.ai/podcast
Subscribe to the best newsletter on AI: https://theneurondaily.com
Listen to The Neuron: https://lnk.to/theneuron
Watch The Neuron on YouTube: https://youtube.com/@theneuronai
Comments (1)

Bert Fegg

Musk left OpenAI because Tesla was creating its own AI and HE didn't want to cause a conflict of interest??!? According to the board, founders, and others close to OpenAI, Musk wanted to become CEO and roll the company into Tesla, which no one else wanted. Seeing that Musk has pulled similar moves in the past (how many people know the names of the two actual founders of Tesla, whom Musk fired after replacing the former CEO with himself?), I don't blame them. Truth is important & relative.

Apr 29th