Hidden Layers: AI and the People Behind It

Author: KUNGFU.AI


Description

Hidden Layers: AI and the People Behind It is a series focused on all things artificial intelligence. It is hosted by our Co-Founder and CTO, Ron Green, who uses his 20+ years of AI experience to break down complex topics into digestible, engaging conversations.

If you’re a tech professional or just looking to better understand the world of AI, you’re in the right place. Each episode explores cutting-edge technical advances, discusses the art of the possible, and reviews some of the incredible work being done in the field.

51 Episodes
Is the AI bubble narrative itself a bubble? Billions of dollars are flowing into chips, data centers, and frontier models. From the outside, it can look speculative. But from inside the industry, the signal looks very different. In this episode of Hidden Layers, Ron Green is joined by Michael Wharton and Dr. ZZ Si to discuss what it actually feels like to build with AI today. They explore rapid advances in model capabilities, the growing power of coding agents, and why many organizations are still struggling to absorb the productivity gains AI already enables. They also examine the massive capital investment in AI infrastructure and debate what signals would actually indicate the industry has hit a plateau.

Chapters
00:00 – Is the AI Bubble Narrative Itself a Bubble?
03:00 – Rapid Advances in AI Model Capabilities
05:35 – Coding Agents and the Changing Development Workflow
09:30 – Benchmarks Showing AI Capability Acceleration
16:20 – Verifying AI Outputs and the Limits of Evaluation
18:20 – CAPEX, Chips, and the Dot-Com Bubble Comparison
21:50 – What Would Actually Signal an AI Bubble
26:30 – Why AI May Become a Utility
Are AI coding tools actually replacing programmers, or just changing how software gets built? In this episode of Hidden Layers, Ron Green sits down with Dr. ZZ Si and Michael Wharton to unpack what has shifted with modern coding agents, what has not, and where the hype breaks down. They share concrete examples from their own workflows, including how coding tools have moved from autocomplete to handling larger chunks of work, and why the real bottleneck is no longer writing syntax, but defining intent, architecture, and product direction. The conversation also explores how these tools are reshaping team velocity, why senior engineers tend to get more leverage from AI than junior developers, and the risks of weakening the talent pipeline if companies stop investing in early-career engineers. The episode closes with a candid look at what skills will matter most in an AI-assisted world, how abstraction layers are changing the role of programmers, and whether we may already be near peak computer science graduates.

Chapters
00:00 – The rise of AI coding tools
03:07 – How workflows are changing
06:27 – Team velocity and delivery speed
08:19 – Product thinking vs. engineering execution
09:46 – Is programming actually dying?
11:41 – What “programming” means now
15:23 – Senior vs. junior developer leverage
16:33 – The developer talent pipeline
18:21 – Ego, identity, and automation
19:08 – Before vs. after: building with AI
22:30 – Debugging and fixing issues with AI
24:42 – Spec-writing and product shaping with AI
26:49 – The future of computer science grads
29:20 – Closing reflections
What if the most powerful AI in your organization isn’t the biggest model you can buy, but the one trained on data only you own? In this episode of Hidden Layers, Ron Green is joined by Dr. ZZ Si and Michael Wharton to break down why domain-specific AI models consistently outperform general-purpose systems in real enterprise environments. They explore how narrowly scoped models deliver higher accuracy, lower costs, better reliability, and stronger governance, especially when built on proprietary data. Through real-world examples spanning finance, industrial systems, healthcare, and document understanding, the conversation tackles when to build custom models, when to rely on APIs, and how to identify AI initiatives that actually make it into production. The takeaway is clear: focus beats scale, and specificity is often the fastest path to durable competitive advantage.

Chapters
00:00:00 What Is Domain-Specific AI
00:01:15 General Models vs. Focused Systems
00:02:48 Performance, Cost, and Model Size
00:04:13 Proprietary Data as Advantage
00:07:58 Why AI Fails in Production
00:08:42 Real-World Domain-Specific Examples
00:10:54 How to Decide What to Build
00:14:53 Scale, Accuracy, and Uncertainty
00:18:49 The Spectrum of Domain-Specific AI
00:27:01 What We’d Build Differently Today
2025 was another defining year for artificial intelligence. In this special AI Year in Review episode of Hidden Layers, Ron Green is joined by Emma Pirchalski, Michael Wharton, and Dr. ZZ Si to break down what actually mattered in AI this year. The team recaps the biggest developments from 2025, revisits their predictions from 2024 to see what held up (and what didn’t), and shares honest, experience-driven predictions for 2026. Topics include multimodal models, agents, enterprise adoption, governance gaps, workforce impact, ROI pressure, and where AI is truly headed next. This episode cuts past hype to focus on what leaders, builders, and decision-makers should actually be watching as AI moves from experimentation to execution.

Chapters
00:00:00 Welcome and Introduction to 2025 AI Year in Review
00:00:56 Emma's Working Models Podcast Announcement
00:01:48 Top AI Developments of 2025
00:16:29 Reviewing 2025 Predictions
00:25:08 2026 Predictions
00:36:49 Closing Thoughts
Artificial intelligence is shifting from prediction to autonomy—and “agentic AI” is leading the charge. In this episode of Hidden Layers, KUNGFU.AI’s Ron Green, Dr. ZZ Si, and Michael Wharton unpack what it really means for machines to act on their own, what’s hype versus real progress, and how far we are from true artificial general intelligence (AGI). They discuss how coding agents are transforming development workflows, why agentic AI is both overhyped and underutilized, the challenges of scaling reliable autonomy, the connection between AGI, biology, and lifelong learning, and whether new architectures or cognitive inspiration will take us the rest of the way.

Chapters
00:00 – Intro: From prediction to autonomy
01:30 – What is agentic AI?
05:00 – Coding agents and creative workflows
08:00 – Reliability, risk, and real-world use
12:30 – The agentic hype cycle
16:00 – Why businesses underuse (and overuse) AI
19:00 – Narrow AI and domain-specific intelligence
22:00 – The AGI timeline debate
26:00 – Learning from biology and cognition
33:00 – Lifelong learning and what’s missing today
In this episode of Hidden Layers, Ron is joined by Michael Wharton and Dr. ZZ Si to explore one of the most pressing and puzzling issues in AI: hallucinations. Large language models can tackle advanced topics like medicine, coding, and physics, yet still generate false information with complete confidence. The discussion unpacks why hallucinations happen, whether they’re truly inevitable, and what cutting-edge research says about detecting and reducing them. From OpenAI’s latest paper on the mathematical inevitability of hallucinations to new techniques for real-time detection, the team explores what this means for AI’s reliability in real-world applications.
In this episode of Hidden Layers, we dive into the most important AI developments of the month. We cover OpenAI’s highly anticipated and controversial GPT-5 release, debate where we really are on the AGI timeline, explore groundbreaking new world models like Google’s Genie 3 and Tencent’s Hunyuan GameCraft, and unpack Meta’s DINOv3 image encoder breakthrough.
In this episode of Hidden Layers, host Ron talks with Dr. Hannah Lu, assistant professor at the University of Texas at Austin and core faculty at the Oden Institute for Computational Engineering and Sciences. Dr. Lu is pioneering the use of AI-powered surrogate models to make complex scientific simulations—like CO₂ absorption in geological formations—faster, more accurate, and more useful for real-world decision-making.

They discuss:
How surrogate models work and why they’re so powerful
The challenges of applying AI to physics-based systems
How digital twins and uncertainty quantification are shaping the future of environmental modeling
The intersection of generative AI, physics constraints, and climate science
In this episode of Hidden Layers: Decoded, Ron Green, Dr. ZZ Si, and Michael Wharton unpack July’s biggest AI developments—from flawed reasoning tests to surprising training breakthroughs. Apple’s “Illusion of Thinking” paper draws sharp critiques—from both humans and language models. Meta revives a forgotten 2019 attention mechanism to reshape scaling laws. Video generation tools from Black Forest Labs and others hit new levels of realism and interactivity. Federal courts weigh in on Anthropic and Meta’s use of copyrighted training data. A one-line tweak in training recurrent models dramatically boosts performance on long sequences. Cloudflare announces it will block AI scrapers by default—though it might be too late. From Transformer alternatives to copyright battles, this episode dives into the fast-moving intersection of AI research, engineering, and regulation.
In this episode of Hidden Layers, Ron Green sits down with Dr. Karl Friston—world-renowned neuroscientist and originator of the Free Energy Principle—and Dan Mapes, founder of Verses AI and the Spatial Web Foundation. Together, they explore how neuroscience is beginning to reshape artificial intelligence. They break down complex but powerful ideas like active inference, biologically plausible AI, and collective intelligence. You'll hear how concepts from brain science are influencing next-gen AI architectures and what the future might hold beyond large language models. From the limitations of backpropagation to the promise of decentralized, embodied, and domain-specific models, this is a deep dive into the future of intelligent systems—and the science behind them.
In this episode of Hidden Layers: Decoded, Ron Green, Dr. ZZ Si, and Michael Wharton explore the latest AI breakthroughs, including Sakana AI’s biologically-inspired “Continuous Thought Machines,” the self-taught coding model Absolute Zero, and Salesforce’s unified vision-language system BLIP3-o. They discuss the growing importance of reinforcement learning in a data-constrained world, Google’s diffusion-based language and video models, and Anthropic’s industry-leading interpretability efforts. The team also covers Apple’s AI missteps and a new study revealing why single, well-structured prompts outperform long chat sessions. Throughout, they reflect on alignment risks, emergent reasoning, and the changing shape of model development and training strategy.
In this episode of Hidden Layers, Ron Green sits down with Dr. Risto Miikkulainen — Vice President of AI Research at Cognizant Advanced AI Labs and Professor of Computer Science at UT Austin — to explore the fascinating world of evolutionary computation. They dive deep into the differences between supervised learning, reinforcement learning, and evolutionary techniques, and why evolutionary approaches offer unique advantages for creativity, scalability, and innovation in AI. Dr. Miikkulainen shares real-world examples of unexpected discoveries, from cyber agriculture breakthroughs to designing new AI architectures. They also discuss the future of multi-agent systems, surrogate modeling, and how evolutionary computation could help us better understand the emergence of intelligence and language. Plus, Dr. Miikkulainen previews his upcoming book Neural Evolution: Harnessing Creativity in AI Model Design.
In this episode of Hidden Layers, Ron Green talks with Dr. ZZ Si, Michael Wharton, and Reed Coke about recent AI developments. They cover Anthropic’s work on Claude 3.5 and model interpretability, OpenAI’s GPT-4 image generation and its underlying architecture, and a new approach to latent reasoning from the Max Planck Institute. They also discuss synthetic data in light of NVIDIA’s acquisition of Gretel AI and reflect on the delayed rollout of Apple Intelligence. The conversation explores what these advances reveal about how AI models reason, behave, and can (or can’t) be controlled.
In this episode of Hidden Layers, host Ron Green sits down with Dr. Anna Ivanova, Assistant Professor of Psychology at Georgia Tech and Director of the Language, Intelligence, and Thought Lab. Dr. Ivanova's research explores the intricate relationship between language, cognition, and artificial intelligence, shedding light on how the brain processes language and how large language models (LLMs) compare to human thought.
In this episode of Hidden Layers: Decoded, Ron Green, Dr. ZZ Si, and Michael Wharton explore the latest AI breakthroughs, including DeepSeek’s R1 model, Meta’s work on intuitive physics, and Stanford’s S1 model. They discuss the rise of cost-effective reinforcement learning, diffusion-based language models, and DeepMind’s advances in geometry-solving AI. The team also dives into AI-driven biology with Evo2 and the emergence of civilizations in a Minecraft simulation. Throughout, they reflect on the future of AI, from domain-specific models to the impact of world models on business and science.
In this episode of Hidden Layers, host Ron Green speaks with Dr. Peter Stone, a leading expert in AI and robotics, about the evolution of autonomous systems. They explore multi-agent AI, RoboCup’s ambitious goal of creating robot soccer players that can beat humans by 2050, and the ongoing hardware vs. software challenge in robotics. Dr. Stone shares insights on the power of large language models, the rise of agentic AI, and the importance of balancing neural networks with traditional planning systems. They also discuss AI ethics, alignment, and what the next decade could bring for intelligent agents and general-purpose service robots.
In this episode of Hidden Layers: Decoded, we dive into cutting-edge AI advancements over the last month. Explore Agentic AI and innovations like DeepMind Genie 2 and Cosmos Text2World, transforming virtual environments. Discover breakthroughs like RStar Math and DeepSeek v3, delivering efficiency and performance in reasoning and problem-solving. We also discuss test-time scaling, coding agents, and the drama behind the NeurIPS Best Paper Award.
In this special 2024 AI Year in Review, Ron is joined by AI experts ZZ Si (Co-Founder & Distinguished Engineer), Emma Pirchalski (AI Strategist), and Michael Wharton (VP of Engineering) to reflect on the most important AI moments of 2024. They come together to discuss the defining stories, key breakthroughs, and major challenges that shaped AI in 2024. Ron leads the conversation, drawing out their perspectives on the year's most impactful developments, unfiltered reflections, bold insights, and forward-looking predictions for the future of AI.
In this episode of Hidden Layers: Decoded, Ron Green teams up with KUNGFU.AI's ZZ Si and Michael Wharton to explore groundbreaking advancements in artificial intelligence. From DeepMind’s AlphaFold 3 revolutionizing computational biology to debates on the limits of scaling AI models, the conversation covers all the latest AI news from the last month. Highlights include robotic surgery advancements powered by Johns Hopkins’ AI, NVIDIA’s LLaMA-Mesh for 3D mesh generation, and the rise of generative AI in gaming with the groundbreaking Oasis AI game.
In this episode of "Hidden Layers," Ron Green dives into the transformative impact of AI on software development with Peter Wang, Chief AI and Innovation Officer at Anaconda. They discuss the rise of AI coding assistants, tools like GitHub Copilot and Cursor, and their potential to change how developers work. From coding support to the future of AI-native languages, they explore whether AI could replace programmers or simply elevate them to a new level. Peter shares insights from his pioneering work in the Python data science community and the broader implications of AI in fields like edge computing, data privacy, and open-source development.