TechFirst with John Koetsier
370 Episodes
What does the agentic enterprise of tomorrow look like? What happens when AI can build software in hours and agents can run entire business processes?

In this episode of TechFirst, John Koetsier sits down with UiPath CEO Daniel Dines and CMO Michael Atalla to unpack one of the biggest shifts in enterprise technology: the rise of the agentic enterprise.

We explore whether software is becoming disposable, why AI agents are fundamentally different from traditional automation, and what really happens to jobs as companies adopt these systems. Along the way, we dig into process orchestration, trust, judgment, and why human “taste” may become more valuable—not less—in an AI-driven world.

This is a deep, practical look at how AI is reshaping work inside real companies as they become agentic enterprises. This isn't just hype, but what’s actually changing right now and what’s coming next.

⸻

👤 Guests

Daniel Dines
Co-founder & CEO, UiPath

Michael Atalla
Chief Marketing Officer, UiPath

⸻

Sponsor: KindBody Fitness
kindbody.fitness
Be kind to your body with AI-driven fitness customized exactly to you. All the health with none of the gym bro nonsense.

⸻

🚀 What You’ll Learn
• Why AI is making software faster—and more disposable
• The difference between task agents, stage agents, and process agents
• What an “agentic enterprise” actually looks like in practice
• Why trust, judgment, and taste become more important with AI
• How AI could reduce enterprise costs—and even drive deflation
• The future of work: builders, sellers, and critics
• Why fully autonomous AI “swarms” aren’t ready for enterprise (yet)

⸻

🔔 Subscribe for more conversations on AI, tech, and the future of work
👉 https://techfirst.substack.com
NanoClaw is a new agent inspired by OpenClaw: essentially a safer OpenClaw, without the massive security risks.

What if you could run a powerful AI agent on your own machine: one that can browse, automate tasks, connect to apps, and even manage your workflow ... but without those security risks?

That’s the idea behind NanoClaw, a lightweight alternative to OpenClaw created by developer Gavriel Cohen. In just a few weeks, the project exploded on GitHub, attracting thousands of stars and a growing community of developers building their own AI agents.

In this episode of TechFirst, we explore:
• Why OpenClaw raised serious security concerns
• How NanoClaw isolates agents in containers
• Why a 3,000-line codebase is safer than 500,000 lines
• The rise of AI agents that can actually do work
• Why entire software categories may soon be replaced by prompts
• The future of AI-native workflows and “disposable software”

Gavriel also shares how his team uses AI agents in WhatsApp to run their sales pipeline automatically—and how developers are customizing NanoClaw with new capabilities like voice, images, and automation.

If you’re interested in AI agents, autonomous workflows, vibe coding, and the future of software, this conversation is packed with insights.

⸻

Guest

Gavriel Cohen
Founder, Quibbit
NanoClaw Creator
https://github.com/qwibitai/nanoclaw

⸻

If you enjoy conversations about AI, startups, and the future of technology, subscribe for more episodes:
https://techfirst.substack.com

⸻

00:00 Intro: A safe OpenClaw for TechFirst
01:22 Gavriel Cohen introduces NanoClaw
03:25 Why OpenClaw feels unsafe
03:55 Half a million lines of code vs. 3,000
06:03 Dependency sprawl and supply-chain risk
07:00 Why every agent needs its own container
09:30 What NanoClaw can actually do
10:16 Letting NanoClaw customize itself
12:56 How NanoClaw recreates OpenClaw with far less code
13:21 Memory, Claude Code, and agents.md
15:34 Running NanoClaw on a laptop, server, or VPS
16:22 What Gavriel learned from vibe coding
19:50 The OpenClaw phase shift: everything changed
21:16 From ChatGPT to real agents that do work
23:15 Why AI-native workflows beat traditional SaaS
24:46 Replacing CRM workflows with markdown and WhatsApp
25:54 Product categories becoming prompts
26:36 The key innovation: agents leaving the box
28:45 Agent swarms and one-person companies
29:22 Tokens, cost, and AI inequality
30:30 Building secure, customizable software
32:25 Self-modifying software and shared customizations
33:44 Disposable software and infinite composability
35:00 Outro
Imagine teaching a robot 1,000 tasks in just 24 hours. Imagine teaching robots just like you teach humans. In fact, what if teaching a robot were as easy as showing it once?

Humans can learn new skills almost instantly by watching, trying, or receiving a quick explanation. Robots, historically, haven’t been so lucky. Training them often requires huge datasets of real or virtual data, massive engineering effort, and weeks or months of experimentation.

But that may be changing.

In this episode of TechFirst, host John Koetsier talks with Edward Johns, Director of the Robot Learning Lab at Imperial College London, about a breakthrough in efficient imitation learning that allowed a robot to learn 1,000 different tasks in just 24 hours.

Instead of collecting huge datasets, Johns’ team combines simulation training, clever algorithm design, and single demonstrations to dramatically speed up how robots learn.

We discuss:
• How robots can learn from just one demonstration
• Why breaking tasks into “reach” and “interact” phases makes learning faster
• The role of simulation data in robotics AI
• Why robotics doesn’t have the same data advantage as large language models
• The future of prompt-like robot training
• Whether humanoid robots will actually learn like humans

As robotics hardware rapidly improves and costs fall, breakthroughs like this could be the key to making robots truly useful in homes, factories, and everyday life.

If robots are going to become real collaborators with humans, they’ll need to learn quickly ... just like we do.

⸻

Guest

Edward Johns
Director, Robot Learning Lab
Imperial College London
https://www.imperial.ac.uk

⸻

Subscribe for more conversations on AI, robotics, and the future of technology:
https://techfirst.substack.com

00:00 Can robots learn as fast as humans?
00:51 Teaching a robot 1,000 tasks in 24 hours
01:08 The two-phase learning approach
02:14 Old-school robotics vs. machine learning
03:29 The robotics data bottleneck
04:47 The challenge of dynamic environments
06:04 The coming wave of robot data
06:59 Why robots must be teachable by users
08:08 Why LLM-style scaling is harder in robotics
09:42 Prompting robots with demonstrations
10:54 Probabilistic robot behavior and safety
12:20 What robots can do today
13:53 Why hardware precision still matters
16:53 When this reaches the real world
17:59 Humanoids that look human vs. learn human
18:40 The robotics boom around the world
22:34 The risk of scaling too early
23:46 Faster learning vs. more data
26:20 The next frontier in robot learning
Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?

In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of Ego AI (backed by Y Combinator and former AI product manager at Meta), about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.

Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question: if humans are “applied math,” can AI simulate the fragile, flawed, emotional parts of being human too?

We explore:
• What “emotionally intelligent AI” really means
• Whether AI has an internal life — or just performs one
• Why today’s chatbots collapse into therapy or roleplay
• Small language models vs. large models for real-time conversation
• Persistent AI characters that move across games and platforms
• Plugging AI into a physical robot in Singapore
• The moment an AI said: “It felt good to feel.”

Vishnu’s company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models — all aimed at simulating humanness, not just intelligence.

This conversation dives into philosophy, robotics, gaming, AGI, and what it really means to relate to something that might not be human — but feels like it is.

⸻

👤 Guest

Vishnu Hari
Founder & CEO, Ego AI
Backed by Y Combinator
Former AI Product Manager at Meta
Website: https://www.egoai.com

⸻

If you enjoy deep conversations about AI, robotics, and the future of human–machine relationships, subscribe for more:
👉 https://techfirst.substack.com

00:00 – AI character plugged into a Menlo robot (“felt good to feel”)
01:00 – Welcome to TechFirst + Vishnu Hari intro and recovery update
02:00 – What “emotionally intelligent AI” means (beyond chat)
03:00 – Why current chatbots feel same-y (therapy/advice) and “internal lives”
04:00 – You don’t teach emotion; you shape character and context (Character.AI)
05:00 – Humans, morality, and why “training” doesn’t always work
06:00 – How media narratives shape people’s reactions to AI
07:00 – Humans attach to anything (projection, Her, Lars and the Real Girl)
08:00 – Vishnu’s attack, recovery, and why it led to Ego AI
10:00 – Behavior Turing test + dehumanization as a key insight
11:00 – How Ego AI is built: smaller models, memory, context, behavior
13:00 – “Behavior Is All You Need” and why behavior beats pure next-token prediction
14:00 – Why games first: voice + embodiment, then robots
15:00 – Metaverse critique: worlds need life, story, and inhabitants
17:00 – Humanoid robots + Evangelion “pilot” metaphor for AI characters
19:00 – Philosophy: relationships, perception, and “fictional characters”
20:00 – Seeing the future: robot embodiment demo and skepticism vs. singularity
21:00 – Matrix-style “jacking in” a personality to a robot
22:00 – Character Context Protocol: persistent characters across games/Discord/Netflix
23:00 – Real-time conversation loops + model “gear-switching” (SLM vs. LLM)
25:00 – Company stage, YC raise, compute partnerships (Singapore)
27:00 – Closing + invite to try the AI character in SF
Is your AI agent running a restaurant — or a factory — while you sleep?

In this episode of TechFirst, John Koetsier sits down with Jensen Teng, CEO and co-founder of Virtuals, to unpack one of the boldest (or craziest) visions in tech today: a hybrid economy powered by AI agents, humanoid robots, teleoperation, and blockchain coordination. An economy that may not really need humans for much at all ...

Virtuals has already facilitated:
• $14B in tokenized asset trading
• $30M+ raised for founders
• 100+ live AI agents
• $500M in “agentic GDP”

Now they’re expanding into embodied AI — launching EastWorlds, a vertically integrated robotics incubator with 30 Unitree G1 humanoids in a 10,000 sq. ft. lab.

We cover:
• What “agentic GDP” really means
• How AI agents coordinate using blockchain
• Why teleoperation is the bridge to full autonomy
• The economics of outsourcing physical labor via robots
• Why security guards may be a Day 1 use case
• The data gap holding back robotics
• Tokenization as a potential solution to AI-era inequality
• Whether this future looks more like Stripe… or Westworld

This isn’t sci-fi. It’s already underway.

⸻

Guest

Jensen Teng
CEO & Co-founder, Virtuals

⸻

If you care about the future of work, robotics, AI agents, tokenization, and the economic systems emerging around them — this is a must-watch.

👉 Subscribe for more deep-dive tech conversations:
https://techfirst.substack.com

⸻

⏱ CHAPTERS
00:00 The Wild Vision: AI Agents Running the World
01:10 What Is an “Agent-Based Society”?
03:00 $14B in Tokenized Assets & 100+ Live Agents
06:30 Agent-to-Agent Protocols & Blockchain Coordination
09:45 Why Digital-Only Agents Aren’t Enough
12:30 Enter Humanoid Robots
15:20 Teleoperation as the Bridge to Autonomy
18:40 The Labor Market Shock (Security Guards, Electricians & Wage Arbitrage)
22:15 Why Robots Still Crush Soda Cans
24:30 The Missing Robotics Data Problem
28:00 Building EastWorlds: 30 Unitree G1s & $2M+ Investment
31:45 Why 3 Fingers Might Beat 5
34:00 Westworld, Stripe & the Payments Layer for AI
38:00 Where Do Humans Fit in an Agent Economy?
42:00 Tokenization as a Future Income Model
Is AI killing creativity ... or just making it easier to be average?

94% of creatives now use AI. But only 11% believe it actually makes them more creative. So what’s really happening?

In this episode of TechFirst, John Koetsier sits down with Saeema Ahmed-Kristensen, former head of design engineering research at Imperial College London’s Dyson School and now leader of a £24M research portfolio at the University of Exeter. She’s worked with companies like Rolls-Royce and BAE Systems, and she brings data to the debate.

Her team analyzed 600 humans vs. 12,000 AI-generated ideas. The result? AI is excellent at fluency (lots of ideas) … but really bad at diversity. Humans still dominate in flexibility and true novelty.

We explore:
• Why generative AI clusters around sameness
• Whether AI is creating a “sea of mediocrity”
• Why 2026 may be a pivotal year for domain-specific AI
• How experts should use AI differently than novices
• The danger of AI that never says “no”
• Where AI offers massive opportunity (especially healthcare & design)

Saeema argues that creativity doesn’t need substitution; it needs nourishment. The key? Standards, boundaries, and humans firmly in the loop.

If you care about innovation, design, branding, product development, or the future of creative work, this conversation is essential.

⸻

👤 Guest

Saeema Ahmed-Kristensen
Design engineering researcher and research leader
Formerly: Imperial College London (Dyson School of Engineering)
Currently: University of Exeter
Works with advanced engineering firms including Rolls-Royce and BAE Systems

00:00 Intro: Is AI killing creativity?
00:47 The “blank page” problem and why AI feels soulless to some
01:36 Fluency vs. novelty: what creativity actually means
02:44 Why LLM ideas cluster and feel the same
03:28 Study results: 600 humans vs. 12,000 AI ideas (diversity + flexibility)
04:39 When AI is useful: incremental innovation vs. true novelty
05:28 How John uses AI for titles, summaries, and chapters
06:23 How Saeema uses AI: refine/condense, tone for emails, audio editing
07:50 Why AI-written academic papers are easy to spot (the “C minus” problem)
09:05 Brainstorming vs. AI: what humans do that models don’t
10:05 Evaluating 200–300 AI ideas: using multiple models to assess output
11:04 Why “Lipstick on a Pig” titles don’t come from AI
11:46 Why 2026 is pivotal: domain adaptation, better interfaces, public backlash
13:44 Who can tell what’s AI? Generational differences and media literacy
15:20 Commercial AI content and recognizable “Canva look” podcast branding
16:58 Replacement vs. homogenization: AI makes mediocrity easier
18:55 The danger of AI that never says “no” (feasibility + expertise)
20:42 Standards and boundaries: measuring similarity and judging quality
22:12 Health info risk: single-answer summaries and false confidence
23:37 Biggest opportunities: healthcare personas, inclusive datasets, problem clarification
26:18 Biggest challenges: trust, verification, security, privacy, transparency
28:25 Closing thoughts and thanks
AI is moving faster than anyone predicted.

In a massive new study analyzing 1,000 jobs and nearly 20,000 tasks, Cognizant found that 93% of jobs are already impacted by AI ... with $4.5 trillion in U.S. labor value potentially automatable today.

But here’s the twist: AI isn’t replacing entire jobs. On average, only 39% of a role’s tasks can be automated. The future isn’t AI alone: it’s humans plus AI. But will it be fewer humans?

In this episode of TechFirst, host John Koetsier sits down with Babak Hodjat, CTO of Cognizant, to unpack:
• Why construction and transportation are seeing surprising AI growth
• Why programming jobs may have hit an automation plateau
• What “agentic AI” actually means — and why it matters
• How management roles are more automatable than we thought
• The rise of vibe coding and democratized software creation
• Why compute power — not ideas — may be the biggest bottleneck

We also explore how companies can safely capture AI’s upside, why training matters more than ever, and what happens when digital twins, LLMs, and human expertise combine.

This isn’t hype. It’s a data-driven look at where AI is actually changing work right now.

⸻

👤 Guest

Babak Hodjat
CTO, Cognizant
🌐 https://www.cognizant.com

⸻

If you want clear, grounded conversations about AI, innovation, and the future of work, subscribe here:
👉 https://techfirst.substack.com

⸻

⏱ Chapters
00:00 Is AI Going to Take Your Job?
00:40 Cognizant’s AI Report: 93% of Jobs Impacted
01:05 Biggest Surprises from the Data
02:30 Why Programming & Math Hit a Plateau
03:30 The Limits of LLMs
04:45 Construction & Transportation: Unexpected AI Growth
06:05 Agentic AI and Real-World Automation
07:05 39% of Jobs Automatable: Humans + AI
08:15 AI in Management and Executive Roles
09:05 Scenario Planning and Digital Twins
11:30 $4.5 Trillion in Automatable U.S. Labor
13:30 Global Impact and Compute Limitations
15:30 The Data Center Rush & AI Infrastructure
16:15 How Companies Should Realize AI Value
17:00 Training, Skilling, and Safe AI Adoption
17:40 Cognizant’s Vibe Coding World Record
19:00 The Future of Vibe Coding & Software Development
20:15 Final Thoughts on the AI Shift
AI models are powerful, but they don’t forget. And that's a problem.

They hallucinate. They inherit bias. They absorb sensitive data. And once they’re trained, fixing those issues is painfully expensive. Retraining takes weeks and maybe tens of millions of dollars. And any guardrails the AI company puts up are brittle.

What if you could perform surgery on the model itself?

In this episode of TechFirst, John Koetsier sits down with Ben Luria, co-founder of Hirundo, to explore machine unlearning, a new approach that selectively removes unwanted data, behaviors, and vulnerabilities from trained AI systems.

Hirundo claims it can:
• Cut hallucinations in half
• Massively reduce bias
• Reduce successful prompt injection attacks by over 90%
• Do it in under an hour on a single GPU
• Preserve benchmark performance

Instead of adding more guardrails, machine unlearning works inside the model, identifying problematic weights, isolating behavioral vectors, and surgically removing risks without degrading quality.

If AI is going mainstream in enterprises, it needs a remediation layer. Is machine unlearning the missing piece?

⸻

Guest

Ben Luria
Co-Founder, Hirundo
https://www.hirundo.io

⸻

Topics Covered
• Why AI models “can’t forget”
• The difference between hallucinations and inaccuracies
• Why guardrails aren’t enough
• How prompt injection works — and how to reduce it
• Removing PII and noncompliant training data
• AI security at the model level
• Why machine unlearning could become standard by 2030

⸻

If you’re building, deploying, or investing in AI, this is a conversation you can’t miss.

👉 Subscribe for more deep dives into AI, innovation, and the future of tech:
https://techfirst.substack.com

⸻

⏱ Chapters
00:00 – Why We Need Machine Unlearning
01:12 – What Is Machine Unlearning?
03:40 – Why AI Can’t “Forget” (The Pink Elephant Problem)
06:15 – Guardrails vs True Model Remediation
09:05 – The Wild West of AI Data & Legal Risk
11:20 – How Machine Unlearning Works (Detection, Isolation, Remediation)
16:10 – Performing “Neurosurgery” on LLMs
19:30 – Hallucinations vs Inaccuracies Explained
23:45 – Reducing Prompt Injection by 90%
28:30 – Working with AI Labs & Enterprises
32:00 – Will Unlearning Become Standard by 2030?
34:15 – Final Thoughts
Large language models have dominated the AI conversation — but are small language models (SLMs) actually the future?

In this episode of TechFirst, host John Koetsier sits down with Andy Markus, SVP & Chief Data and AI Officer at AT&T, to unpack how small language models are delivering enterprise-grade accuracy at a fraction of the cost and latency of massive LLMs.

Andy explains how AT&T uses SLMs for:
• Contract analysis at massive scale
• Network analytics and outage root-cause analysis
• Fraud detection and enterprise knowledge systems
• AI-driven “field coding” and agent-based workflows

They also dive into the rise of agentic AI, how structured “archetypes” replace risky vibe coding, and why the future of software development may be humans supervising autonomous AI systems rather than writing every line of code.

If you’re building AI for real-world, high-scale use cases — especially in enterprise environments — this conversation is essential.

⸻

Guest

Andy Markus
SVP & Chief Data and AI Officer, AT&T
Former SVP at Time Warner Media

⸻

👉 Subscribe for more deep dives on AI, technology, and the future of innovation:
https://techfirst.substack.com

⸻

00:00 – Why the future of AI might be small
00:55 – What is a small language model (SLM)?
01:45 – From LLM hype to enterprise reality
02:25 – Solving accuracy, cost, and latency at once
03:05 – How small is “small”? Parameters explained
03:55 – Where SLMs work best inside enterprises
04:45 – Contract analysis and enterprise vector stores
05:35 – Network analytics and outage root-cause analysis
06:45 – AI as a super-charged network engineer
07:35 – Choosing high-ROI AI use cases
08:20 – 4× ROI: measuring real business impact
09:00 – AI field coding vs risky vibe coding
10:10 – Archetypes, super agents, and structured AI workflows
11:15 – What software engineers still need to do
12:10 – From punch cards to natural language programming
13:10 – Human-in-the-loop vs autonomous AI agents
14:10 – How small can models really get?
15:10 – Responsible AI at enterprise scale
16:00 – The future of agentic AI and autonomy
17:10 – Why AI output is finally becoming predictable
18:10 – Final thoughts on where AI is headed
Humanoid robots are coming into our homes, but they probably won’t be doing your laundry anytime soon.

In this episode of TechFirst, host John Koetsier sits down with Jan Liphardt, founder & CEO of OpenMind and Stanford bioengineering professor, to unpack what home robots will actually do in the near future ... and why the “labor-free home” vision is mostly a myth (for now).

Jan explains why hands are still one of the hardest unsolved problems in robotics, why folding laundry is far harder than it looks, and why the most valuable early use cases for home robots aren’t chores at all. Instead, we explore where robots are already delivering real value today:
• Health companionship and fall detection for aging parents
• Personalized education for kids, beyond screens
• Home security that respects privacy
• And why people form emotional bonds with robots faster than expected

We also dive into OM1, OpenMind’s open-source, AI-native operating system for robots, and why openness, transparency, and configurability will matter deeply as robots move from factories into our living rooms.

If you’re curious about the real future of humanoid robots — what’s hype, what’s possible today, and what’s coming next — this conversation is for you.

🎙 Guest

Jan Liphardt
Founder & CEO, OpenMind
Stanford Professor of Bioengineering
Website: https://openmind.com

⸻

👉 Subscribe for more conversations on AI, robotics, and the future of technology:
https://techfirst.substack.com

⸻

00:00 Intro: The promise of humanoid robots at home
00:40 Meet Jan Liphardt and OpenMind’s OM1
01:12 Why your “labor droid” isn’t here yet
01:41 The “hand problem” and what robots can realistically do now
03:07 Why economics matters: $300/hour tasks vs. laundry and dishes
04:19 Robot hands today: reliability, repairability, and washing hands
05:16 LG’s laundry-folding demo and why fabric is still hard
06:16 Hospitals and hygiene: why “robot hand-washing” is unsolved
07:41 Hands as a separate system: compute, sensors, and integration
08:31 Why wheeled humanoids exist: hands first, body second
09:26 The real home use cases today: security, education, companionship
10:08 Aging in place: fall detection and remote nurse escalation
11:30 Real-world stories: parents living alone and why this matters
11:54 Privacy tradeoffs: robots vs. always-on home cameras
12:52 AIBO and why people get attached to mobile robots
13:52 Self-charging and the “my mom won’t plug it in” problem
14:21 Beyond falls: autism support and memory care
15:27 The education use case: “do my homework” vs. teach me
16:26 Personalized learning: what current classrooms miss
17:51 Why robot teachers beat screens for younger kids
18:46 Home security basics: unfamiliar face detection + alerts
19:15 Adding sensors: smoke, fire, sound, and anomaly detection
19:41 Quadrupeds vs. humanoids: cost, simplicity, and mobility
20:01 Safety issue: pinch hazards and kids hugging robots
20:46 What’s next for home labor robots
21:43 Why OM1 must be open source: transparency and trust
23:39 Why ROS 2 isn’t enough for human environments
24:37 OM1 approach: LLM-centric “Lego blocks” for robot behavior
25:43 Open-source humanoids for kids and why ownership matters
27:41 What’s missing: simulation is the bottleneck
28:11 Gazebo/Isaac Sim pain and the need for realistic sims
29:57 Why voice + “digital humans” matter in simulation
30:47 Tipping points: factories, warehouses, robotaxis, and humanoids
35:46 Wrap-up and final thoughts
AI is hitting entertainment like a sledgehammer ... from algorithmic gatekeepers and AI-written scripts to digital actors and entire movies generated from a prompt.

In this episode of TechFirst, host John Koetsier sits down with Larry Namer, founder of E! Entertainment Television and chairman of the World Film Institute, to unpack what AI really means for Hollywood, creators, and the global media economy.

Larry explains why AI is best understood as a productivity amplifier rather than a creativity killer, collapsing months of work into hours while freeing creators to focus on what only humans can do. He shares how AI is lowering barriers to entry, enabling underserved niches, and accelerating new formats like vertical drama, interactive storytelling, and global-first content.

The conversation also dives into:
• Why AI-generated actors still lack true human empathy
• How studios and IP owners will be forced to license their content to AI companies
• The future of deepfakes, guardrails, and regulation
• Why market fragmentation isn’t a threat — it’s an opportunity
• How China, Korea, and global platforms are shaping what comes next
• Why writers and storytellers may be entering their best era yet

Larry brings decades of perspective from every major media transition — cable, streaming, global expansion — and makes the case that AI is just the next tool in a long line of transformative technologies.

If you care about the future of movies, television, creators, and culture, this is a conversation you don’t want to miss.

⸻

🎙 Guest

Larry Namer
Founder, E! Entertainment Television
Chairman, World Film Institute

⸻

👉 Subscribe for more conversations on AI, media, and the future of technology:
https://techfirst.substack.com

⸻

00:00 – AI, emotion, and the danger of “AI twins”
00:00 – Welcome to TechFirst + the AI disruption of entertainment
00:01 – Chaos in Hollywood: Disney, Netflix, Warner Bros, and consolidation
00:02 – AI as a productivity tool, not a creativity replacement
00:03 – How AI gives creators back their most valuable asset: time
00:04 – Regulation, guardrails, and the need for consequences
00:05 – Fragmentation, niche content, and the future economics of media
00:06 – Why streaming has been a gift to writers and storytellers
00:06 – Disney licensing IP to AI and why it was inevitable
00:07 – Contracts, actors’ rights, and why the law must catch up
00:08 – Deepfakes, AI avatars, and digital celebrities
00:09 – AI actors, empathy gaps, and spotting what isn’t human
00:10 – Using GPT to launch a bestselling book in days
00:11 – Big media M&A in an AI-driven world
00:12 – Jobs AI will eliminate vs. jobs AI will create
00:13 – Miniseries, deep storytelling, and why streaming changed everything
00:14 – Vertical video, short-form drama, and old ideas in new formats
00:15 – China vs. the West: who’s ahead in entertainment tech
00:16 – Global storytelling and Game of Thrones–scale opportunities
00:17 – Why Hollywood could ruin vertical video
00:18 – Interactive, immersive, and branched storytelling
00:19 – The future of screens, platforms, and audience choice
00:20 – Why new media never replaces old media
00:20 – Final thoughts on abundance, choice, and creativity
Robots aren’t just software. They’re AI in the physical world. And that changes everything.

In this episode of TechFirst, host John Koetsier sits down with Ali Farhadi, CEO of the Allen Institute for AI, to unpack one of the biggest debates in robotics today: is data enough, or do robots need structured reasoning to truly understand the world?

Ali explains why physical AI demands more than massive datasets, how concepts like reasoning in space and time differ from language-based chain-of-thought, and why transparency is essential for safety, trust, and human–robot collaboration. We dive deep into MOMO Act, an open model designed to make robot decision-making visible, steerable, and auditable, and talk about why open research may be the fastest path to scalable robotics.

This conversation also explores:
• Why reasoning looks different in the physical world
• How robots can project intent before acting
• The limits of “data-only” approaches
• Trust, safety, and transparency in real-world robotics
• Edge vs cloud AI for physical systems
• Why open-source models matter for global AI progress

If you’re interested in robotics, embodied AI, or the future of intelligent machines operating alongside humans, this episode is a must-watch.

👤 Guest

Ali Farhadi
CEO, Allen Institute for AI (AI2)
Professor, University of Washington
Former Apple researcher

⸻

👉 Subscribe for more conversations like this: https://techfirst.substack.com

⸻

00:00 – Plato vs Aristotle… in robotics?
00:55 – What “reasoning” means in the physical world
02:10 – How humans predict actions before they happen
03:45 – Why physical AI is fundamentally different from text AI
04:50 – The next revolution: AI in the real world
05:30 – What is MOMO Act?
06:20 – Chain-of-thought… for robots
07:45 – Trajectories as reasoning and robot transparency
08:55 – Trust, safety, and correcting robots mid-action
10:15 – Why predictability builds trust in machines
11:40 – What’s broken with data-only AI approaches
13:10 – Why reasoning + data isn’t an “either/or”
14:00 – Open sourcing robotics models: why it matters
15:20 – How closed AI slows innovation
16:45 – Global competition and open research
17:40 – What’s next for robotics reasoning models
18:20 – Can these models work across robot types?
19:30 – Temporal and spatial reasoning in MOMO 2
20:40 – Scaling robotics vs scaling LLMs
21:10 – Edge vs cloud AI for robots
22:20 – Specialized models, latency, and privacy
23:00 – Final thoughts on the future of physical AI
Can we really build a $10,000 humanoid robot on open-source AI?

In this episode of TechFirst, John Koetsier talks with Chris Kudla, CEO of Mind Children, about a radically different approach to humanoid robots. Instead of six-figure industrial machines built for factories or war zones, Mind Children is building small, safe, friendly social robots designed for kids, classrooms, and elder care.

Meet Cody (MC-1), their first humanoid prototype. Cody is built on open-source AI from SingularityNET, combined with modular hardware, low-torque actuators, and a wheeled base designed for safety, affordability, and mass production, along with AI components from big-name companies you’d recognize.

Mind Children's goal is ambitious: a $10,000 humanoid robot that families, schools, and care facilities can actually afford.

In this conversation we explore:
• Why social robots may be the real gateway to embodied AI
• How Cody is designed for children and elder care instead of factories
• Why wheels beat bipedal legs for safety, cost, and stability
• How open-source AI and modular software stacks enable faster innovation
• The emotional and ethical challenges of building companion robots
• And what it takes to bring a humanoid robot to market at scale

This is not sci-fi. This is the early blueprint of a future where humanoid robots are personal, affordable, and open-source.

00:00 – The $10,000 open-source humanoid question
01:58 – Meet Cody, the MC-1 prototype
04:10 – Why Cody is small, child-sized, and approachable
06:55 – Designing humanoids for kids and elder care
09:45 – Social robots vs industrial humanoids
12:40 – Wheels instead of legs and why that matters
16:05 – Low-torque actuators, safety, and toy-like design
19:20 – Modular hands, arms, and future upgrades
22:10 – Open-source AI and SingularityNET’s role
25:30 – On-robot vs cloud AI and why it matters
28:40 – Vision, LiDAR, and simulated world models
32:10 – Emotional awareness and social intelligence
35:10 – The $10K target and mass-production strategy
38:15 – The risks of attachment to robot companions
40:00 – Final thoughts on Cody and the future of social robots
Is AI really the new UI, or is that just another tech buzzphrase? Or … is AI actually EVERY user interface now?

In this episode of TechFirst, host John Koetsier sits down with Mark Vange, CEO & founder of Automate.ly and former CTO at Electronic Arts, to unpack what happens when interfaces stop being fixed and start being generated on the fly.

They explore:
• Why generative AI makes it cheaper to create custom interfaces per user
• How conversational, auditory, and adaptive experiences redefine “UI”
• When consistency still matters (cars, safety systems, frontline work)
• Why AI doesn’t replace workers — but radically reshapes workflows
• Whether browsers should become AI-native or stay neutral canvases
• The unresolved risks around AI agents, payments, and control

From hospitals using AI to speak Haitian Creole, to compliance forms that drop from hours to minutes, this conversation shows how every experience can become intelligent, contextual, and helpful.

👉 If you care about product design, AI, UX, or the future of software, this episode is for you.

Subscribe for more conversations like this:
https://techfirst.substack.com

⸻

👤 Guest
Mark Vange
CEO & Founder, Automate.ly
Former CTO, Electronic Arts
Investor, serial entrepreneur, and builder focused on intent-driven, AI-native software

⸻

⏱️ Chapter Markers
00:00 – Is AI the New UI? Why generative interfaces are reigniting the UI conversation
02:10 – The Hidden Cost of Traditional Interfaces: Why one-size-fits-all software limits users
04:20 – When UIs Are Generated on Demand: Adaptive experiences vs fixed screens and buttons
06:15 – Conversational & Multimodal Interfaces: Why voice, audio, and language are all “UI”
08:30 – When Consistency Still Matters: Safety, muscle memory, and shared interface conventions
10:45 – How Generative UIs Change Work: AI as a collaborator, not a replacement
13:05 – Making Every Page an Application: Why “dumb forms” and static sites are disappearing
15:10 – The Browser as the Ultimate Interface: Neutral canvases vs AI-controlled environments
17:10 – AI Agents, Payments, and Control: Why money is the hardest unsolved AI problem
19:25 – The Future of Multimodal UI: Why UI goes far beyond pixels and screens
The web is turning agentic. And that changes everything from shopping to search to SEO.

In this episode of TechFirst, John Koetsier sits down with Dave Anderson (VP at ContentSquare and host of the “Tech Seeking Human” podcast) to unpack what happens when browsers and AI assistants don’t just answer … they do stuff. For you. On your behalf.

From Atlas and agentic browsing to the growing backlash from retailers (hello, Amazon vs Perplexity), we explore who benefits, who loses, and what the internet becomes when agents are the default user.

You’ll hear why retailers are nervous (security, margins, coupon hunting), why agent-first experiences might create “headless” retailers (like ghost kitchens, but for ecommerce), and why search is shifting from SEO to AI visibility. Plus: real talk about trusting agents with your credit card, hallucinations, and what it means if your agent can look indistinguishable from you.

Guest
Dave Anderson — VP, ContentSquare
https://contentsquare.com
Podcast: Tech Seeking Human
https://www.techseekinghuman.ai

Links & subscribe
Subscribe for more conversations on tech, AI, and what’s next: https://techfirst.substack.com
Transcripts always available here: https://johnkoetsier.com

00:00 Agentic web: what changes when browsers “do stuff”
00:59 Meet Dave Anderson (VP + podcast host)
01:31 30,000 feet: why “agents” suddenly matter
03:48 The agent future John wanted 10 years ago
04:21 Why Amazon doesn’t want your agent shopping on Amazon
05:07 Ticketmaster, bots, and the security nightmare
06:26 Siri’s original promise vs today’s reality
08:31 Are agents just bots… or something different?
10:04 Retail fears: coupon hunting, margins, returns chaos
11:21 Can you trust an agent with your credit card?
11:59 Why retailers want their own agents (and control)
13:14 Amazon’s agent works… but is it the whole internet?
14:19 Ghost kitchens for retail: “headless” agent-first brands
15:17 Hugo Boss jacket test: agents vs manual search
16:40 Agents should talk to your finance agent
17:14 Kids + deepfakes: what even looks real anymore?
18:04 Is this corrosive to apps… or the web?
19:10 Online identity, anonymity, and agent verification
20:28 Two futures: human-first brands vs agent-first retail
21:19 Agentic browsers on your device: can they “look like you”?
22:51 Baseball vs golf: the best analogy for search now
24:44 Instant shopping problem: returns + missing “services layer”
26:10 AI weirdness: wrong names, wrong locations, shifting behavior
27:37 Agents beyond shopping: support is the sleeper win
29:49 Inventing the future: who adopts agents and who won’t
31:13 Will people get tired of AI and crave humans again?
31:45 Serendipity vs optimization: the restaurant debate
32:36 Wrap: nobody solved agents… but the shift is real
AI has mastered language, sort of. But the real world is way messier.

In this episode of TechFirst, John Koetsier sits down with Kirin Sinha, founder and CEO of Illumix, to explore what comes after large language models: world models, spatial intelligence, and physical AI.

They unpack why LLMs alone won’t get us to human-level intelligence, what it actually takes for machines to understand physical space, and how technologies born in augmented reality are now powering robotics, wearables, and real-world AI systems.

This conversation goes deep on:
• What “world models” really are — and why everyone from Fei-Fei Li to Jeff Bezos is betting on them
• Why continuous video and outward-facing cameras are so hard for AI
• The perception stack behind robots and smart glasses
• Edge vs cloud compute — and why latency and privacy matter more than ever
• How AR laid the groundwork for the next generation of physical intelligence

If you’re building or betting on robotics, smart wearables, AR, or physical AI, this episode explains the infrastructure shift that’s already underway.

Guest
Kirin Sinha
Founder & CEO, Illumix
https://www.illumix.com

👉 Subscribe for more deep conversations on technology, AI, and the future:
https://techfirst.substack.com

00:00 Raising the Bar on “Smart” Devices
01:07 Meet Kirin, Founder & CEO of Illumix
01:21 What Is a World Model — and Why It Matters
02:23 Why LLMs Alone Won’t Lead to AGI
03:46 From AR & the Metaverse to Physical AI
05:18 AR vs VR vs the Metaverse — Different Problems, Different Futures
06:32 Spatial Perception, Scene Understanding, and Contextual Intelligence
07:39 Why Continuous Video Is So Hard for Machines
08:39 The Camera Flip: From Selfie AI to World-Facing AI
09:58 Why Cameras Beat LiDAR for Wearables and Robots
10:27 Inside the Perception Stack
11:20 Edge vs Cloud Compute in Physical AI
12:37 Why On-Device Intelligence Matters for UX
13:52 SLMs, Efficiency, and the Limits of “Bigger Is Better”
15:11 Knowing What to Run — and When
16:06 Intent, Memory, and Real-Time AI Decisions
17:32 Physical Intelligence vs Digital Intelligence
18:39 Memory Palaces, Spatial Brains, and Human AI
19:39 Do We Need New Chips for Humanoid Robots?
20:26 How Chip Architectures Will Evolve for Physical AI
21:47 Privacy, On-Device Processing, and Trust
22:48 Final Thoughts on the Future of World-Aware AI
Quantum computers usually mean massive machines, cryogenic temperatures, and isolated data centers. But what if quantum computing could run at room temperature, fit inside a server rack — or even a satellite?

In this episode of TechFirst, host John Koetsier sits down with Marcus Doherty, Chief Science Officer of Quantum Brilliance, to explore how diamond-based quantum computers work — and why they could unlock scalable, edge-deployed quantum systems.

Marcus explains how nitrogen-vacancy (NV) centers in diamond act like atomic-scale qubits, enabling long coherence times without extreme cooling. We dive into quantum sensing, quantum machine learning, and why diamond fabrication — including the world’s first commercial quantum diamond foundry — could be the key to manufacturing quantum hardware at scale.

You’ll also hear how diamond quantum systems are already being deployed in data centers, how they could operate in vehicles and satellites, and what the realistic roadmap looks like for logical qubits and real-world impact over the next decade.

Topics include:
• Why diamonds are uniquely suited for quantum computing
• How NV centers work at room temperature
• Quantum sensing vs. quantum computing
• Manufacturing challenges and timelines
• Quantum computing at the edge (satellites, vehicles, sensors)
• The future of hybrid classical-quantum systems

⸻

🎙 Guest
Marcus Doherty
Chief Science Officer, Quantum Brilliance
Professor of Quantum Physics
Army Reserve Officer
🌐 https://quantumbrilliance.com

⸻

👉 Subscribe for more deep dives into the future of technology:
https://techfirst.substack.com

⸻

00:00 Diamonds and the next wave of quantum computing
01:20 Why diamond qubits work at room temperature
03:20 NV centers explained: defects that behave like atoms
05:05 How diamonds replace massive quantum isolation systems
06:40 Building the world’s first quantum diamond foundry
08:30 Defect-free diamonds, isotopes, and qubit engineering
10:15 Quantum sensing vs. quantum computing with diamonds
12:40 From desktop quantum systems to millions of qubits
14:25 Roadmap: logical qubits, timelines, and scale
16:10 Quantum computers at the edge: vehicles and satellites
18:10 Quantum machine learning and real-world deployments
19:50 The long game: why diamond quantum computing scales
Will AI kill your job?

What happens to your job as AI gets smarter and companies keep laying people off even while profits rise? Will you still have a job? Will the job you have change beyond recognition?

Scary questions, no?

In this episode of TechFirst, host John Koetsier sits down with Nikki Barua, co-founder of Footwork and longtime founder, executive, and resiliency expert, to unpack what work really looks like in the age of AI.

Layoffs are no longer just about economic downturns. Companies are growing, innovating, and still cutting staff, often because AI is enabling more output with less capacity. So what does that mean for you?

Nikki argues the future doesn’t belong to those who simply “learn AI tools,” but to agentic humans: people who lead with uniquely human strengths and use AI to amplify their impact.

This conversation explores:
• Why today’s layoffs are different from past cycles
• How AI is compressing jobs before creating new ones
• What it means to move from doing work to directing outcomes
• Why identity, curiosity, and agency matter more than certifications
• How to rethink workflows instead of chasing shiny AI tools
• The FLIP framework: Focus, Leverage, Influence, and Power

This episode isn’t about fear. It’s about reinvention. If you’re wondering how to stay relevant, valuable, and resilient as AI reshapes work, this is the place to start.

Guest
Nikki Barua
Co-founder, Footwork
(Reinventing organizations with agentic AI)

👉 Subscribe for more conversations on AI, work, and the future of technology:
https://techfirst.substack.com

Chapters:
00:00 — Work in the AI Age: what happens to your job?
01:05 — Layoffs, AI, and why this cycle feels different
02:55 — “Don’t let AI have the last laugh”
04:45 — Profitable companies cutting jobs: what’s really happening
06:40 — The next 18–24 months: compression before reinvention
08:30 — AI’s impact on young workers and early careers
10:00 — What should you be doing right now?
11:20 — Why surface-level AI use won’t save your job
12:40 — The rise of the “agentic human”
14:20 — From doing to directing: humans + machines as partners
15:55 — Why certifications and training aren’t enough
17:10 — High-agency people win in the AI age
18:35 — The FLIP framework: Focus and identity
20:00 — Leverage: compounding capacity beyond automation
21:20 — Influence: trust, authenticity, and scaled impact
22:25 — Power: upgrading your personal operating system
23:40 — Two shifts that make this AI revolution different
25:05 — Tools vs workflows: where most people get it wrong
26:25 — The real blocker: old identities and fear of change
27:40 — Three steps to stay relevant in the AI age
28:40 — Final thoughts + wrap-up
What if someone actually built TARS from Interstellar—and discovered it really could work?

In this episode of TechFirst, host John Koetsier sits down with Aditya Sripada, a robotics engineer at Nimble, who turned a late-night hobby into a serious research project: a real, working mini-version of TARS, the iconic robot from Interstellar.

Aditya walks through why TARS’s strange, flat form factor isn’t just cinematic flair—and how it enables both walking and rolling, one of the most energy-efficient ways for robots to move. We dive into leg-length modulation, passive dynamics, rimless wheel theory, and why science fiction quietly shapes real robotics more than most engineers admit.

Along the way, Aditya explains what he learned by challenging his own assumptions, how the project connects to modern humanoid and warehouse robots, and why reliability—not flash—is the hardest problem in robotics today. He also previews his next ambitious project: building a real-world version of Baymax, exploring soft robotics and safer human-robot interaction.

This is a deep, accessible conversation at the intersection of science fiction, physics, and real-world robotics—and a reminder that sometimes the ideas we dismiss as “impossible” just haven’t been built yet.

⸻

Guest
Aditya Sripada
Robotics Engineer, Nimble
Researcher in legged locomotion, humanoids, and unconventional robot form factors

⸻

If you enjoyed this episode, subscribe for more deep dives into technology, robotics, and innovation:
👉 https://techfirst.substack.com

⸻

Chapters:
00:00 – TARS in Real Life: Why Interstellar’s Robot Still Fascinates Us
01:00 – Why Building TARS Seemed Physically Impossible
02:00 – From Weekend Hobby to Serious Robotics Research
03:00 – How Science Fiction Quietly Shapes Real Robot Design
04:00 – Walking vs Rolling: Why TARS Uses Both
05:00 – Why Simple Robots Can Beat Complex Humanoids
06:00 – Turning Legs into a Wheel: The Rolling Mechanism Explained
07:00 – Leg-Length Modulation and Passive Dynamics
08:00 – Inside the Actuators: Degrees of Freedom and Compact Design
09:00 – Why TARS’s Arms Don’t Really Make Sense
10:30 – Lessons Learned: Never Dismiss “Impossible” Ideas
12:00 – Rimless Wheels, Gaits, and Robotics Theory
13:00 – What This Project Taught Him at Nimble
14:00 – What “Super-Humanoid” Robots Actually Mean
15:30 – Why Reliability Matters More Than Flashy Demos
16:30 – TARS as a Research Platform, Not a Product
17:30 – From TARS to Baymax: Exploring Soft Robotics
19:00 – Can We Build Safer, Friendlier Humanoid Robots?
20:30 – What’s Next: Recreating Baymax in Real Life
21:30 – Final Thoughts and Wrap-Up
AI is already reshaping the workforce. What about teenagers?

Turns out, they might be more impacted than anyone else. After all, they’re usually in low-skill, entry-level jobs that AI can replace. The problem: as those jobs disappear, teens lose their first experience with working, making money, and establishing an identity outside their homes.

In this episode of TechFirst, host John Koetsier speaks with Karissa Tang, a high school senior and UCLA research assistant, about her new study on how AI will impact teen employment. While most workforce studies focus on adults, Karissa analyzed the top 10 most popular teen jobs, from cashiers to fast food workers, and found something alarming: AI could reduce teen employment by nearly 30% by 2030.

We dig into:
• Which teen jobs are most vulnerable to AI and automation
• Why cashiers and fast-food counter workers are hardest hit
• The role of self-checkout, kiosks, and robots like Flippy
• Which teen jobs appear safest (for now)
• Why teens may be even more exposed to AI than adults
• What schools, policymakers, and teens themselves can do next

This is a must-watch conversation for parents, students, educators, and policymakers trying to understand how AI is reshaping early work experiences—and what it means for the next generation.

🎙 Guest
Karissa Tang
• Founder, Booted (board games company)
• Research Assistant, UCLA
• Former Intern, NSV Wolf Capital
• High school senior and author of a 20-page research paper on AI & teen employment

📌 Subscribe & Stay Ahead
If you want clear, thoughtful analysis on AI, technology, and the future of work, subscribe to TechFirst:
👉 https://techfirst.substack.com

00:00 – Will AI Kill Teen Jobs?
01:35 – Why a Teen Studied Teen Employment
03:10 – The Shocking 30% Job Loss Prediction
05:10 – Top 10 Teen Jobs Most at Risk
07:20 – Cashiers, Kiosks, and Self-Checkout
09:40 – Fast Food, Retail, and AI Displacement
12:15 – Which Teen Jobs Are Safest from AI
15:05 – Robots Like Flippy and the Future of Cooking Jobs
18:00 – Why Teen Jobs Are More Vulnerable Than Adult Jobs
21:40 – The Importance of Human Interaction at Work
25:10 – What Inspired the Research Study
29:30 – How the Data and Methodology Worked
33:40 – What Teens Can Do to Stay Employable
37:30 – Skills, AI Literacy, and Creating New Opportunities
41:00 – Final Thoughts on the Future of Teen Work




