TechFirst with John Koetsier

Author: John Koetsier

Description

Deep tech conversations with key innovators in AI, robotics, and smart matter ...
349 Episodes
Is AI the secret sauce that lets the West deglobalize supply chains and bring factories back home?

In this episode of TechFirst, I talk with Federico Martelli, CEO and cofounder of Forgis, a Swiss startup building an industrial intelligence layer for factories. Forgis runs “digital engineers” — AI agents on the edge — that sit on top of legacy machinery, cut downtime by about 30%, and boost production by roughly 20%, without ripping and replacing old hardware.

We dive into how AI agents can turn brainless factory lines into adaptive, self-optimizing systems, and what that means for reshoring production to Europe and North America.

In this episode, we cover:
• Why intelligence is the next geopolitical frontier
• How AI agents can reshore manufacturing without making it more expensive
• Turning old, offline machines into data-driven, optimized systems
• The two-layer model: integration first, vertical intelligence second
• Why most manufacturing AI projects fail at integration, not algorithms
• How Forgis raised $4.5M in 36 hours and chose its lead investor
• Lean manufacturing 2.0: adding real-time data and AI to Toyota-style processes
• Why operators stay in the loop (and why full autonomy is a bad idea… for now)
• Rebuilding industrial ecosystems in Europe and North America, industry by industry
• What Forgis builds next with its pre-seed round and where industrial AI is headed

Guest:
👉 Federico Martelli, CEO & cofounder, Forgis (industrial intelligence for factories)
🔗 More on Forgis: https://forgis.com/

Host:
🎙 John Koetsier, TechFirst podcast
🔎 techfirst.substack.com

If you enjoy this conversation, hit subscribe, drop a comment about where you think factories of the future will live, and share this with someone thinking about reshoring or industrial AI.

00:00 – Intro: AI, deglobalization, and the battle for industrial power
01:20 – Why intelligence is the next geopolitical frontier
02:13 – Applying AI agents to legacy machinery (not just new robots)
03:10 – Integration first, intelligence second: the “digital engineers” layer
03:58 – Early results: +20% production, –30% downtime
05:39 – The Palantir-style model: deep factory work, then recurring licenses
06:28 – Raising $4.5M in 36 hours and choosing Redalpine
08:17 – Lean manufacturing, Toyota, and giving operators superpowers (not replacing them)
10:18 – Big picture: reshoring production to Europe, the US, and Canada
12:48 – Competing with China’s dense manufacturing ecosystems
15:29 – What Forgis’ digital engineers actually do on the shop floor
17:06 – How Forgis will use the pre-seed round: sales, product, then tech
18:32 – Flipping the traditional stack: sales → product → tech
19:22 – Wrap-up and what’s next for industrial intelligence
AI agents can already write code, build websites, and manage workflows ... but they still can’t pay for anything on their own. That bottleneck is about to disappear.

In this episode of TechFirst with John Koetsier, we sit down with Jim Nguyen, former PayPal exec and cofounder/CEO of InFlow, a new AI-native payments platform launching from stealth. InFlow wants to give AI agents the ability to onboard, pay, and get paid inside the flow of work, without redirects, forms, or a human typing in credit card numbers.

We talk about:
• Why payments — not intelligence — are the missing link for AI agents
• How agents become a new kind of customer
• What guardrails and policies keep agents from spending all your money (sketched in code after the chapter list)
• Why enterprises will need HR for agents, budgets for agents, and compliance systems for agents
• The future of agent marketplaces, headless ecommerce, and machine-speed commerce
• How InFlow plans to become the PayPal of agentic systems

If AI agents eventually hire, fire, transact, and manage entire workflows, someone has to give them wallets. This episode explores who does it, how it works, and what it means for the economy.

👀 Full episode transcript + articles at: https://johnkoetsier.com
🔎 Deeper insight in my Substack at techfirst.substack.com
🎧 Subscribe to the podcast on any audio platform

00:00 — AI agents can’t pay yet
01:00 — Why agents need financial capabilities
02:45 — Developers as the first use case
04:15 — Agents that build AND provision software
06:00 — Agents as real customers with budgets
07:30 — Payments infrastructure is the missing layer
09:00 — Machine-speed commerce and GPU allocation
10:15 — From RubyCoins to PayPal to agentic payments
12:00 — Policy guardrails: the child debit card analogy
14:00 — Accountability: every agent must be “sponsored”
15:00 — HR, finance, and compliance systems for agents
16:45 — Agent marketplaces and future gig platforms
18:15 — Headless commerce: ghost kitchens for AI agents
20:00 — Agents are the new apps
21:15 — Amazon pushback and optimizing for revenue
22:45 — Why agent-optimized platforms will emerge
23:30 — Voice commerce, invisible ordering, and wallets
24:15 — Final thoughts: building the rails for agent commerce
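A rough sketch of that guardrail idea in code, purely as illustration: the class and method names below are hypothetical, not InFlow's actual API, which isn't described in these notes.

from dataclasses import dataclass, field

@dataclass
class AgentWallet:
    sponsor: str                 # every agent is "sponsored" by a human or org
    daily_budget: float          # hard cap, like a child's debit card
    allowed_merchants: set = field(default_factory=set)
    spent_today: float = 0.0

    def authorize(self, merchant: str, amount: float) -> bool:
        # Policy checks run before any money moves, at machine speed.
        if merchant not in self.allowed_merchants:
            return False
        if self.spent_today + amount > self.daily_budget:
            return False
        self.spent_today += amount
        return True

wallet = AgentWallet(sponsor="jane@example.com", daily_budget=100.0,
                     allowed_merchants={"gpu-cloud.example"})
print(wallet.authorize("gpu-cloud.example", 40.0))  # True
print(wallet.authorize("gpu-cloud.example", 80.0))  # False: would exceed the daily budget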
Are we ready for a world where everything is smart? Not just phones and apps, but buildings, robots, and delivery bots rolling down our streets?

Windows ... doors ... maybe even towels. And don't forget your shoes.

In this episode of TechFirst, I talk with Mat Gilbert, director of AI and data at Synapse, about physical AI: putting intelligence into machines, devices, and environments so they can sense, reason, act, and learn in the real world.

We cover why physical AI is suddenly economically viable, how factories and logistics centers are already using millions of robots, the commercial race to build useful humanoids, why your home is the last frontier, and how to keep physical AI safe when mistakes have real-world consequences.

In this episode:
• Why hardware costs (lidar, batteries) are making “AI with a body” possible
• How Amazon, FedEx, Ford, and others are already deploying physical AI at scale
• The humanoid robot race: Boston Dynamics, Figure AI, Tesla, and more
• Why home robots are so hard, and the “coffee test” for general humanoid intelligence
• Physical AI in agtech, healthcare, and elder care
• Safety, simulation, and why physical AI can’t rely only on probabilistic LLMs
• Human–robot teaming and how to build trust in messy, real-world environments
• What we can expect by 2026 and beyond in service robots and smart spaces

00:00 – Giving AI a body: why physical AI is becoming viable
01:00 – Where we are today: factories, logistics, and Amazon’s million robots
03:30 – The software layer: coordinating robots, routing, and warehouse intelligence
06:00 – Cloud vs edge AI: latency, cost, and why intelligence is moving to the edge
10:00 – Humanoid robots: bets from Boston Dynamics, Figure AI, and Tesla
14:00 – Home robots as the last frontier and the “coffee test” for generality
17:00 – Beyond factories: agtech, carbon-killing farm bots, and healthcare use cases
18:30 – Elder care, hospital robots, and amplifying human caregivers
20:00 – Foundation models for robotics, simulation, and digital twins
21:00 – Why physical AI safety is different from digital AI safety
22:30 – Layers of safety, shutdown zones, and cyber-physical security risks
24:30 – Human–robot teaming, trust, and communicating intent
26:00 – What’s coming by 2026: service robots, delivery bots, and smart spaces
28:00 – Delivery robots, drones, and physical AI in everyday environments
29:00 – Closing thoughts on living in a world full of physical AI
Are humanoid robots going to decide which countries get rich and which fall behind?

Probably yes.

In this TechFirst, I talk with Dr. Robert Ambrose, former head of one of NASA’s first humanoid robot teams and now chairman of Robotics and Artificial Intelligence at Alliant. We dig into the future of humanoids, how fast they are really advancing, and what it means if China wins the humanoid race ahead of the United States and other Western nations.

We start with NASA’s early humanoid work, including telepresence robots on the space station that people could literally “step into” with VR in the 1990s. Then we zoom out to what counts as a robot, why bipedal mobility matters so much, how humanoids will move from factories into homes, and why the critical photo of the robot revolution might be taken in Beijing instead of Times Square.

Along the way, Ambrose shares how US policy once helped avoid losing robotics leadership to Japan, why the National Robotics Initiative mattered, what the drone war in Ukraine is doing to autonomy, and how small and medium businesses can survive and thrive in a humanoid and AI agent world.

In this episode:
• NASA’s first generations of humanoid robots and “stepping into” a robot body
• Why humanoids make sense in a world built for human hands, height, and motion
• The design tension between purpose built machines and general purpose humanoids
• How biped mobility went from blooper reels to marathon running in a decade
• Why a humanoid should not cost more than a car, and what happens when it does not
• Humanoids as the next car or PC, and when families will buy their own “Rosie”
• China, the US, and where the defining photo of the robot century gets taken
• How government investment, DARPA challenges, and wars shape robotics
• Alliant’s work with physical robots, soft bots, and AI agents for real businesses
• Why robots are not future overlords and why “they will take all our jobs” is lazy thinking

If you are interested in humanoid robots, AI agents, manufacturing, or the future of work and geopolitics, this one is for you.

Subscribe for more deep dives on AI, robots, and the tech shaping our future!

00:00 Intro, will China eat America’s lunch in humanoid robotics
01:18 NASA’s early humanoids, generations of robots and VR telepresence
03:00 “Stepping into the robot” moment and designing for astronaut tools
05:10 Human built environments, half humanoids, and weird lower body experiments
07:00 Safety, cobots, and working around people at NASA and General Motors
12:15 What is a robot, really, and why Ambrose has a very big tent definition
16:00 Single purpose machines vs general purpose robots, Roombas, elevators, and vending machines
18:30 The next “lurch” in robotics, from industrial arms to Mars rovers to drones
22:40 Biped mobility, from blooper reel to marathon runner, and why legs matter
24:10 Cars, Roombas, and why most robots will never get in and out of a car
25:20 Parking between cars, robot garages, and rethinking buildings for mobile vehicles
28:00 Geopolitics 101, China’s manufacturing backbone and humanoids as almost free labor
31:05 Cars and PCs as precedents, when price and reliability unlock mass adoption
34:00 When families buy their own “Rosie” and what value a home humanoid must deliver
37:00 Times Square vs Beijing, who gets the iconic photo of the robot transition
43:00 How the US almost lost robotics to Japan and what the National Robotics Initiative did
48:00 DARPA, Mars rovers, the drone war in Ukraine, and why government investment matters
52:00 Alliant, soft bots, AI agents, and helping small and medium businesses adapt
54:00 Who is building humanoids in the US, China, and beyond right now
56:00 What governments should do next and why robots are not our overlords
Is AI empathy a life-or-death issue? Almost a million people ask ChatGPT for mental health advice DAILY ... so yes, it kind of is.

Rosebud co-founder Sean Dadashi joins TechFirst to reveal new research on whether today’s largest AI models can recognize signs of self-harm ... and which ones fail. We dig into the Adam Raine case, talk about how Dadashi evaluated 22 leading LLMs, and explore the future of mental-health-aware AI.

We also talk about why Dadashi was interested in this in the first place, and his own journey with mental health.

00:00 — Intro: Is AI empathy a life-or-death matter?
00:41 — Meet Sean Dadashi, co-founder of Rosebud
01:03 — Why study AI empathy and crisis detection?
01:32 — The Adam Raine case and what it revealed
02:01 — Why crisis-prevention benchmarks for AI don’t exist
02:48 — How Rosebud designed the study across 22 LLMs
03:17 — No public self-harm response benchmarks: why that’s a problem
03:46 — Building test scenarios based on past research and real cases
04:33 — Examples of prompts used in the study
04:54 — Direct vs indirect self-harm cues and why AIs miss them
05:26 — The bridge example: AI’s failure to detect subtext
06:14 — Did any models perform well?
06:33 — All 22 models failed at least once
06:47 — Lower-performing models: GPT-4o, Grok
07:02 — Higher-performing models: GPT-5, Gemini
07:31 — Breaking news: Gemini 3 preview gets the first perfect score
08:12 — Did the benchmark influence model training?
08:30 — The need for more complex, multi-turn testing
08:47 — Partnering with foundation model companies on safety
09:21 — Why this is such a hard problem to solve
10:34 — The scale: over a million people talk to ChatGPT weekly about self-harm
11:10 — What AI should do: detect subtext, encourage help, avoid sycophancy
11:42 — Sycophancy in LLMs and why it’s dangerous
12:17 — The potential good: AI can help people who can’t access therapy
13:06 — Could Rosebud spin this work into a full-time safety project?
13:48 — Why the benchmark will be open-source
14:27 — The need for a third-party “Better Business Bureau” for LLM safety
14:53 — Sean’s personal story of suicidal ideation at 16
15:55 — How tech can harm — and help — young, vulnerable people
16:32 — The importance of giving people time, space, and hope
17:39 — Final reflections: listening to the voice of hope
18:14 — Closing
We’ve digitized sound. We’ve digitized light. But touch, maybe the most human of our senses, has stayed stubbornly analog.

That might be about to change, thanks to programmable matter. Or programmable fabric.

In this TechFirst episode, I speak with Adam Hopkins, CEO of Sensetics, a new UC Berkeley/Virginia Tech spinout building programmable fabrics that replicate the mechanoreceptors in human fingertips. Their technology can sense touch at tens of microns, respond at hardware-level speeds, and even play back touch remotely.

This could unlock enormous change for:
• Robotics: giving machines the ability to grasp fragile objects safely
• Medical training and surgery: remote palpation and high-fidelity haptics
• Industrial automation: safer and more precise manipulation
• VR and simulations: finally adding the missing digital sense
• E-commerce: touching clothes before you buy them
• Remote operations: from hazardous environments to deep-sea machinery

We talk about how the technology works, the metamaterials behind it, why touch matters for AI and physical robots, the path to commercialization, the competitive landscape, and what comes next.

00:00 – Can we digitize touch?
00:45 – Introducing Sensetics
01:10 – How programmable touch fabrics work
02:15 – Micron-level sensing and metamaterials
04:00 – The “programmable matter” moment
06:05 – Why touch matters more than we think
07:30 – Emulating human mechanoreceptors
09:30 – What digital touch unlocks for robotics
10:40 – Medical simulations and remote operations
12:45 – Why touch is faster than vision
14:20 – Humanoids, walking, stability, and tactile feedback
15:30 – Engineering challenges and what’s left to solve
17:00 – Timeline to first products
18:20 – Manufacturing and scaling
19:30 – First planned markets
21:00 – Durability and robotic hands
22:20 – Consumer applications: e-commerce and textiles
24:00 – Will we one day have touch peripherals?
25:15 – Competition in tactile sensing and haptics
27:00 – Why today is the right moment for digital touch
28:00 – Final thoughts
AI is devouring the planet’s electricity ... already using up to 2% of global energy and projected to hit 5% by 2030. But a Spanish-Canadian company, Multiverse Computing, says it can slash that energy footprint by up to 95% without sacrificing performance.

They specialize in tiny AI: one model has the processing power of just 2 fruit fly brains. Another tiny model lives on a Raspberry Pi.

The opportunities for edge AI are huge. But the opportunities in the cloud are also massive.

In this episode of TechFirst, host John Koetsier talks with Samuel Mugel, Multiverse’s CEO, about how quantum-inspired algorithms can drastically compress large language models while keeping them smart, useful, and fast. Mugel explains how their approach -- intelligently pruning and reorganizing model weights -- lets them fit functioning AIs into hardware as tiny as a Raspberry Pi or the equivalent of a fly’s brain.

They explore how small language models could power Edge AI, smart appliances, and robots that work offline and in real time, while also making AI more sustainable, accessible, and affordable. Mugel also discusses how ideas from quantum tensor networks help identify only the most relevant parts of a model, and how the company uses an “intelligently destructive” approach that saves massive compute and power. (A toy code sketch of weight pruning follows the chapter list.)

00:00 – AI’s energy crisis
01:00 – A model in a fly’s brain
02:00 – Why tiny AIs work
03:00 – Edge AI everywhere
05:00 – Agent compute overload
06:00 – 200× too much compute
07:00 – The GPU crunch
08:00 – Smart matter vision
09:00 – AI on a Raspberry Pi
10:00 – How compression works
11:00 – Intelligent destruction
13:00 – General vs. narrow AIs
15:00 – Quantum inspiration
17:00 – Quantum + AI future
18:00 – AI’s carbon footprint
19:00 – Cost of using AI
20:00 – Cloud to edge shift
21:00 – Robots need fast AI
22:00 – Wrapping up
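To make "intelligently destructive" concrete, here is a toy magnitude-pruning sketch. It illustrates only the generic idea of discarding low-importance weights; Multiverse's quantum-tensor-network method is considerably more sophisticated and is not public in these notes.

import numpy as np

# Toy illustration: zero out the 95% of weights with the smallest magnitude.
# Generic pruning, NOT Multiverse's actual quantum-inspired algorithm.
rng = np.random.default_rng(0)
weights = rng.normal(size=(512, 512))   # stand-in for one layer of a model

sparsity = 0.95
threshold = np.quantile(np.abs(weights), sparsity)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

print(f"nonzero weights kept: {np.count_nonzero(pruned) / weights.size:.1%}")  # ~5.0%

Stored in a sparse format, a layer like this shrinks roughly in proportion to the weights kept, which hints at where the large memory and energy savings come from.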
Can AI give every creator their own virtual team? Maybe, thanks to a new platform from RHEI called Made, which offers Milo, an AI agent who becomes your creative director, Zara, an AI agent who is your community manager, and Amie, a third AI agent who takes on the role of relationship manager.

And, apparently, more agents are coming soon.

The creator economy is bigger than ever, but so is burnout. Tens of millions of creators are trying to do everything themselves: strategy, scripting, editing, community, distribution, data, thumbnails, research … the list never ends.

What if creators didn’t have to do all of that?

In this episode of TechFirst, I talk with Shahrzad Rafati, founder & CEO of RHEI, about Made, an agentic AI "dream team" designed to elevate human creativity, not replace it.

We dig into:
• Why so many creators burn out
• How agentic AI workflows differ from ChatGPT-style prompting
• What it means to be a “creator CEO”
• How AI can manage community, analyze trends, and shape content strategies
• The coming shift toward human taste, vision, and originality in a world of infinite AI content

00:00 – Intro: Can AI give every creator a virtual team?
01:03 – Why the creator economy is burning out
02:25 – The “creator CEO” problem: too many hats, not enough time
04:36 – Introducing Made and its AI agents
05:34 – Milo: AI creative director (ideas, research, thumbnails, metadata)
06:18 – Zara: AI community manager and fan engagement
07:53 – Why this is different from just using ChatGPT
09:46 – Alignment, personalization, and agentic workflows
12:21 – Multi-platform support: YouTube, TikTok, Instagram and more
13:34 – How onboarding works and how the system learns your style
16:33 – What this means for creators — and for the future of work
18:52 – Does *she* use her own virtual AI team? (Yes.)
20:15 – Made for teams and enterprise clients
21:17 – Closing thoughts: AI, creativity, and the human signal
What happens when Amazon, NVIDIA, and MassRobotics team up to merge generative AI with robotics?

In this episode of TechFirst we chat with Amazon's Taimur Rashid, Head of Generative AI and Innovation Delivery. We talk about "physical AI" ... AI with spatial awareness and the ability to act safely and intelligently in the real world.

We also chat about the first cohort of a new accelerator for robotics startups. It's sponsored by Amazon and NVIDIA, run by MassRobotics, and includes startups doing autonomous ships, autonomous construction robots, smart farms, hospital robots, manufacturing and assembly robots, exoskeletons, and more.

We talk about:
- Why “physical AI” is the missing piece for robots to become truly useful and scalable
- How startups in Amazon’s and NVIDIA’s new Physical AI Fellowship are pushing the limits of robotics from exoskeletons to farm bots
- What makes robotic hands so hard to build
- The generalist vs. specialist debate in humanoid robots
- How AI is already making Amazon warehouses 25% more efficient

This is a deep dive into the next phase of AI evolution: intelligence that can think, move, and act.

00:00 — Intro: Is physical AI the missing piece?
00:46 — What is “physical AI”?
02:30 — How LLMs fit into the physical world
03:25 — Why safety is the first principle of physical AI
04:20 — Why physical AI matters now
05:45 — Workforce shortages and trillion-dollar opportunities
07:00 — Falling costs of sensors and robotics hardware
07:45 — The biggest challenges: data, actuation, and precision
09:30 — The fine-grained problem: how robots pick up a berry vs. an orange
11:10 — Inside the first Physical AI cohort: 8 startups to watch
12:25 — Bedrock Robotics: autonomy for construction vehicles
12:55 — Diligent Robotics: socially intelligent humanoids in hospitals
14:00 — Generalist vs. specialist robots: why we’ll need both
15:30 — The future of physical AI in healthcare and manufacturing
16:10 — How Amazon is already using robots for 25% more efficiency
17:20 — The fellowship’s future: expanding beyond startups
18:10 — Wrap-up and key takeaways
Artificial general intelligence (AGI) could be humanity’s greatest invention ... or our biggest risk.

In this episode of TechFirst, I talk with Dr. Ben Goertzel, CEO and founder of SingularityNET, about the future of AGI, the possibility of superintelligence, and what happens when machines think beyond human programming.

We cover:
• Is AGI inevitable? How soon will it arrive?
• Will AGI kill us … or save us?
• Why decentralization and blockchain could make AGI safer
• How large language models (LLMs) fit into the path toward AGI
• The risks of an AGI arms race between the U.S. and China
• Why Ben Goertzel created MeTTa, a new AGI programming language

📌 Topics include AI safety, decentralized AI, blockchain for AI, LLMs, reasoning engines, superintelligence timelines, and the role of governments and corporations in shaping the future of AI.

⏱️ Chapters
00:00 – Intro: Will AGI kill us or save us?
01:02 – Ben Goertzel in Istanbul & the Beneficial AGI Conference
02:47 – Is AGI inevitable?
05:08 – Defining AGI: generalization beyond programming
07:15 – Emotions, agency, and artificial minds
08:47 – The AGI arms race: US vs. China vs. decentralization
13:09 – Risks of narrow or bounded AGI
15:27 – Decentralization and open-source as safeguards
18:21 – Can LLMs become AGI?
20:18 – Using LLMs as reasoning guides
21:55 – Hybrid models: LLMs plus reasoning engines
23:22 – Hallucination: humans vs. machines
25:26 – How LLMs accelerate AI research
26:55 – How close are we to AGI?
28:18 – Why Goertzel built a new AGI language (MeTTa)
29:43 – MeTTa: from AI coding to smart contracts
30:06 – Closing thoughts
What changes when robots deliver everything?

Starship Technologies has already completed 9 million autonomous deliveries, crossed roads over 200 million times, and operates thousands of sidewalk delivery robots across Europe and the U.S. Now they’re scaling into American cities ... and they say they’re ready to change your world.

In this episode of TechFirst, I speak with Ahti Heinla, co-founder and CEO of Starship and co-founder of Skype, about:
- How Starship’s robots navigate without GPS
- What makes sidewalk delivery better than drones
- Solving the last-mile problem in snow, darkness, and dense cities
- How Starship is already profitable and fully autonomous
- What it all means for the future of commerce and city life

Heinla says: “Ten years ago we had a prototype. Now we have a commercial product that is doing millions of deliveries.”

Watch to learn why the future of delivery might roll ... as well as fly.

🔗 Learn more: https://www.starship.xyz
🎧 Subscribe to TechFirst: https://www.youtube.com/@johnkoetsier

00:00 - Intro: What changes when robots deliver everything?
01:37 - Meet Starship: 9 million robot deliveries and counting
02:45 - Why it took 10 years to go from prototype to product
05:03 - When robot delivery becomes normal (and where it already is)
08:30 - How Starship robots handle cities, traffic, and construction
11:20 - Snow, darkness, and all-weather autonomy
13:19 - Reliability, unit economics, and competing with human couriers
16:23 - Inside the tech: sensors, AI, and why GPS isn’t enough
18:03 - Real-time mapping, climbing curbs, and reaching your door
19:54 - How Starship scales without local depots or chargers
22:04 - How city life and commerce change with robot delivery
25:53 - Do robots increase customer orders? (Short answer: yes)
27:05 - Hot food, Grubhub integration, and thermal insulation
28:26 - Will Starship use drones in the future?
29:38 - What U.S. cities are next for robot delivery?
Imagine a quantum computer with a million physical qubits in a space smaller than a sticky note.

That’s exactly what Quantum Art is building. In this TechFirst episode, I chat with CEO Tal David, who shares his team’s vision to deliver quantum systems with:
• 100x more parallel operations
• 100x more gates per second
• A footprint up to 50x smaller than competitors

We also dive into the four key tech breakthroughs behind this roadmap to scale Quantum Art's computer:
1. Multi-qubit gates capable of 1,000 2-qubit operations in a single step
2. Optical segmentation using laser-defined tweezers
3. Dynamic reconfiguration of ion cores at microsecond speed
4. Modular, ultra-dense 2D architectures scaling to 1M+ qubits

We also cover:
- How Quantum Art plans to reach fault tolerance by 2033
- Early commercial viability with 1,000 physical qubits by 2027
- Why not moving qubits might be the biggest innovation of all
- The quantum computing future of healthcare, logistics, aerospace, and energy

🎧 Chapters
00:00 – Intro: 1M qubits in 50mm²
01:45 – Vision: impact in business, humanity, and national tech
03:07 – Multi-qubit gates (1,000 ops in one step)
05:00 – Optical segmentation with tweezers
06:30 – Rapid reconfiguration: no shuttling, no delay
08:40 – Modular 2D architecture & ultra-density
10:30 – Physical vs logical qubits
13:00 – Quantum advantage by 2027
16:00 – Addressing the quantum computing skeptics
17:30 – Real-world use cases: aerospace, automotive, energy
19:00 – Why it’s called Quantum Art

👉 Subscribe for more deep tech interviews on quantum, robotics, AI, and the future of computing.
Are humanoid robots distracting us from the real unlock in robotics ... hands?

In this TechFirst episode, host John Koetsier digs into the hardest (and most valuable) problem in robotics: dexterous manipulation. Guest Mike Obolonsky, Partner at Cortical Ventures, argues that about $50 trillion of global economic activity flows through “hands work,” yet manipulation startups have raised only a fraction of what locomotion and autonomy companies have. We break down why hands are so hard (actuators, tactile sensing, proprioception, control, data) and what gets unlocked when we finally crack them.

What we'll talk through ...
• Why “navigation ≠ manipulation” and why most real-world jobs need hands
• The funding mismatch: billions to autonomy & humanoids vs. comparatively little to hands
• The tech stack for dexterity: actuators, tactile sensors (pressure, vibration, shear), feedback, and AI
• Grasping vs. manipulation: picking, placing, using tools (e.g., dishwashers to scalpels)
• Reliability in the wild: interventions/hour, wet/greasy plates, occlusions, bimanual dexterity
• Practical paths: task-specific grippers, modular end-effectors, and “good enough” today vs. general purpose tomorrow
• The moonshot: what 70–90% human-level hands could do for productivity on Earth ... and off-planet

Chapters
00:00 Intro—are we underinvesting in robotic hands?
01:10 Why hands matter more than legs (economics of manipulation)
04:30 Funding realities: autonomy & humanoids vs. hands
08:40 Locomotion progress vs. manipulation bottlenecks
12:10 Teleop now, autonomy later—how data gets gathered
14:20 What’s missing: actuators, tactile sensing, proprioception
17:10 Perception limits in the real world (wet dishes, occlusions)
22:00 General-purpose dexterity vs. task-specific ROI
26:00 Startup landscape & reliability (interventions/hour)
29:00 Modular end-effectors and upgrade paths
30:10 The moonshot: productivity explosion when hands are solved

Who should watch
Robotics founders, VCs, AI researchers, operators in warehousing & manufacturing, and anyone tracking humanoids beyond the hype.

If you enjoyed this
Subscribe for more deep-tech conversations, drop a comment with your take on the “hands vs. legs” debate, and share with someone building robots.

Keywords
robotic hands, dexterous manipulation, humanoid robots, tactile sensing, actuators, proprioception, warehouse automation, AI robotics, Cortical Ventures, TechFirst, John Koetsier, Mike Obolonsky

#Robotics #AI #Humanoids #RobotHands #Manipulation #Automation #TechFirst
Are humanoid robots the future… or a $100B mistake?

Over 100 companies—from Meta to Amazon—are betting big on humanoids. But are we chasing a sci-fi dream that’s not practical or profitable?

In this TechFirst episode, I chat with Bren Pierce, robotics OG and CEO of Kinisi Robots. We cover:
- Why legs might be overhyped
- How LLMs are transforming robots into agents
- The real cost (and complexity) of robotic hands
- Why warehouse robots work best with wheels
- The geopolitical robot arms race between China, the US, and Europe
- Hot takes, historical context, and a glimpse into the next 10 years of AI + robotics

Timestamps:
0:00 – Are humanoids a dumb idea?
1:30 – Why legs might not matter (yet)
5:00 – LLMs as the real unlock
12:00 – The hand is 50% of the challenge
17:00 – Speed limits = compute limits
23:00 – Robot geopolitics & supply chains
30:00 – What the next 5 years look like

Subscribe for more on AI, robotics, and tech megatrends.
The future could be much healthier for both farmers and everyone who eats, thanks to farm robots that kill weeds with lasers.

In this episode of TechFirst, we chat with Paul Mikesell, CEO of Carbon Robotics, to discuss groundbreaking advancements in agricultural technology. Paul shares updates since our last conversation in 2021, including the launch of LaserWeeder G2 and Carbon's autonomous tractor technology: AutoTractor.

LaserWeeder G2 quick facts:
- Modular design: Swappable laser “modules” that adapt to different row sizes (80-inch, 40-inch, etc.)
- Laser hardware: Each module has 2 lasers; a standard 20-foot machine = 12 modules = 24 lasers
- Laser precision: Targets the plant’s meristem (≈3mm on small weeds) with pinpoint accuracy
- Weed kill speed: 20–150 milliseconds per weed (including detection + laser fire)
- Throughput: 8,000–10,000 weeds per minute (Gen 2, up from ~5,000/min on Gen 1)
- Coverage rate: 3–4 acres per hour on the 20-foot G2 model
- ROI timeline: Farmers typically achieve payback in under 3 years
- Yield impact: Up to 50% higher yields in some conventional crops due to eliminating herbicide damage
- Price: Standard 20-foot LaserWeeder G2 = $1.4M, larger models scale from there
- Global usage: Units in the U.S. (Midwest corn & soy, Idaho & Arizona veggies) and Europe (Spain, Italy tunnel farming)

We chat about how these innovations are transforming weed control and farm management with AI, computer vision, and autonomous systems, the precision and efficiency of laser weeding, practical challenges addressed by autonomous tractors, and the significant ROI and yield improvements for farmers. This is a must-watch for anyone interested in the future of farming and sustainable agriculture. (A quick check of the throughput numbers follows the chapter list.)

00:00 Introduction to TechFirst and Carbon Robotics
01:10 The Science Behind Laser Weeding
05:46 Introducing Laser Weeder 2.0
06:39 Modular System and New Laser Technology
09:26 Manufacturing and Cost Efficiency
11:47 ROI and Benefits for Farmers
13:24 Laser Weeder Specifications
14:08 Performance and Efficiency
14:49 Introduction to AutoTractor
17:23 Challenges in Autonomous Farming
18:23 Remote Intervention and Starlink Integration
23:23 Future of Farming Technology
24:50 Health and Environmental Benefits
25:18 Conclusion and Farewell
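The quick facts above hang together arithmetically. A back-of-the-envelope check (my arithmetic, assuming all lasers fire continuously in parallel, not Carbon's published math):

lasers = 24                # standard 20-foot G2: 12 modules x 2 lasers each
ms_per_weed = 150          # slow end of the quoted 20-150 ms per weed
weeds_per_minute = lasers * (60_000 / ms_per_weed)
print(weeds_per_minute)    # 9600.0, consistent with the quoted 8,000-10,000/min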
Can robots reduce herbicide and fertilizer use on farms by up to 90%?

Probably yes.

In this episode of TechFirst we chat with Verdant Robotics' CEO Gabe Sibley about SharpShooter, the company's state-of-the-art farm tech that precisely targets herbicide and fertilizer application, massively reducing chemical use.

That's huge for the environment.

It's also huge for farmers' pocketbooks ... because herbicide and fertilizer are increasingly expensive.

We dive into:
- How SharpShooter targets plants with pinpoint accuracy — 240 shots per second
- Why this approach can save farmers millions in input costs
- The environmental benefits for soil, water, and food
- How AI and edge computing make split-second farm decisions possible
- The future of robotics in agriculture

If you’re interested in agtech, AI, or sustainable farming, this one’s for you.

00:00 Introduction to Robotic Farming
00:28 Interview with Gabe Sibley, CEO of Verdant Robotics
00:50 How SharpShooter Technology Works
02:40 Economic and Environmental Benefits
04:59 Technical Specifications and Capabilities
11:11 Future of Agricultural Automation
11:54 Personal Insights and Motivation
16:39 Conclusion and Final Thoughts
Will your next browser be AI-enabled? AI-first? Perhaps even an AI agent?

In this episode of TechFirst, John Koetsier sits down with Henrik Lexow, Senior Product Leader at Opera, to explore Opera Neon, a big step toward agentic browsers that think, act, and create alongside you. (And buy stuff you want, simplify hard problems, and do some of your work for you.)

Opera’s new browser integrates real AI agents capable of executing multi-step tasks, interacting with web apps, summarizing content, and even building playable games or interactive tools, all inside your browser.

We chat about
• What an agentic browser is and why it matters
• How AI agents like Neon Do and Neon Make automate complex workflows
• Opera’s vision for personal, on-device, privacy-aligned AI
• Live demos of shopping, summarizing, and game creation using AI
• Why your browser might replace your operating system

🎮 Watch Henrik demo the Neon agent building a Snake game from scratch
🛍️ See AI navigate Amazon, add items to cart, and act independently
🧠 Learn why context is king and how this changes everything about search, tabs, and multitasking

00:00 Introduction: Should Your Browser Be an AI Agent?
00:52 The Evolution of AI in Browsers
04:53 Introducing Opera's Agentic Browser
11:51 Neon: The Future of Browsing
20:26 Exploring the Cart Functionality
20:53 Future of AI in Shopping
22:39 Trust and Privacy in AI
25:05 Neon Make: Generative AI Capabilities
26:05 Creating a Snake Game with Neon
28:33 Analyzing Car Insurance Policies
31:58 Sharing and Publishing with Neon
35:53 Conclusion and Future Prospects
Can nuclear waste solve the energy crisis caused by AI data centers? Maybe. And maybe much more, including providing rare elements and isotopes we need like rhodium, palladium, ruthenium, krypton-85, americium-241, and more.

Amazingly:
- 96% of nuclear fuel’s energy is left after it's "used"
- Recycling can reduce 10,000-year waste storage needs to just 300 years
- Curio’s new process avoids toxic nitric acid and extracts valuable isotopes
- 1 recycling plant could meet a third of America’s nuclear fuel needs
- Nuclear recycling could enable AI, space travel, and medical breakthroughs

In this episode of TechFirst, host John Koetsier talks with Ed McGinnis, CEO of Curio and former Acting Assistant Secretary for Nuclear Energy at the U.S. Department of Energy. McGinnis is on a mission to revolutionize how we think about nuclear waste, turning it into a powerful resource for energy, rare isotopes, and even precious metals like rhodium.

Watch now and subscribe for more deep tech insights.
Neura Robotics officially launched the 4NE-1 humanoid this week. It's the leading European humanoid robot and, as far as I'm aware, the most powerful humanoid robot in existence right now, able to lift 100kg, or 220 pounds.

Neura also released a plan to build 5 million robots by 2030, a new home service robot named MiPA, a new 'Omnisensor' technology platform for integrating input from multiple types of sensors, and an app store for robot skills that anyone can contribute to ... and profit from.

In this TechFirst, we chat with David Reger, CEO of Neura Robotics, the leading European humanoid robotics company.

We touch on advanced sensors, AI integration, and Neura Robotics' platform that enables extensive customization and scalability. We also chat about significant partnerships with companies like NVIDIA, SAP, and Deutsche Telekom.

00:00 Introduction to Humanoid Robotics
00:22 Interview with Neura Robotics CEO
00:39 Launch of '4NE-1' Humanoid Robot
02:26 Technical Specifications and Capabilities
04:39 Advanced Sensor Technology
09:24 Artificial Skin and Touch Sensory
14:05 AI Integration in Robotics
15:53 Challenges in Embodied AI
17:11 Robot Gyms and Training
19:10 Partnerships and Collaborations
20:56 The App Store for Robot Skills
22:18 AI-Assisted Development Platform
29:15 Introducing MiPA: The Home Robot
31:41 Future Prospects and Closing Remarks
AI is big these days. Massive. More parameters, more memory, more capability. But what if the future is in tiny AI? Neural networks as small as 8 kilobytes, on tiny chips, embedded in everything?

Think smart shoes.

Smart doors.

Smart ... everything.

In this episode of TechFirst, host John Koetsier discusses the future of smart devices with Yubei Chen, co-founder of AIzip. The conversation explores how small-scale AI can revolutionize everyday objects like shoes, cameras, and baby monitors. They delve into how edge AI, which operates at the device level rather than in the cloud, can create efficient, reliable, and cost-effective smart solutions. Chen explains the potential and challenges of integrating AI into traditional devices, including the hardware and software requirements, and touches on the implications for product quality, safety, and cost. This insightful discussion provides a look into the near future of ubiquitous, intelligent technology in our daily lives. (A quick sketch of what fits in 8 kilobytes follows the chapter list.)

00:00 Introduction to Smart Matter
01:17 Examples of Smart Applications
03:40 Building Efficient AI Models
04:01 The Future of Edge AI
09:32 Hardware for Smart Devices
11:52 Potential Downsides and Challenges
18:14 Conclusion and Final Thoughts
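For a sense of scale, here's what an 8-kilobyte budget buys in parameters (my illustration, not AIzip's actual architecture):

budget_bytes = 8 * 1024  # the "8 kilobytes" figure from the episode

for dtype, bytes_per_param in [("float32", 4), ("int8", 1)]:
    print(f"{dtype}: ~{budget_bytes // bytes_per_param:,} parameters")
# float32: ~2,048 parameters; int8: ~8,192 parameters.
# Enough for a small sensor-signal classifier, nowhere near an LLM.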