AI Tinkerers - "One-Shot"
Author: Joe Heitzeberg
© AI Tinkerers, All Rights Reserved
Description
AI Tinkerers "One-Shot" takes you 1:1 with AI practitioners, software engineers, and tech entrepreneurs around the world -- the best of the AI Tinkerers global network. Each session includes live demos of real AI projects, detailed code walkthroughs, and unscripted discussions led by a technical host who explores practical applications and implementation challenges. As an AI builder, you'll gain actionable insights into emerging tools, techniques, and use cases, plus opportunities to connect with a global network of peers working on similar problems.
33 Episodes
Sam Hessenauer — technical founder and newly minted member of Meta's AI incubation team — joins the AI Tinkerers One-Shot podcast for a deep dive into the custom CI/CD pipeline he built to orchestrate parallel AI agents at scale. Sam became notorious for spending $50,000 on tokens in just two months while pushing the limits of what agentic development systems can do. In this episode, he shares his complete architecture.

Sam walks through his "Token Abundance Mindset" and explains why moving beyond single-turn prompting is the key to unlocking a fully automated, multi-agent development workflow. From structuring your codebase for AI readiness to running adversarial LLM judges that self-heal pull requests, this is one of the most technically advanced conversations we've had on the show.

Topics covered in this episode:
• The Darwinian CI/CD — running multiple parallel agents in competition to produce the best pull request
• LLM Judge — rubric-based quality control and adversarial agents for self-healing PRs
• AI Dev Readiness — DB seeding, snapshots, and codebase prep for reliable agent testing
• The Walk Flow — converting high-level conversations (via Meta Ray-Bans) into structured PRDs and Tech Specs
• DevPlan — generating detailed, unambiguous coding prompts that eliminate guesswork for agents

Timestamps:
00:00 Introduction: Sam Hessenauer & Claude Maxing
02:17 The Alpha of 10x Agentic Systems
03:30 The $50k Token Spend & Abundance Mindset
07:25 The Walk Flow: Ray-Bans to PRD/Tech Spec
11:17 AI Adherence to Implementation Patterns
15:21 Playwright & AI Codebase Readiness
17:35 The Darwinian CI/CD: Parallel Agents
21:47 LLM Judge & Rubric-Based Self-Healing
31:03 Bruno API Client: Code-Like Specs
32:47 Self-Healing PRs & Agentic Graph RAG
38:40 DevPlan: Turnkey Structured Prompting
58:43 The Future of SDLC & AI Agent Adoption

Resources:
Sam Hessenauer on LinkedIn: linkedin.com/in/samhessenauer
Nanome.ai (Sam's former company)
AI Tinkerers: aitinkerers.org
One-Shot Podcast: aitinkerers.org/podcast

AI Tinkerers One-Shot is a podcast for builders and innovators defining the future of AI — going under the hood with the people actually shipping it.
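The "Darwinian CI/CD" and rubric-based LLM judging described in this episode can be sketched in miniature. Everything below is hypothetical: the criteria, weights, and per-PR scores are stand-ins for what real LLM judge calls would produce; only the selection logic is illustrated, not Sam's actual pipeline.

```python
# Minimal sketch of rubric-based judging over competing agent PRs.
# Criteria, weights, and scores are hypothetical; in a real pipeline
# each per-criterion score would come from an LLM judge call.

RUBRIC = {                  # criterion -> weight (hypothetical)
    "tests_pass": 0.4,
    "follows_patterns": 0.3,
    "diff_minimality": 0.2,
    "doc_quality": 0.1,
}

def judge_pr(scores):
    """Weighted rubric score in [0, 1]; scores maps criterion -> 0..1."""
    return sum(RUBRIC[c] * scores.get(c, 0.0) for c in RUBRIC)

def pick_winner(candidates):
    """Darwinian selection: the PR with the highest rubric score wins."""
    return max(candidates, key=lambda pr: judge_pr(candidates[pr]))

# Three parallel agents produce competing PRs for the same task:
prs = {
    "agent-a": {"tests_pass": 1.0, "follows_patterns": 0.6,
                "diff_minimality": 0.9, "doc_quality": 0.5},
    "agent-b": {"tests_pass": 1.0, "follows_patterns": 0.9,
                "diff_minimality": 0.7, "doc_quality": 0.8},
    "agent-c": {"tests_pass": 0.0, "follows_patterns": 1.0,
                "diff_minimality": 1.0, "doc_quality": 1.0},
}
print(pick_winner(prs))  # agent-b
```

A self-healing loop would feed the losing PRs' lowest-scoring criteria back to their agents as revision prompts rather than discarding them outright.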
In this conversation, Eric Simons, founder and CEO of StackBlitz, walks through one of the fastest and most consequential pivots in modern developer tooling. After nearly seven years building deep browser infrastructure and reaching roughly $700k in ARR, the company reoriented around Bolt, effectively defining the vibe-coding category and scaling past $15M ARR in a matter of months. We go under the hood of the WebAssembly-based architecture that lets Bolt run a full Node.js environment directly in the browser, delivering near-instant feedback and fundamentally different unit economics than cloud-hosted VMs. Eric explains the specific model breakthrough that made full-stack, one-shot app generation viable, and why this moment reordered who actually builds software inside companies.

00:00 Introduction
00:36 The 7-Year History and Pivot Point
01:01 The Original WebAssembly Vision
03:20 Near Bankruptcy: $700k ARR
10:49 Sonnet 3.5: The Vibe Coding Unlock
14:23 The $15 Million ARR Board Meeting
17:19 The WebContainer/WASM Advantage
22:07 Bolt Demo: Full-Stack App from a Prompt
31:09 The Changing Role of PMs and Designers
36:23 Mitigating AI-Generated Code Security Risks
47:31 The Entrepreneurial Mindset
53:35 What Eric is Tinkering With Now
In this episode of AI Tinkerers One-Shot, Joe sits down with Andrew Baker—serial builder, former Twilio engineer, and hands-on experimenter in agentic systems—to explore the rapidly evolving frontier of browser automation and AI-driven agents.

Andrew shares how his journey began with simple scripting experiments and gradually evolved into sophisticated browser agents capable of handling complex, real-world workflows. One standout example: an airline seat selector that used browser agents to secure optimal seats for frequent flyers—highlighting both the power and the limitations of today’s tooling.

Along the way, Andrew breaks down the practical challenges builders face when working with browser agents at scale:
• Vision model accuracy and UI interpretation
• DOM complexity and brittle page structures
• Authentication hurdles and session persistence
• The real economics of running large-scale automations

The conversation then shifts to “Claude Draws,” Andrew’s playful yet technically impressive side project that brings the classic 90s app Kid Pix into the age of AI. He explains how he wired up a remote PC, streamed sound output, and carefully crafted prompts that allow Anthropic’s browser agent to control a nostalgic art application—brushes, stamps, chaos, and all. The result is both a technical deep dive and a reminder that creativity is often where agentic tooling shines most.

Joe and Andrew also zoom out to examine the broader ecosystem shaping the future of browser-native agents. They discuss why UI accessibility matters for agents, how frameworks like Stagehand and Playwright are transforming automation workflows, and why personal evaluation benchmarks are becoming essential for builders pushing these systems beyond demos and into real usage.

💡 Resources & Links
• Andrew Baker: https://www.linkedin.com/in/andrewtorkbaker
• AI Tinkerers: https://aitinkerers.org
• Andrew’s newsletter: https://implausible.ai

What you’ll learn:
• How browser automation evolved from basic scripts to autonomous agents
• Why DOM parsing, vision models, and page structure still trip up agents
• How Claude for Chrome was used to control a web-based Kid Pix experience
• The architecture behind remote execution, sound streaming, and automation hacks
• How Stagehand and Playwright support modern browser automation
• The technical, economic, and ethical considerations shaping the future of browser agents

Chapters
00:15 — Introduction and AI Tinkerers Community
02:49 — Twilio Origins and Browser Automation Journey
04:50 — Building the Airline Seat Selector
07:51 — Browser Agent Challenges and Vision Models
10:44 — Stagehand Framework and Browser Automation Stack
13:28 — Claude for Chrome and Authentication
16:58 — Kid Pix Origins and Demo Setup
21:33 — Technical Architecture and Playwright Tricks
29:24 — Evaluation Platform and Personal Benchmarks
37:42 — Future of Browser Agents and Web Economics

Subscribe for more conversations with the builders shaping the future of AI, automation, and agentic systems.
What happens when a lifelong tinkerer turns curiosity into two major AI companies? In this episode of AI Tinkerers One-Shot, Joe talks with Lukas Biewald—founder of Weights & Biases and CrowdFlower—about how early projects like robot cars and Raspberry Pi experiments shaped his engineering mindset and entrepreneurial path.

Chapters:
00:00 — Intro & Guest Background
02:07 — Early Tinkering and CrowdFlower
03:06 — Building Robot Cars and Meeting Pete Warden
08:41 — From Tinkering to Weights & Biases
12:27 — Parenting, Vibe Coding, and Kids as Makers
21:56 — 3D Printing and Creative Play
24:25 — AI Tools, Team Structure, and Company Growth
26:43 — Agentic Coding: Opportunities and Challenges
35:40 — AI in Production: Observability and Real-World Use Cases
49:28 — The Future of AI, Fine-Tuning, and RL

https://youtu.be/S84CjOrlMcY
In this episode of AI Tinkerers One-Shot, Joe sits down with Steve Yegge—engineer and creator of the Beads framework—to explore how open source tools are transforming the way we build with AI. Steve shares the story behind Beads, a new framework that gives coding agents memory and task management, enabling them to work longer, smarter, and more autonomously. From his days at Amazon and Google to leading engineering at Sourcegraph, Steve reveals how Beads is already reshaping developer workflows and why it’s gaining hundreds of contributors in just weeks.

What you’ll learn:
- How Beads gives coding agents “session memory” and lets them manage complex, multi-step projects.
- Why Steve believes the future of engineering is about guiding and supervising AI—rather than just writing code.
- The evolution from chaotic markdown files to structured, issue-based workflows.
- Techniques for multimodal prompting, automated screenshot validation, and “landing the plane” for session cleanup.
- The challenges and breakthroughs in deploying AI tools at scale within organizations.
- How Beads and similar frameworks are making it easier for both junior and senior developers to thrive in the age of AI.

Whether you’re a developer, tinkerer, or just curious about the next wave of AI-assisted coding, this deep dive with Steve Yegge will show you what’s possible now—and what’s coming next.

💡 Resources:
- Beads – https://github.com/steveyegge/beads
- Steve Yegge – https://www.linkedin.com/in/steveyegge/ & https://x.com/Steve_Yegge
- AI Tinkerers – https://aitinkerers.org

Subscribe for more conversations with the builders shaping the future of AI and robotics!

00:00 - Introduction to Steve Yegge and Beads Framework
02:10 - Steve's Background and Sourcegraph Amp
08:00 - Building a React Game Client with AI Agents
15:36 - Multimodal Prompting and Screenshot Validation
23:16 - Code Review Techniques and Agent Confidence
32:01 - The Evolution of Beads: From Markdown Chaos to Issue Tracking
43:11 - Landing the Plane: Automated Session Cleanup
52:09 - Deploying AI Tools in Organizations
58:59 - Code Review Bottlenecks and Graphite Solution
01:02:57 - Closing Thoughts on AI-Assisted Development
What if your home robot didn’t just clean, but felt alive — learning, adapting, and becoming part of your family?

In this episode of AI Tinkerers One-Shot, Joe talks with Axel Peytavin, Co-founder & CEO of Innate, about his mission to create robots that aren’t just functional, but truly responsive companions. From his early start coding at age 11 to building one of the first GPT-4 Vision-powered robots, Axel shares how his team is creating an open-source robotics kit and one of the first agentic frameworks for robots — giving developers the tools to teach, customize, and build the next generation of embodied AI.

What you’ll learn:
- Why Axel believes “robots that feel alive” are the future — beyond flashy demos of backflips and kung fu.
- How Innate is making robotics accessible with an open-source hardware and SDK platform.
- The breakthroughs (and roadblocks) in fine motor manipulation, autonomy, and real-time learning.
- How teleoperation, deep learning, and reinforcement learning are shaping the next era of household robots.
- Axel’s vision for robots as companions: cleaning, tidying, assisting — and even calling for help in emergencies.

Whether you’re a tinkerer, developer, or just curious about how soon robots will fold your laundry, this deep dive shows what’s possible now — and what’s coming next.

💡 Resources:
- Innate Robotics – https://innate.bot/
- Axel Peytavin’s Twitter – https://x.com/ax_pey/
- AI Tinkerers – https://aitinkerers.org

Subscribe for more conversations with the builders shaping the future of AI and robotics!

00:00 Axel’s mission — building robots that feel alive
00:57 The open-source kit that lets any tinkerer train new behaviors
05:00 Why applied mathematics is the foundation for AI + robotics
08:17 Early projects: Minecraft plugins with 200K+ downloads
11:04 Innate’s vision for teachable household robots
12:01 Why fine-motor manipulation is the real breakthrough, not backflips
15:19 How deep learning is driving rapid robotics progress
17:11 Teleoperation as the engine for data collection and training
23:21 Why tidying up, laundry, and dishes are the killer apps for home robots
32:24 Live teleoperation demo of Maurice in action
36:08 Breaking down the system architecture — Wi-Fi, WebSockets, Python SDK
41:40 Maurice shows delicate fine-motor skills with object pickup
43:53 How Innate built one of the first agentic frameworks for robots
49:50 The rise of an open-source robotics community around Maurice
57:03 Viral GPT-4 Vision robot demo — and what it revealed about the future
Learn how to demystify large language models by building GPT-2 from scratch — in a spreadsheet. In this episode, MIT engineer Ishan Anand breaks down the inner workings of transformers in a way that’s visual, interactive, and beginner-friendly, yet deeply technical for experienced builders.

What you’ll learn:
• How GPT-2 became the architectural foundation for modern LLMs like ChatGPT, Claude, Gemini, and LLaMA.
• The three major innovations since GPT-2 — mixture of experts, RoPE (rotary position embeddings), and advances in training — and how they changed AI performance.
• A clear explanation of tokenization, attention, and transformer blocks that you can see and manipulate in real time.
• How to implement GPT-2’s core in ~600 lines of code and why that understanding makes you a better AI builder.
• The role of temperature, top-k, and top-p in controlling model behavior — and how RLHF reshaped the LLM landscape.
• Why hands-on experimentation beats theory when learning cutting-edge AI systems.

Ishan Anand is an engineer, MIT alum, and prolific AI tinkerer who built a fully functional GPT-2 inside a spreadsheet — making it one of the most accessible ways to learn how LLMs work. His work bridges deep technical insight with practical learning tools for the AI community.

Key topics covered:
• Step-by-step breakdown of GPT-2 architecture.
• Transformer math and attention mechanics explained visually.
• How modern LLMs evolved from GPT-2’s original design.
• Practical insights for training and fine-tuning models.
• Why understanding the “old” models makes you better at using the new ones.

This episode of AI Tinkerers One-Shot goes deep under the hood with Ishan to show how LLMs really work — and how you can start building your own.

💡 Resources:
• Ishan Anand LinkedIn – https://www.linkedin.com/in/ishananand/
• AI Tinkerers – https://aitinkerers.org
• One-Shot Podcast – https://one-shot.aitinkerers.org/

👍 Like this video if you found it valuable, and subscribe to AI Tinkerers One-Shot for more conversations with innovators building the future of AI!
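The sampling controls the episode covers (temperature, top-k, top-p) are easy to see on a toy distribution. A minimal sketch, with made-up logits for a four-token vocabulary; real models apply the same math over the full vocabulary at every step:

```python
# Toy demonstration of temperature, top-k, and top-p (nucleus) filtering.
# The four logits below are invented for illustration.
import math

def softmax(logits, temperature=1.0):
    """Temperature < 1 sharpens the distribution; > 1 flattens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def top_k(probs, k):
    """Zero out everything but the k most likely tokens, then renormalize."""
    cutoff = sorted(probs, reverse=True)[k - 1]
    kept = [p if p >= cutoff else 0.0 for p in probs]
    z = sum(kept)
    return [p / z for p in kept]

def top_p(probs, p):
    """Nucleus sampling: keep the smallest set of tokens whose mass >= p."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, mass = [0.0] * len(probs), 0.0
    for i in order:
        kept[i] = probs[i]
        mass += probs[i]
        if mass >= p:
            break
    z = sum(kept)
    return [q / z for q in kept]

logits = [2.0, 1.0, 0.5, -1.0]            # hypothetical next-token logits
probs = softmax(logits, temperature=0.7)   # sharpened by low temperature
print(top_k(probs, 2))    # only the two best tokens keep probability
print(top_p(probs, 0.9))  # nucleus size adapts to how peaked probs is
```

Unlike top-k's fixed cutoff, top-p keeps more tokens when the model is uncertain and fewer when one token dominates.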
In this episode of AI Tinkerers Global Stage, we go deep with Steve Krenzel, founder of LogicLoop and formerly of the CTO's office at Brex. Steve shows us how his company turns standard operating procedures (SOPs) into fully functioning APIs—complete with schema generation, test cases, structured outputs, and backtesting—within seconds.

We break down:
1. Why Steve avoids agentic frameworks
2. How LogicLoop automates 100K+ tasks/month for real customers
3. The power of structured output for reasoning and reliability
4. How prompt caching and append-only templates unlock scale
5. His open-source coding agent that builds software from scratch
6. How they achieved error rates under 2%, beating human teams
7. His famous Prompt Engineering Guide that went viral in 2023

If you’re building with LLMs, designing autonomous workflows, or just want to see what the future of developer productivity looks like—this is a must-watch.
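The structured-output pattern discussed in this episode can be sketched as a schema check plus retry around a model call. Everything below is hypothetical: the schema, the stubbed model, and the retry policy are illustrative, not LogicLoop's implementation.

```python
# Hypothetical sketch of structured-output validation: check a model's
# JSON response against a fixed schema and retry on malformed output.

SCHEMA = {"decision": str, "reasoning": str, "confidence": float}

def validate(response):
    """Accept only dicts with exactly the expected keys and value types."""
    return (isinstance(response, dict)
            and set(response) == set(SCHEMA)
            and all(isinstance(response[k], t) for k, t in SCHEMA.items()))

def call_model_stub(prompt):
    # Stand-in for an LLM call configured to emit structured JSON.
    return {"decision": "approve",
            "reasoning": "matches SOP step 3",
            "confidence": 0.92}

def run_task(prompt, retries=3):
    """Retry until output validates: malformed responses fail loudly
    instead of flowing silently into downstream logic."""
    for _ in range(retries):
        out = call_model_stub(prompt)
        if validate(out):
            return out
    raise ValueError("no valid structured output after %d retries" % retries)

print(run_task("Apply the refund-policy SOP to ticket #123"))
```

Forcing every SOP step through a fixed schema is what makes outputs testable and backtestable, since each response can be compared field by field against expected values.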
Discover how Robert Lukoszko, CEO of Stormy AI, is building the future of AI-powered marketing by automating influencer outreach end-to-end. This interview goes deep into his journey from viral AI demos to Y Combinator, revealing critical insights for AI builders and founders.

You’ll learn:
• The surprising challenges and limitations of building AI applications that deeply integrate with operating systems.
• Why local AI models, despite their appeal, often struggle to compete with cloud-based solutions for real-world business cases.
• Robert’s unique approach to AI-assisted development, leveraging tools like Claude 3.7 for rapid prototyping and efficient coding.
• How Stormy AI uses advanced AI to find niche influencers, analyze engagement, and automate outreach, transforming traditional marketing.
• The strategic importance of distribution and market fit over pure technological innovation for venture-scale AI companies.

Robert Lukoszko, previously co-founder of Fixkey AI (acquired) and an alumnus of Y Combinator (S24 with Stormy AI, W22 with ngrow.ai), shares his extensive experience in applying AI to new modalities and building high-growth startups.

This episode of AI Tinkerers One-Shot offers a practical look at the technical and entrepreneurial realities of building in the generative AI space.

💡 Resources:
• Stormy AI - https://stormy.ai
• Robert Lukoszko’s LinkedIn - linkedin.com/in/robert-lukoszko
• AI Tinkerers - https://aitinkerers.org
• One-Shot Podcast - https://one-shot.aitinkerers.org/

Social Media: @AITinkerers @stormy_hq @Karmedge

👍 Like this video if you found it valuable, and subscribe to AI Tinkerers One-Shot for more conversations with innovators building the future of AI!

00:00 – Introduction & Background
02:38 – Visual AI, Demos & Startup Idea
06:27 – Local vs. Cloud Models
10:07 – Desktop AI App & Context Importance
14:11 – Building the App & OS Integration
23:13 – Ambient AI & Contextual Vision
32:17 – Stormy AI Pivot & Demo
38:35 – AI Mindset & Content Creation
43:57 – AI Model Comparison & Cost
Discover how AI is bridging the communication gap between humans and dogs, unlocking deeper insights into canine emotions, intentions, and even health. In this One-Shot interview, Praful Mathur, founder of Sarama, shares his groundbreaking work on a full-stack AI system designed to interpret dog vocalizations and body language. Praful, an experienced builder in AI, reveals how his innovative approach could transform our understanding of our furry companions.

What you’ll learn:
• The surprising history and potential of human-animal communication through AI.
• How to build custom AI models for complex, real-world data like dog vocalizations.
• Praful’s unique strategy for collecting and annotating large datasets from pet owners.
• The practical applications of AI in detecting subtle health issues in dogs.
• How AI tools can accelerate product development, from industrial design to strategic planning.

Key topics covered:
• The sophistication of dog understanding vs. current AI models.
• Leveraging dogs as ‘biological peripherals’ for detection.
• Training original AI models on novel datasets (SVMs, KNNs, transformers).
• Hardware and software architecture for real-time animal data collection.
• Using AI for rapid industrial design and company-building tasks.

Join Joe from AI Tinkerers One-Shot as he takes a deep dive with Praful Mathur, an innovator pushing the boundaries of AI to create meaningful connections between humans and animals. This conversation explores the technical challenges and profound implications of building AI that truly understands our pets.

💡 Resources:
• Sarama - https://www.sarama.ai/
• Sarama on IG - https://instagram.com/withsarama
• Praful Mathur’s LinkedIn - linkedin.com/in/praful-mathur
• AI Tinkerers - https://aitinkerers.org
• One-Shot Podcast - https://one-shot.aitinkerers.org/

Social Media: @AITinkerers @PrafulMathur

👍 Like this video if you found it valuable, and subscribe to AI Tinkerers One-Shot for more conversations with innovators building the future of AI!

00:00 Introduction
00:15 Joe’s Reflection on the Interview
01:32 Introducing Praful Mathur & Sarama
01:55 Understanding Dog Communication
03:39 Beyond Words: Dog Communication
04:37 Dogs as AI Peripherals
05:27 History of Animal Communication Tech
06:47 Sarama: What They’ve Built Today
07:47 Sarama Hardware & Data Collection
09:31 Sarama App & Cloud Processing
11:30 Understanding Dog Behavior & Emotion
12:57 Training Original AI Models for Dogs
16:32 Multimodal Data & Sensors
18:49 Confidence & Data Needs for Dog AI
20:19 ML Stack & Training Approaches
22:06 Live Demo: Dog Bark Analysis
28:20 Dog’s Vocabulary & Alerts
30:13 AI for Early Health Detection
32:13 AI in Meta Process & Design
35:04 Open Source Strategy & Data Collection
37:18 Communicating AI Insights to Humans
38:36 Agentic Coding Workflow
44:19 Model Comparison: Gemini vs. Claude
46:39 How Praful Found AI Tinkerers
47:48 Conclusion
What you’ll learn:
• How reinforcement learning can reduce AI agent error rates by up to 60% and drastically lower inference costs.
• The critical difference between supervised fine-tuning and RL for agentic workflows, and why RL is essential for true agent reliability.
• A practical, code-level walkthrough of building and training an email search agent, on a 14-billion-parameter open-source model, that outperforms OpenAI’s GPT-3.5.
• Strategies for generating high-quality synthetic data and designing nuanced reward functions with ‘partial credit’ to effectively train your agents.
• Key use cases where RL fine-tuning delivers the most significant benefits, including real-time voice agents and high-volume applications.

Kyle Corbitt is the founder of OpenPipe, a platform dedicated to helping enterprises build and deploy customized AI models using advanced fine-tuning and reinforcement learning. He’s a seasoned builder who has been working at the frontier of fine-tuning since before public APIs existed.

Key topics covered:
• The limitations of off-the-shelf LLMs for agent reliability and how RL solves them.
• The importance of latency and cost optimization in real-world AI deployments.
• Detailed explanation of the agentic workflow and tool calling in an email search bot.
• The Enron email dataset as a realistic environment for agent training.
• OpenPipe’s open-source Agent Reinforcement Trainer (ART) library for building RL agents.
• The iterative process of data generation, rubric-based scoring, and model updates.

This episode of AI Tinkerers One-Shot goes under the hood with Kyle to share practical learnings for the community.

💡 Resources:
• OpenPipe Website - https://openpipe.ai
• Kyle Corbitt LinkedIn - https://www.linkedin.com/in/kcorbitt/
• AI Tinkerers - https://aitinkerers.org
• One-Shot Podcast - https://one-shot.aitinkerers.org/

Social Media: @AITinkerers @OpenPipeAI @corbtt

👍 Like this video if you found it valuable, and subscribe to AI Tinkerers One-Shot for more conversations with innovators building the future of AI!

00:00 Introduction
01:09 Welcome Kyle Corbitt, Founder of OpenPipe
01:55 What OpenPipe Does
02:31 OpenPipe’s Journey and YC Experience
04:13 Email Search Bot Project Overview
05:19 Why Fine-Tuning for Email Search
06:22 Email Search Bot: Queries and Results
09:23 On-Premise Deployment and Data Sensitivity
10:45 Agent Trace Example and Tooling
13:55 Using the Enron Dataset
15:13 Reinforcement Learning Fundamentals
17:01 Synthetic Data Generation with Gemini 2.5 Pro
18:51 Reliable Q&A Pairs and Data Scale
21:59 Fine-Tuning Impact on Model Performance
22:25 RL Adoption in Industry and Community
24:37 Rollout Function and Agent Implementation
27:52 Rubric and Reward Calculation for RL
30:39 Training Loop and Model Updates
33:52 RL Fine-Tuning vs. OpenAI’s Fine-Tuning
40:38 Time Commitment for RL Projects
41:55 Use Cases for RL Fine-Tuning
45:37 OpenPipe’s Offerings: Open Source, White Glove Service
47:07 Kyle’s Side Tinkering and Future of AI
49:59 Discovering AI Tinkerers
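The 'partial credit' reward design discussed in this episode can be sketched as a dense reward function. The components and weights below are hypothetical, not OpenPipe's actual rubric; the point is that a rollout which makes partial progress still receives training signal.

```python
# Hypothetical partial-credit reward for an email search agent rollout.
# Component weights are illustrative only.

def reward(found_correct_email, answer_matches, num_turns, max_turns=10):
    """Dense reward: partial credit for progress, not just final success."""
    r = 0.0
    if found_correct_email:
        r += 0.5                           # partial credit: right email located
    if answer_matches:
        r += 0.5                           # full task success
    r -= 0.02 * min(num_turns, max_turns)  # mild penalty for long trajectories
    return r

# A rollout that finds the right email but answers wrong still gets signal:
print(reward(found_correct_email=True, answer_matches=False, num_turns=4))
print(reward(found_correct_email=True, answer_matches=True, num_turns=4))
```

Without the partial-credit term, early training would see mostly all-or-nothing rewards and the policy update would have little gradient to climb.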
Discover a groundbreaking approach to optimizing Large Language Models with Tomasz Kolinko, a true OG tinkerer and entrepreneur. In this One-Shot interview, Tomasz unveils his 'Effort Engine,' a novel algorithm that dynamically selects which computations are performed during LLM inference, allowing for significant speed improvements while maintaining surprising output quality. Learn how this method goes beyond traditional quantization by dynamically managing computations and even enabling partial model loading to save VRAM.

Tomasz shares his unique benchmarking techniques, including the use of Kullback-Leibler divergence and heat maps, offering a new lens to understand how models behave under reduced 'effort.' This conversation provides practical insights into the underlying mechanics of AI models and offers a fully open-source project for practitioners to experiment with.

💡 Resources:
• Tomasz Kolinko's GitHub - https://kolinko.github.io/effort/about.html
• The Basics - https://kolinko.github.io/effort/equations.html
• AI Tinkerers - https://aitinkerers.org
• One-Shot Podcast - https://one-shot.aitinkerers.org/

Social Media Tags: @AITinkerers @kolinko

👍 Like this video if you found it valuable, and subscribe to AI Tinkerers One-Shot for more conversations with innovators building the future of AI!

00:00 Introduction
01:07 Welcome Tomasz Kolinko
02:11 Introducing Effort Engine
03:10 Dynamic Inference Explained
05:56 How the Algorithm Works
08:07 Speed vs. Quality Trade-offs
11:37 Dynamic Weight Loading & VRAM
15:24 Effort Engine Demo
26:01 Model Breakdown Observations
29:49 Architecture & Benchmarks
32:17 Kullback-Leibler Divergence
39:22 Heat Map Visualization
41:07 Community & Future Work
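The KL-divergence benchmarking mentioned above compares the model's output distribution at full effort against a reduced-effort run. A minimal sketch over toy next-token distributions; the probabilities are made up, and a real benchmark would span the whole vocabulary across many positions:

```python
# Sketch of KL-divergence drift between full-effort and reduced-effort
# next-token distributions. The two distributions below are invented.
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i), in nats."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

full_effort    = [0.70, 0.20, 0.07, 0.03]  # baseline model's token probs
reduced_effort = [0.65, 0.24, 0.08, 0.03]  # hypothetical low-effort approximation

drift = kl_divergence(full_effort, reduced_effort)
print(f"KL drift: {drift:.4f} nats")  # small drift => output barely changed
```

Averaging this drift over many tokens gives a single number for how much a given effort setting perturbs the model, which is what makes heat maps over layers and effort levels possible.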
Discover how to build powerful AI agents that integrate with your personal communication, as Luke Harris, Head of Growth at ElevenLabs, shares his journey and groundbreaking projects. Luke, a true tinkerer, reveals how he built the most popular WhatsApp MCP server and an ingenious iPhone shortcut for superior voice transcription.

What you’ll learn:
• The surprising gap in consumer AI and how to build solutions for it.
• Practical insights into building secure AI agent systems and managing data privacy.
• How ElevenLabs’ new speech-to-text (Scribe) and conversational AI APIs enable advanced voice agents.
• The importance of launching personal projects and the unexpected opportunities they create.
• Emerging growth areas for AI in education and niche industries.

Luke Harris leads growth at ElevenLabs, a leading company in generative voice AI. With a background spanning software engineering, biotech ML, and entrepreneurship, Luke is a prolific builder and a Y Combinator alum.

Key topics covered:
• Building the WhatsApp MCP server and its technical challenges.
• The power of Apple Shortcuts for AI-powered mobile workflows.
• ElevenLabs’ Scribe model for highly accurate speech-to-text and diarization.
• Designing conversational AI agents with low latency and emotional expressiveness.
• AI-first workflows for product development, content creation, and sales intelligence.

This episode of AI Tinkerers One-Shot goes deep under the hood with a builder at the forefront of voice AI.

💡 Resources:
• ElevenLabs - https://elevenlabs.io
• Luke Harris’s Blog - https://harrys.co
• Luke Harris’s LinkedIn - linkedin.com/in/luke-harris-ai
• AI Tinkerers - https://aitinkerers.org
• One-Shot Podcast - https://aitinkerers.org/podcast

👍 Like this video if you found it valuable, and subscribe to AI Tinkerers One-Shot for more conversations with innovators building the future of AI!

00:00 Introduction to Luke Harris
01:51 Luke’s Background and Growth at ElevenLabs
04:59 The WhatsApp MCP Server Project
11:17 Data Privacy and Security in AI Agents
16:15 Automation vs. Human-in-the-Loop
27:17 ElevenLabs Speech-to-Text and iPhone Shortcut
31:34 ElevenLabs Scribe: Advanced Transcription Features
35:03 Conversational AI and Voice Agents
44:08 Building Voice Agents with ElevenLabs
51:41 Unexpected Growth Drivers for Voice AI
01:00:15 Luke’s AI-First Workflow and Resources
01:05:22 Keeping Up with AI & AI Tinkerers Community
Learn how to integrate AI agents directly into your existing applications and unlock new levels of user experience and developer productivity with Atai Barkai, CEO of CopilotKit. Discover how CopilotKit provides the essential infrastructure to bridge advanced AI models with your current software stack, making your applications smarter and more intuitive.

What you’ll learn:
• How CopilotKit’s CoAgents and the AG-UI protocol simplify the integration of AI agents into any application, regardless of framework.
• The practical benefits of implementing ‘SaaS Copilots’ to reduce learning curves and enhance user interaction in complex software.
• Real-world strategies for driving significant internal efficiency gains within large enterprises using AI agents.
• Why the ‘human-plus-AI’ mental model is crucial for the foreseeable future of intelligent systems.

Atai Barkai is the founder and CEO of CopilotKit, a leading open-source framework for building production-ready AI copilots. With extensive experience in AI agent user experience, Atai shares insights from working with indie developers to Fortune 100 companies, offering a unique perspective on the evolving AI landscape.

Key topics covered:
• Bridging AI agents with existing application UIs for enhanced functionality.
• Understanding AG-UI (the Agent User Interaction Protocol) for seamless agent-user communication.
• Implementing intent-based interfaces for complex SaaS applications.
• Achieving ‘industrial revolution level productivity gains’ with AI co-agents.
• The open-source model of CopilotKit and its ease of self-hosting.

This episode of AI Tinkerers One-Shot goes under the hood with Atai Barkai to share practical learnings for the community.

💡 Resources:
• CopilotKit Website - https://copilotkit.ai
• Atai Barkai LinkedIn - https://www.linkedin.com/in/atai-barkai
• AI Tinkerers - https://aitinkerers.org
• One-Shot Podcast - https://aitinkerers.org/podcast

Social Media: @AITinkerers @copilotkit @ataiiam

👍 Like this video if you found it valuable, and subscribe to AI Tinkerers One-Shot for more conversations with innovators building the future of AI!

00:00 Introduction
01:20 Welcome to One-Shot
01:55 How Joe Met Atai
03:29 Getting Started with CopilotKit
03:52 CopilotKit Components: Standard Agent & CoAgents
04:50 AG-UI Protocol & Events Explained
08:05 CopilotKit UI & Shared State Demo
11:34 Open Source vs. Cloud Model
13:18 Integrating CoAgents into Your App
14:38 Why Bring Agents into Applications?
15:39 Practical Agent Adoption & Use Cases
17:42 Learning More About CopilotKit
18:15 What Atai is Tinkering With
21:26 Craziest & Most Impactful Use Cases
What if you could skip half of your LLM's computations and still get the same output?

In this episode of One-Shot, we sit down with Tomasz Kolinko, the Warsaw-based founder of Effort Engine, a new AI inference algorithm that dynamically adjusts precision in real time. This isn't quantization. It's something weirder, and maybe more useful.

Tomasz walks us through how he:
- Built a custom algorithm that runs 2–3x faster on MacBooks
- Developed a system that can dynamically skip 50%+ of model computations
- Created heatmaps to visualize token-level divergence
- Benchmarked everything himself… and shared the code

You'll also see:
- Live demos of inference tuning from 100% down to 5% effort
- Why AI models still work (sometimes better!) with just 30% effort
- How a DIY hacker space in a car shop led to one of the most creative AI projects in Europe

If you're building with LLMs, pushing inference limits, or just obsessed with optimization, this episode will change how you think about AI computation.
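To make the "skip computations dynamically" idea concrete: here is a toy sketch of an effort-tunable matrix-vector product that only multiplies against the largest-magnitude inputs and skips the rest. This is not Tomasz's actual algorithm (Effort Engine is considerably more sophisticated); it just illustrates the general principle that many multiplications contribute little to the output:

```python
def approx_matvec(weights, x, effort=1.0):
    """Toy 'effort'-tunable matvec: for each output row, only multiply
    against the fraction of inputs with the largest |x_i|, skipping
    the rest. effort=1.0 reproduces the exact dot product."""
    k = max(1, int(effort * len(x)))
    # indices of the k largest-magnitude input entries
    top = sorted(range(len(x)), key=lambda i: -abs(x[i]))[:k]
    return [sum(row[i] * x[i] for i in top) for row in weights]


w = [[1, 2], [3, 4]]
exact = approx_matvec(w, [1, 1], effort=1.0)    # full computation
rough = approx_matvec(w, [10, 0.1], effort=0.5)  # skips the tiny input
```

The interesting empirical question the episode digs into is how little the output diverges even when half or more of the work is skipped — which is what the token-level divergence heatmaps visualize.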
In this episode of AI Tinkerers "One-Shot", we go deep with Steve Krenzel, founder of Logic, on how his company turns standard operating procedures (SOPs) into fully functioning APIs. We dig into schema generation, test cases, structured outputs, and backtesting.

We break down:
1. Why Steve avoids agentic frameworks
2. How Logic automates 100K+ tasks/month for real customers
3. The power of structured output for reasoning and reliability
4. How prompt caching and append-only templates unlock scale
5. His open-source coding agent that builds software from scratch
6. How they achieved error rates under 2%, beating human teams
7. His famous Prompt Engineering Guide that went viral in 2023

If you're building with LLMs, designing autonomous workflows, or just want to see what the future of developer productivity looks like, this is a must-watch.

Relevant Links:
Follow Steve: https://www.linkedin.com/in/stevekrenzel/
Follow Logic: https://www.linkedin.com/company/with-logic

From the episode:
- http://github.com/stevekrenzel/pick-ems
- http://app.staging.logic.inc/
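The structured-output-plus-backtesting pattern above can be sketched in a few lines: prompt the model to emit JSON matching a fixed schema, reject anything malformed, and replay historical inputs to measure the error rate against known-good decisions. The field names here are hypothetical, not Logic's actual schema:

```python
import json

# Hypothetical schema for one SOP step turned into an API response.
SCHEMA = {"decision": str, "reasoning": str, "confidence": float}


def parse_structured(raw: str) -> dict:
    """Parse an LLM reply prompted to emit JSON matching SCHEMA,
    rejecting anything malformed instead of silently guessing."""
    out = json.loads(raw)
    for field, typ in SCHEMA.items():
        if not isinstance(out.get(field), typ):
            raise ValueError(f"bad or missing field: {field}")
    return out


def backtest(replies, expected):
    """Replay historical cases and measure the disagreement rate
    against known-good decisions."""
    wrong = sum(
        parse_structured(r)["decision"] != e for r, e in zip(replies, expected)
    )
    return wrong / len(expected)


rate = backtest(
    ['{"decision": "approve", "reasoning": "meets policy", "confidence": 0.9}',
     '{"decision": "deny", "reasoning": "missing docs", "confidence": 0.8}'],
    ["approve", "approve"],
)
```

Requiring a `reasoning` field before `decision` is one way structured output aids reasoning as well as reliability: the model must show its work in a machine-checkable shape.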
What if building complex AI agents felt as natural as composing React components, and they could even rewrite their own code? 🤯

In this episode of One Shot / AI Tinkerers, host Joe sits down with Evan Boyle, founder of GenSX, to explore a radically new way to design, run, and ship long-running agent workflows.

🔑 Key takeaways
- React-inspired component model for agents: why JSX-style, type-safe functions beat static graphs for scalability and code reuse.
- Traces, telemetry & evals baked in: see every prompt, variable, and LLM call in real time.
- $4 self-modifying coding agent: Evan demos an agent that checks out its own repo, refactors 3K lines, runs tests, and pushes to GitHub… iteratively.
- Real-world production use cases: from million-document legal discovery to inbox-wide entity extraction and analytics.
- Durable execution & infra shift: why 5-second latencies and massive parallelism are forcing a rethink of serverless, queues, and caching.
- Developer experience first: faster dev loops with component-level caching, cursor rules, and LLM "rubber-duck" debugging tricks.

🛠️ Tools & frameworks mentioned
GenSX, React/JSX, OpenAI & Anthropic models, Temporal, Pulumi, Cursor, LangChain, LlamaIndex, Crew AI… and more.

🔗 Try GenSX → https://www.gensx.com
💬 Join the community → https://github.com/gensx-inc/gensx
🐦 Follow Evan on X/Twitter → https://x.com/_Evan_Boyle

🙌 Enjoyed the conversation? 👍 Like, 🔔 subscribe, and drop your questions or aha moments in the comments. It helps more builders discover the pod!

📍 Chapters
00:00 Intro & Evan's background
04:28 Why existing agent frameworks break at scale
12:55 Inside the React-style component model
23:10 Live demo: Hacker News Analyzer (1,000 LLM calls in parallel)
32:45 Tracing, telemetry, and evals
38:20 The self-modifying code agent ($4/iteration)
50:40 Real production agent use cases
59:05 Dev-tooling tips: caching, logging-only debug loops
1:08:30 The future of AI infrastructure & closing thoughts

#GenSX #AIAgents #DeveloperExperience #React #SelfModifyingCode #AIWorkflow #OneShotPodcast
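GenSX itself is TypeScript/JSX, but the core idea — workflows built from small, composable, nestable components rather than a static graph — can be sketched in a rough Python analogue. The component names and the Hacker News fan-out are stand-ins inspired by the demo, not GenSX's actual API:

```python
def component(fn):
    """Mark a plain function as a reusable workflow component.
    In GenSX proper this role is played by typed JSX components."""
    fn.is_component = True
    return fn


@component
def fetch_titles(n):
    # stand-in for fetching n Hacker News story titles
    return [f"story {i}" for i in range(n)]


@component
def summarize(title):
    # stand-in for a single LLM call
    return f"summary of {title}"


@component
def analyze(n):
    # components compose like ordinary function calls, so fanning out
    # over many items (the "1,000 LLM calls in parallel" demo) is just a map
    return [summarize(t) for t in fetch_titles(n)]


results = analyze(3)
```

Because each unit is a plain function rather than a node wired into a global graph, reuse, nesting, and per-component caching come along for free — the scalability argument from the episode.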
While most people are still asking ChatGPT to write code snippets, Kevin Leneway is building full-stack products using nothing but prompts. In this One-Shot episode, he reveals the exact system he's used to launch over 40 startups at Pioneer Square Labs.

We break down:
- How he writes BRDs and PRDs that don't suck
- Why vibe coding fails, and how to actually use AI agents
- The markdown checklist that replaces a product team
- How to go from idea to working app with zero context switching
- His open-source starter kit that makes Cursor and Claude 3.5 feel like magic

If you're a builder, this will change how you work. No gimmicks. Just a ruthless focus on speed, clarity, and shipping. Watch now. Learn the system. Steal it.

Check out more content like this on Kevin's YouTube: https://www.youtube.com/@kevinleneway2290
Next AI Starter repo: https://github.com/kleneway/next-ai-starter
BRD/PRD/Checklist prompts: https://chatgpt.com/share/67c5ee78-0b94-800d-b467-ceecdbf6ce70
Agent Tasklist example: https://gist.github.com/kleneway/07432638aeaf6210316ebbc32dfbe643
Storybook: https://storybook.js.org/
UX Rubric example: https://github.com/kleneway/pastemax/blob/main/docs/ux-rubric.md
PasteMax open-source repo: https://github.com/kleneway/pastemax/
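The "markdown checklist that replaces a product team" pattern boils down to a simple loop: the agent reads the checklist, does the first unchecked task, checks it off, and the next run resumes from there. A minimal sketch (task names are hypothetical, not from Kevin's actual templates):

```python
import re

CHECKLIST = """\
- [x] Scaffold Next.js project
- [x] Add auth
- [ ] Build settings page
- [ ] Write Storybook stories
"""


def next_task(md: str):
    """Return the first unchecked '- [ ]' item, or None when done."""
    m = re.search(r"^- \[ \] (.+)$", md, re.MULTILINE)
    return m.group(1) if m else None


def mark_done(md: str, task: str) -> str:
    """Check off a task so the next agent run resumes where this one left off."""
    return md.replace(f"- [ ] {task}", f"- [x] {task}", 1)


task = next_task(CHECKLIST)
CHECKLIST = mark_done(CHECKLIST, task)
```

Keeping the plan in a plain markdown file means the state lives in the repo itself, so any agent (or human) can pick up the work with zero context switching.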