The Context Window

Author: This Dot Labs


Description

Join This Dot Labs' Tracy Lee, A.D. Slaton, and Brandon Mathis for candid conversations about the latest releases and technical advancements in the AI development ecosystem, how real teams are using these tools in production, and what it all means for the future of building software.
13 Episodes
Tracy Lee and Brandon Mathis break down the latest wave of AI news and what it means for how we work. They talk through Anthropic’s new Claude Cowork experience, the growing trend of agentic tools that can interact with files and workflows on your computer, Perplexity’s push toward AI-driven operating environments, and the bigger question of whether keyboards and traditional interfaces are starting to feel outdated. They also react to Anthropic’s new usage-limit experiment, discuss trust and security around AI tools that touch your machine, and close with a conversation about Moltbook’s Meta acquisition and what it says about the strange new social layer forming around AI agents.

In this episode, you will learn:
- AI is increasing the volume of ideas and work rather than eliminating engineering roles.
- Claude Cowork represents a new layer where AI can directly interact with files and workflows on your computer.
- There’s a clear difference between assistive AI tools and fully autonomous agents when it comes to trust and safety.
- Typing and traditional computer interfaces are becoming a bottleneck compared to faster AI interactions.
- The AI ecosystem is moving so fast that new tools and trends are emerging almost daily.

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
Brandon Mathis on LinkedIn: https://www.linkedin.com/in/mathisbrandon/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
Sponsored by This Dot Labs: https://ai.thisdot.co/
In this episode of The Context Window, Tracy Lee is joined by Brandon Mathis and Ben Lesh to talk about what’s actually happening with MCP now that the negative discourse has cooled off and builders have moved on to shipping. They break down the difference between MCP and MCP apps, why the app layer matters for real data and interactivity, and how teams can reuse existing web components inside chat experiences instead of rebuilding from scratch.

The conversation stays practical: what still feels bleeding edge, where the developer experience is rough, and why security and vetting will be the make-or-break challenge as app marketplaces scale. Along the way, they compare this new wave of AI app stores to mobile and Slack-style ecosystems, talk through how companies might think about distribution and monetization, and why standards are finally reducing the build-it-twice problem.

What You Will Learn:
- The difference between MCP and MCP apps, and why the app layer changes what is possible
- How MCP apps enable real data access, UI rendering, and interactive workflows inside chat
- Where the developer experience still feels early and what limitations teams should expect
- The security and vetting challenges AI app marketplaces must solve as adoption grows
- How standards like MCP could reduce duplicate work across OpenAI, Anthropic, and future platforms

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
Ben Lesh on LinkedIn: https://www.linkedin.com/in/blesh/
Brandon Mathis on LinkedIn: https://www.linkedin.com/in/mathisbrandon/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
Sponsored by This Dot Labs: https://ai.thisdot.co/
In this episode of The Context Window, the team reacts to OpenAI’s acquisition of OpenClaw, what it signals about where agent tooling is heading, and how much responsibility we’re starting to hand to AI inside real workflows.

They also talk through the rise of skill marketplaces, when installing shared capabilities genuinely improves productivity and when it introduces security and reliability concerns, along with early impressions of Anthropic’s new Sonnet 4.6 model and how it’s changing everyday coding work.

If you’re sorting out which AI tools to adopt, how much autonomy to allow, and where caution still matters, this episode offers practical perspective grounded in real usage.

What You’ll Learn:
- What the OpenAI + OpenClaw acquisition signals about the future of agent autonomy in developer tools
- How skill marketplaces actually work and when installing shared skills becomes risky
- Practical ways to decide what an AI agent should and should not be allowed to do
- Early real-world impact of Anthropic’s Sonnet 4.6 on coding workflows
- How teams can adopt new AI capabilities without breaking reliability or security

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
Ben Lesh on LinkedIn: https://www.linkedin.com/in/blesh/
Brandon Mathis on LinkedIn: https://www.linkedin.com/in/mathisbrandon/
Elliott Fouts on LinkedIn: https://www.linkedin.com/in/elliott-fouts/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
Sponsored by This Dot Labs: https://ai.thisdot.co/
This week on The Context Window, Tracy Lee, Brandon Mathis, and Ben Lesh unpack the chaos around emerging agent tools like OpenClaw, including naming drama, security risks in downloadable skill marketplaces, and the implications of giving agents access to local files. They also compare Codex, Cursor, and Claude, discussing developer experience, where each tool helps, and why vibe coding breaks down when maintainability and product quality matter.

They close by digging into adoption inside organizations: what an AI champion actually is, why leadership needs to model usage instead of delegating it downward, and how teams get considerable value by applying AI to simple, repeatable workflows before trusting it with harder problems.

What You Will Learn
- Why OpenClaw skill downloads can create real security risks and how agents can leak local secrets
- How Codex, Cursor, and Claude differ in real developer workflows, not just benchmarks
- Why vibe-coded apps often fail in maintainability, UX consistency, and long-term value
- What an AI champion actually is and why leadership has to drive adoption first
- How teams get faster results by applying AI to simple, repeatable tasks before complex problems

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
Ben Lesh on LinkedIn: https://www.linkedin.com/in/blesh/
Brandon Mathis on LinkedIn: https://www.linkedin.com/in/mathisbrandon/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
Sponsored by This Dot Labs: https://ai.thisdot.co/
In this episode of The Context Window, Tracy Lee is joined by Elliott Fouts, Ben Lesh, and Brandon Mathis to break down what is changing right now in AI-assisted software development.

They start with Cursor subagents and why the real win is fresh context windows that cut hallucinations, avoid compaction drift, and let agents run multi-step work with less micromanagement. From there, they connect it to practical workflows like review and test passes, Git worktrees for parallel work, and using multiple models to compare answers for higher-quality results.

They also tackle whether AI should be treated like a junior developer, and why that framing can help set expectations but can also limit what teams try to do. The episode closes with a bigger take on competitive advantage, arguing that companies will win by building internal AI competency and safely connecting models to proprietary workflows and data, with a look at the viral OpenClaw trend and its security tradeoffs.

What You Will Learn
- How Cursor subagents work and why fresh context windows are the key to reducing hallucinations and compaction drift
- Practical ways to orchestrate AI agents for larger tasks without constant micromanagement
- When it makes sense to treat AI like a junior developer and when that mindset becomes a bottleneck
- How teams are using multiple models together to review work, compare answers, and improve output quality
- Why real competitive advantage comes from building internal AI workflows connected to your own tools and data rather than relying on generic chatbots

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
Ben Lesh on LinkedIn: https://www.linkedin.com/in/blesh/
Brandon Mathis on LinkedIn: https://www.linkedin.com/in/mathisbrandon/
Elliott Fouts on LinkedIn: https://www.linkedin.com/in/elliott-fouts/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
Sponsored by This Dot Labs: ai.thisdot.co
Tracy Lee, Brandon Mathis, and Ben Lesh break down the latest AI hot takes, starting with the leaked Sentry email and the debate over whether $100/day on AI tools is real or even worth worrying about.

They dig into why AI hallucinates, how most people trigger it with vague prompts, and why using AI well is quickly becoming a real engineering skill. Along the way they debate juniors vs. seniors with AI, why agents often generate code that works but is still messy, and what teams need (reviews, CI, standards) before rolling AI out broadly.

They wrap with a standout example from Redis, where a maintainer used AI to replace thousands of lines of C++ with a small, maintainable C implementation, and what that says about the future of language-specialist devs.

What You Will Learn:
- AI tool adoption is now table stakes for engineers, and why getting in now matters more than cost-optimizing too early
- How to frame AI spend versus human time, and why meetings are often the bigger hidden cost
- Why hallucinations happen in practice and how to reduce them with better prompting and verification habits
- AI literacy as the new core skill, including making the model ask clarifying questions, self-review, and using it as a tutor
- How teams and codebases shift with AI, including juniors, seniors generating lots of code fast, tech debt still hurting, and structure choices that help or hurt agents

Chapters
0:00 Intro
1:33 Sentry founder email and the $100/day debate
4:04 What leaders actually want: stop penny-pinching, build skill
6:06 Tool caps and why they slow teams down
11:10 Are you late if you are adopting AI tools now
16:14 Sponsor break (This Dot Labs)
16:40 Hallucinations and how to prompt for better reliability
26:54 Do we still need junior engineers in an AI world
32:36 Hot takes: AI as the baseline for real dev work
40:08 Redis maintainer PR: replacing 3,900 lines with 300, AI-assisted
48:38 Learning how to learn (and why it matters more now)
52:10 Wrap-up + next week teaser

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
Ben Lesh on LinkedIn: https://www.linkedin.com/in/blesh/
Brandon Mathis on LinkedIn: https://www.linkedin.com/in/mathisbrandon/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
Sponsored by This Dot Labs: https://ai.thisdot.co/
In this episode of The Context Window, Tracy Lee, Ben Lesh, and Brandon Mathis dig into what people miss with Cursor and Claude Code, like feeding the tools the right documentation, using multiple agents to review output, and treating the model less like a genius and more like a fast junior developer that needs constant direction and code review. They talk about why tab development still wins in some real-world refactor work, why the future problem is not generating code but reviewing it, and what happens when AI can produce more than humans can realistically validate. The conversation also explores building agents inside companies, the emerging SDK race between platforms like Vercel and TanStack, why MCP isn’t dead but has growing pains around context bloat, and what enterprise teams can do when they are stuck with slower setups like AWS Bedrock.

What You’ll Learn:
- How to stop AI coding tools from producing “confident garbage” by giving them the right context and constraints
- Why using Cursor or Claude Code well means supervising an assistant like a fast junior dev and doing real code review again
- How to use multiple agents to review the same change so you can spot issues without reading every line manually
- When “tab development” beats full agent mode and how to recognize those refactor-style use cases
- What’s next in agent building inside companies, including tool-calling workflows, MCP growing pains, and surviving enterprise realities like Bedrock

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
Ben Lesh on LinkedIn: https://www.linkedin.com/in/blesh/
Brandon Mathis on LinkedIn: https://www.linkedin.com/in/mathisbrandon/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
Sponsored by This Dot Labs: https://ai.thisdot.co/
Tracy Lee and A.D. Slaton sit down on The Context Window to unpack a wild week in AI, starting with the eye-popping 500 billion dollars spent on AI infrastructure in 2025 and why the Cognizant CEO still says enterprise value is missing. They dig into reports of ChatGPT going “code red” in response to Gemini 3, what that means for OpenAI, and what it means for everyday builders trying to ship real products. Along the way they touch on ByteDance, call out LiveKit as a key piece of infrastructure for voice, video, and physical AI agents, and flag IBM’s move to acquire Confluent as another signal of where data and AI are heading.

What you will learn
- Why $500B spent on AI infrastructure has not translated into clear enterprise value yet
- What the Cognizant CEO’s comments really signal for teams building AI products
- How Gemini 3’s launch is shaking up the landscape for ChatGPT and OpenAI
- What a “Code Red” moment actually means for developers and companies relying on these platforms
- How LiveKit powers voice, video, and physical AI agents, and where it fits in the stack
- Why IBM acquiring Confluent matters for data, streaming, and real-time AI systems
- How to stay grounded and make practical decisions when AI news makes reality feel unstable

Chapters
0:00 Intro
0:53 Are we overspending on AI infrastructure, and where’s the enterprise value
2:54 Adoption gap, enablement work, and why 100% AI-generated code is still rare
6:11 High-touch AI training, workshops, and scaling AI practices across teams
8:58 Grok 4.22, AI trading experiments, and quant-style tools for everyone
13:51 OpenAI “Code Red,” rising competition, and what changes for Agile with agents
20:37 ByteDance agentic phone, AR glasses, and AI moving into the physical world
23:20 LiveKit, voice cloning, AI podcasts, and the problem of AI slop
27:00 Thinking machines, social media’s role in AI, and closing reflections

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
A.D. Slaton on LinkedIn: https://www.linkedin.com/in/adslaton/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social
Sponsored by This Dot Labs: https://ai.thisdot.co
In this episode of The Context Window, our panel discusses Anthropic’s possible 2026 IPO and what it signals for the AI ecosystem, the biggest AI and Bedrock announcements coming out of AWS re:Invent, and how Google’s latest Gemini releases actually perform for developers writing and shipping code. Brandon Mathis, Tracy Lee, and A.D. Slaton break down whether Gemini is worth using day to day, how AWS is positioning its AI stack for builders, and whether “prompt engineer” is still a real role or just a baseline skill every engineer is expected to have now.

What you’ll learn
- How Anthropic’s potential 2026 IPO could reshape the AI landscape for startups and incumbents
- What AWS is actually doing with Bedrock and its broader AI stack for builders
- How Google Gemini’s latest releases stack up for real-world coding and tooling workflows
- Whether “prompt engineer” is still a meaningful role or just a baseline skill for all developers

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
A.D. Slaton on LinkedIn: https://www.linkedin.com/in/adslaton/
Brandon Mathis on LinkedIn: https://www.linkedin.com/in/mathisbrandon/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social
Sponsored by This Dot: https://ai.thisdot.co
In this episode of The Context Window, Tracy, Brandon, and A.D. tackle the spicy question everyone is asking right now: are MCPs actually dead, or are we just misusing them? They unpack the tension between MCPs and good old CLIs, talk about when you really need a protocol versus when a simple script will do, and why over-abstracting everything into an MCP can wreck your context window instead of helping it.

From there, they get into security and observability, using the recent headlines about Claude Code being used in hacking campaigns as a jumping-off point to talk about how all powerful tools eventually get used for both good and bad, and why better orchestration visibility is the real missing piece for enterprises. Then the crew reacts to the GPT 5.1 release, compares it with Claude and Gemini, pokes holes in benchmark charts, and shares how they actually choose models in their day-to-day work.

What you will learn:
- Why GPT 5.1 matters (and where it still falls short) compared to Claude, Gemini, and Kimi K2
- When to use MCPs vs. plain old CLIs for agents, and why “MCP everything” is a trap
- How Claude Code was actually used in a real hacking workflow, and what that reveals about AI safety and observability
- Why MCP is more like “USB-C for agents” and GraphQL is “MCP for APIs,” and what that means for how we architect AI systems
- How different models really feel in practice: GPT vs. Claude vs. Gemini for coding, research, and creative work
- What today’s AI shifts mean for engineers’ jobs, workflows, and the tools we should actually be betting on over the next year

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
A.D. Slaton on LinkedIn: https://www.linkedin.com/in/adslaton/
Brandon Mathis on LinkedIn: https://www.linkedin.com/in/mathisbrandon/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social
Sponsored by This Dot: https://ai.thisdot.co
Are we hitting peak Gemini, and are open models like Kimi K2 about to challenge GPT 5 and Claude for real developer use?

In this episode, Tracy Lee, A.D. Slaton, and Brandon Mathis talk honestly about which models they actually lean on for daily engineering work, where Gemini falls short with tool calling, and why Claude and GPT 5 still feel more trustworthy. They dig into Kimi K2 from Moonshot Labs, how it stacks up on benchmarks and cost, and what it means for teams that care about both performance and their token bill.

They also zoom out to the bigger picture around compute, energy, and why human experience and real-world knowledge are becoming the most valuable input to any model.

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
A.D. Slaton on LinkedIn: https://www.linkedin.com/in/adslaton/
Brandon Mathis on LinkedIn: https://www.linkedin.com/in/mathisbrandon/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social
Sponsored by This Dot: https://ai.thisdot.co/
The browser wars are back, with AI at the center. Tracy Lee, Brandon Mathis, and A.D. Slaton break down OpenAI Atlas, Perplexity Comet, Arc and Dia from The Browser Company, and what agentic browsing means for Google, SEO, and engineering workflows. We dig into privacy and tracking with referrer headers and Braze, prompt injection risk, accessibility, and why LLM-first interfaces may change front-end work as we know it. Plus Cursor 2.0 vs. Claude Code, terminal-first dev flows, Super Whisper voice coding, MCP, and what is actually working inside companies right now.

What You Will Learn
- How AI browsers like Atlas, Comet, Arc, and Dia change web discovery and daily workflows
- Why Chromium dominance matters but doesn’t guarantee Google wins in an LLM-first world
- Agentic browsing fundamentals and how “chat as the address bar” reshapes search behavior
- Practical impacts on SEO and accessibility, and why semantic structure becomes table stakes
- How tracking works in the LLM era: referrer headers, auto-unfurls, and where tools like Braze fit
- The new AI marketplace model: SDKs, extensions, monetization, and “AdWords 2.0” dynamics
- Real UX tradeoffs from an agent booking flow: speed, reliability, and privacy boundaries
- Security realities: prompt injection, auth considerations, and what to harden first
- Voice-to-code workflows using Super Whisper and when hands-free dev actually helps
- Agentic coding vs. inline copilots: Cursor 2.0, Claude Code, and diff-first review habits
- Spec-driven prompting: write the plan in markdown, then “implement this plan” at scale
- How engineering roles shift toward orchestration, observability, and feature-level verification

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
A.D. Slaton on LinkedIn: https://www.linkedin.com/in/adslaton/
Brandon Mathis on LinkedIn: https://www.linkedin.com/in/mathisbrandon/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social
Sponsored by This Dot Labs: https://ai.thisdot.co/
Tracy Lee (CEO, This Dot Labs), Brandon Mathis (Engineering Lead), and A.D. Slaton (VP Engineering) debate agentic coding in the CLI vs. the browser, share hands-on workflows (multi-model collab: Claude ↔︎ Gemini ↔︎ “Codex”/OpenAI), and talk practical enablement: how seniors capture tribal knowledge with voice + transcripts, where GitHub Copilot fits, and what it takes to run safe pilots on Bedrock / Vertex / Azure without slowing teams to a crawl.

They wrap with MCP (Model Context Protocol) updates and a simple reality check for engineers: stop chasing every tool. Master setup, PR review, and repeatable delivery first.

Takeaways
- Treat browser tools as the “Apple experience,” CLIs as the “Linux experience.” Use the right club for the shot.
- Voice + lightweight capture turns hidden SME knowledge into searchable context for LLMs.
- For enterprises: small protected pilots beat months of “governance theater.”
- For devs: nail environment setup → ticket reading → PR reviews before fancy agents.

Tracy Lee on LinkedIn: https://www.linkedin.com/in/tracyslee/
A.D. Slaton on LinkedIn: https://www.linkedin.com/in/adslaton/
Brandon Mathis on LinkedIn: https://www.linkedin.com/in/mathisbrandon/
This Dot Labs Twitter: https://x.com/ThisDotLabs
This Dot Media Twitter: https://x.com/ThisDotMedia
This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
This Dot Labs Facebook: https://www.facebook.com/thisdot/
This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social
Sponsored by This Dot Labs: https://ai.thisdot.co/