E80: Build Better AI Agents: Context Engineering Over Prompts (Pt. 1)
Description
Your AI agents work—but they’re not smart. They follow instructions, yet fail on edge cases, forget context mid-task, and need constant supervision.
Malcolm Werchota reveals why your invoice automation, podcast metadata generation, and business workflows keep breaking down—and why it’s not the AI’s fault. The missing piece is context engineering, a concept most people have never heard of.
In this two-part series, Malcolm breaks down Anthropic’s groundbreaking research on how to build AI agents that actually think ahead. Learn why prompt engineering is no longer enough, how attention budget silently kills your automations, what context rot does to long-running tasks, and how the orchestrator pattern allows AI agents to spawn helper agents on demand.
You’ll hear how Malcolm’s team cut invoice processing from 45 minutes of manual work to zero human intervention—and why feeding your AI more data can actually make it dumber. This episode isn’t about magic prompts. It’s about designing the entire environment your AI operates in.
Key topics: AI agent automation challenges, context window vs. attention budget, why mega-prompts fail, orchestrator pattern design, system prompt architecture, tool-calling strategies, and scalable AI workflows.
Perfect for professionals implementing Claude AI, automating business processes, or anyone frustrated with unreliable AI agents. Malcolm’s “Ship First, Study Later” approach means real implementation—not theory.
Part 2 dives into advanced system prompts, minimal tool sets, and managing long-running tasks without context explosion.
WHAT YOU’LL LEARN
- Why functional AI agents still fail at business automation
- The difference between prompt engineering and context engineering
- How attention budget and context rot sabotage your workflows
- The orchestrator pattern: when agents build their own helpers
- Real-world cases: invoices, podcasts, and process automation
- Why mega-prompts make AI dumber—and what to do instead
- Anthropic’s context engineering framework
- How to design information architecture for Claude and other LLMs
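The orchestrator pattern covered in the episode can be sketched roughly like this. This is a conceptual illustration only—the class and method names are invented for this example and are not part of Anthropic’s framework or API:

```python
# Conceptual sketch of the orchestrator pattern: a coordinator spawns
# narrowly scoped helper agents on demand, so no single context window
# has to hold everything. All names here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class HelperAgent:
    """A worker with its own small, task-specific context."""
    role: str
    context: list = field(default_factory=list)

    def run(self, task: str) -> str:
        # A real helper would call an LLM with only its own context;
        # here we just simulate the result.
        self.context.append(task)
        return f"[{self.role}] done: {task}"


class Orchestrator:
    """Keeps the high-level plan and dispatches work to helpers,
    retaining only their summaries—not their full contexts."""

    def __init__(self) -> None:
        self.helpers: dict[str, HelperAgent] = {}
        self.results: list[str] = []

    def dispatch(self, role: str, task: str) -> str:
        # Spawn a fresh helper per role instead of growing one mega-prompt.
        helper = self.helpers.setdefault(role, HelperAgent(role))
        result = helper.run(task)
        self.results.append(result)  # keep the summary, drop the detail
        return result


orchestrator = Orchestrator()
orchestrator.dispatch("extractor", "pull totals from invoice.pdf")
orchestrator.dispatch("validator", "check totals against the purchase order")
print(len(orchestrator.helpers))  # 2 helpers spawned on demand
```

The point of the design: the orchestrator’s own context stays small because each helper burns its attention budget on one narrow task, and only the result flows back up.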
TOOLS & PLATFORMS
- Claude Code (Anthropic)
- Claude Sonnet 4.5 (1M token window)
- Gemini 2.5 (1M token window)
- ChatGPT (100–200k token window)
- 10 Valley OS (TenVOS) – context engineering case study
RESOURCES
- Anthropic Research: Effective Context Engineering for AI Agents
- Previous Episode: Building Claude Code Agents
- Previous Episode: 10 Valley OS – Context Engineering in Action
MALCOLM’S KEY INSIGHTS
“It’s like having an employee who follows orders perfectly—but never takes initiative or thinks ahead.”
“Context engineering manages everything the model uses: system instructions, tools, message history—not just the prompt.”
“The challenge now isn’t crafting perfect prompts. It’s curating the information within the model’s limited attention budget.”
“Don’t feed it a billion files. Use the smallest, clearest, highest-signal inputs possible.”
COMING IN PART 2
- Advanced system prompt structure
- Minimal tool sets for reliability
- Handling long-running tasks without context explosion
- Practical implementation blueprints
🔗 WHERE TO FIND MALCOLM WERCHOTA
LinkedIn → linkedin.com/in/malcolmwerchota
Website → werchota.ai
YouTube → youtube.com/@werchota
Facebook → AI Cookbook by Malcolm Werchota
Instagram → @malcolmwerchotaai
TikTok → @malcolmwerchota
📧 Get in touch:
Questions, feedback, or transformation stories → malcolm@werchota.ai
Episode ideas → social@werchota.ai
🎓 Upgrade your AI skills:
Check out the AI Fit Academy, Malcolm’s hands-on program that gets professionals shipping working AI workflows by Week 2—or your money back.
Learn more → werchota.ai/ai-fit-academy