AI Blindspot
Author: Yogendra Miraje
© Yogendra Miraje
Description
AI Blindspot is a podcast that explores the uncharted territories of AI, focusing on cutting-edge research and frontiers.
This podcast is for researchers, developers, curious minds, and anyone fascinated by the quest to close the gap between human intelligence and machines.
As AI advances at breakneck speed, it has become increasingly difficult to keep up with the progress. This is a human-in-the-loop, AI-hosted podcast.
15 Episodes
This episode covers the AIE World's Fair recap of Day 2, focusing on Keynotes & SWE Agents.
🧠 Key Takeaways:
Moore's Law for AI Agents: agent capability is doubling every 70 days (yes, you read that right).
Specifications as the "new code": aligning human intentions and values directly with model behavior, beyond old-school code artifacts.
Evals: absolutely critical for shipping AI, enabling rapid experimentation and tight feedback loops.
Dagger "Container Use": secure, customizable, and multiplayer-ready agent environments.
Thinking in Gemini: models now iteratively "think" for smarter, dynamic responses with variable compute.
Google Jules: an async coding agent supporting multitasking and parallel experimentation.
GitHub Copilot Agent Mode: autonomous searching, task execution, and self-healing for dev workflows.
Braintrust Loop Agent: automated prompt, dataset, and scorer optimization, a total eval game-changer.
This episode covers the AI Engineer World's Fair 2025, the largest and most impactful edition yet. With over 3,000 attendees and 250+ speakers from around the globe, the event brought together leading voices in AI to explore the future of agentic workflows, model development, and human-AI collaboration.
https://www.ai.engineer/
https://www.youtube.com/watch?v=z4zXicOAF28&t=917s&ab_channel=AIEngineer
The AI Engineer World's Fair 2025 made it clear: AI agents are fast becoming the core of digital interactions. From extending human capabilities to operating across tools and platforms, agents are shifting from helpful assistants to true teammates in workflows. Their rise is also reshaping software development, driving a move toward peer programming, domain-specific applications, and execution-focused innovation. The success of these systems now hinges less on novel ideas and more on delivering fast, thoughtful, and user-centric experiences.
A major theme was the growing dominance of the Model Context Protocol (MCP), which is quickly becoming the backbone of agentic systems. MCP solves the long-standing issue of "copy and paste hell" by allowing AI to interact directly with applications like Slack or error logs. Its design emphasizes simplicity for server developers while enabling rich, context-aware experiences through more complex clients. As enterprises adopt agents at scale, MCP is emerging as the foundation for handling credentials, authentication, observability, and integration with internal services.
As AI adoption deepens, local models have made impressive progress, offering low-latency, high-control environments for developers. At the same time, the cost of large models has plummeted, dropping from $30 to $2 per million tokens, making advanced AI more accessible than ever. This affordability, coupled with the rise of centralized infrastructure and MCP gateways, is fueling the creation of scalable, enterprise-grade systems.
AI engineering is rapidly maturing, shifting from demos to production-level deployments that require strong observability and robust design choices. The overall message was clear: effective AI products are driven by data flywheels, continuous loops of deployment, user feedback, and improvement. Value is no longer measured by how sophisticated the models are, but by the ratio of human effort to useful output. Agent-based ecosystems are already forming their own economies, where agents can autonomously discover, interact with, and even pay for services. And while the technology evolves, the most successful builders will be those who stay focused on clarity, context, and execution.
Agentic workflows are processes where AI agents dynamically plan, execute, and reflect on steps to achieve a goal, differentiating them from static, predefined workflows. Augmented LLMs, which serve as a base building block, are enhanced with capabilities like tool use and memory, enabling the creation of these more complex agents. This episode also distinguishes between an agentic workflow, the sequence of steps, and the agentic architecture, the underlying system that allows multiple workflows to run securely and effectively at scale, highlighting the benefits and challenges of implementing such systems.
Sources:
https://www.anthropic.com/engineering/building-effective-agents
https://weaviate.io/blog/what-are-agentic-workflows
In this episode, we discuss strategies for building effective AI agents, emphasizing simplicity and composable patterns over complex frameworks. We distinguish between workflows, which follow predefined code paths, and agents, where LLMs dynamically direct their own processes, noting that simpler solutions are often sufficient. To build effective AI agents: start with simple, composable building blocks, design tools carefully, and reach for more complex agentic patterns only when simple solutions are insufficient for the task's needs.
https://www.anthropic.com/engineering/building-effective-agents
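The workflow/agent distinction above can be sketched in a few lines of Python. This is only an illustration, not code from the episode or from Anthropic's guide: `call_llm` is a deterministic stub standing in for a real model call, and the ticket-handling scenario is invented.

```python
# A workflow follows a predefined code path; an agent lets the model's
# output decide the next step and when to stop. `call_llm` is a stub.

def call_llm(prompt: str) -> str:
    # Stub: a real system would call a model API here.
    if "classify" in prompt:
        return "refund"
    return "DONE: issued refund"

def workflow(ticket: str) -> str:
    # Workflow: the branching logic is fixed in advance by the code.
    category = call_llm(f"classify: {ticket}")
    handlers = {"refund": "issued refund", "bug": "filed bug report"}
    return handlers[category]

def agent(goal: str, max_steps: int = 5) -> str:
    # Agent: the model's own output controls the loop and termination.
    context = goal
    for _ in range(max_steps):
        response = call_llm(context)
        if response.startswith("DONE:"):
            return response[len("DONE:"):].strip()
        context += "\n" + response
    return "gave up"
```

The design point is the same one the episode makes: prefer the fixed workflow when the steps are known, and accept the agent loop's extra cost and unpredictability only when the model genuinely needs to choose its own path.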
DeepSeek-V3 is an open-weights large language model. Its key features include a remarkably low development cost, achieved through innovative techniques like inference-time computing and an auxiliary-loss-free load-balancing strategy. The model's architecture uses Mixture-of-Experts (MoE) and Multi-head Latent Attention (MLA) for efficiency. Extensive testing on various benchmarks demonstrates strong performance comparable to, and in some cases exceeding, leading closed-source models. Finally, the paper provides recommendations for future AI hardware design based on the DeepSeek-V3 development process.
https://arxiv.org/pdf/2412.19437v1
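To make the MoE idea concrete, here is a toy sketch of top-k expert routing, the core mechanism behind MoE layers. It is deliberately simplified: DeepSeek-V3's real MoE adds shared experts, fine-grained expert segmentation, and its auxiliary-loss-free balancing scheme, and operates on tensors rather than scalars. All names and sizes here are made up.

```python
# Toy top-k Mixture-of-Experts routing: each token activates only the
# k experts with the highest gate scores, so most parameters stay idle.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_layer(token, gate_scores, experts, k=2):
    """Route a token to the top-k experts and mix their outputs."""
    # Pick the k experts with the highest gate scores.
    top = sorted(range(len(gate_scores)), key=lambda i: -gate_scores[i])[:k]
    # Normalize the winners' scores into mixing weights.
    weights = softmax([gate_scores[i] for i in top])
    # Weighted sum of the selected experts' outputs; the rest do no work.
    return sum(w * experts[i](token) for w, i in zip(weights, top))
```

With four experts and k=2, only two experts run per token, which is how MoE models keep per-token compute far below their total parameter count.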
In today's episode, we discuss two research papers describing two distinct approaches to building multi-agent collaboration:
MetaGPT: a meta-programming framework using SOPs and defined roles for software development. https://arxiv.org/pdf/2308.00352
AutoGen: customizable, conversable agents interacting via natural language or code to build applications. https://arxiv.org/pdf/2308.08155
This episode discusses the agentic design pattern Tool Use. Tool use is essential for enhancing the capabilities of LLMs and allowing them to interact effectively with the real world. We discuss the following papers:
Gorilla: Large Language Model Connected with Massive APIs
https://arxiv.org/pdf/2305.15334
MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action
https://arxiv.org/pdf/2303.11381
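The tool-use pattern in these papers boils down to a simple contract: the model emits a structured tool call, the host program executes it, and the result goes back to the model. Below is a minimal sketch of that contract, not the Gorilla or MM-REACT implementation; the "model" is a hard-coded stub and the tool names are invented.

```python
# Minimal tool-use loop: parse a model-emitted JSON tool call,
# dispatch to the named tool, and return its result.
import json

# A registry of tools the model is allowed to invoke (invented examples).
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def fake_model(prompt: str) -> str:
    # Stub standing in for an LLM fine-tuned or prompted to emit tool calls.
    return json.dumps({"tool": "add", "args": [2, 3]})

def run_tool_call(prompt: str):
    call = json.loads(fake_model(prompt))   # parse the structured call
    fn = TOOLS[call["tool"]]                # look up the requested tool
    return fn(*call["args"])                # execute with the model's args
```

Restricting execution to a fixed registry, rather than eval-ing model output, is the standard safety choice this pattern relies on.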
This episode discusses the AI agentic design pattern "Reflection".
📝 𝗦𝗘𝗟𝗙-𝗥𝗘𝗙𝗜𝗡𝗘
SELF-REFINE is an approach where the LLM generates an initial output, then iteratively reviews and refines it, providing feedback on its own work until the output reaches a desired quality. This self-loop allows the LLM to act as both creator and critic, enhancing its output step by step.
🔍 𝗖𝗥𝗜𝗧𝗜𝗖
CRITIC leverages external tools, like search engines and code interpreters, to fact-check and refine LLM-generated outputs...
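The SELF-REFINE loop described above can be sketched as generate, critique, refine, repeat. In this illustration all three steps are deterministic stubs; in the actual technique each one is the same LLM under a different prompt, and the stopping rule and "draft vN" strings are invented for the example.

```python
# SELF-REFINE as a loop: the model drafts, critiques its own draft,
# and revises until the critique says the draft is good enough.

def generate(task: str) -> str:
    return "draft v1"                       # stub for the first model call

def critique(draft: str) -> str:
    # Stub critic: a real critic is the same LLM prompted to review.
    version = int(draft.split("v")[1])
    return "OK" if version >= 3 else "needs more detail"

def refine(draft: str, feedback: str) -> str:
    version = int(draft.split("v")[1])
    return f"draft v{version + 1}"          # stub for the revision call

def self_refine(task: str, max_rounds: int = 5) -> str:
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if feedback == "OK":                # critic accepts: stop refining
            break
        draft = refine(draft, feedback)
    return draft
```

The `max_rounds` cap matters in practice: without it, a critic that never says "OK" would loop forever and burn tokens.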
In this episode, we discuss the following agent architectures:
ReAct (Reason + Act): alternates reasoning and actions, creating a powerful feedback loop for decision-making.
Plan and Execute: breaks tasks into smaller steps before executing them sequentially, improving reasoning accuracy and efficiency, though it may face higher latency due to the lack of parallel processing.
ReWOO: separates reasoning from observations, improving efficiency, reducing token consumption, and maki...
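The ReAct feedback loop alternates a model-produced Thought/Action step with an environment-produced Observation until the model emits a final answer. Here is a bare-bones sketch of that loop; the scripted model, the `lookup` tool, and the transcript format are all invented stand-ins, not the paper's prompt format.

```python
# Bare-bones ReAct loop: model proposes an Action, the host runs it and
# appends an Observation, and the loop ends when the model answers.

def scripted_model(transcript: str) -> str:
    # Stub: emits an action first, then answers once it has an observation.
    if "Observation:" in transcript:
        return "Answer: Paris"
    return "Action: lookup[capital of France]"

def lookup(query: str) -> str:
    return "Paris is the capital of France."   # stub search tool

def react(question: str, max_turns: int = 4) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_turns):
        step = scripted_model(transcript)
        if step.startswith("Answer:"):
            return step[len("Answer:"):].strip()
        # Execute the action and feed the observation back to the model.
        query = step[len("Action: lookup["):-1]
        transcript += f"\n{step}\nObservation: {lookup(query)}"
    return "no answer"
```

This is the feedback loop the episode contrasts with Plan-and-Execute, where all steps are planned up front, and with ReWOO, where reasoning is decoupled from the observations entirely.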
🤖 AI Agents Uncovered! 🤖
In our latest episode, we're diving deep into the fascinating world of AI agents, focusing specifically on agents powered by Large Language Models (LLMs). These agents are shaping how AI systems can perceive, decide, and act, bringing us closer to the vision of highly adaptable, intelligent assistants.
Key Highlights
AI agents started in philosophy before migrating to computer science and AI. From simple task-specific tools to adaptable LLM-powered agents, their evoluti...
Dario Amodei's essay, "Machines of Loving Grace," envisions the upside of AI if everything goes right. Could we be on the verge of an AI utopia where technology radically improves the world? Let's find out! 🌍✨
𝗪𝗵𝘆 𝗱𝗶𝘀𝗰𝘂𝘀𝘀 𝗔𝗜 𝗨𝘁𝗼𝗽𝗶𝗮?
While many discussions around AI focus on risks, it's equally important to highlight its positive potential. The goal is to balance the narrative by focusing on best-case scenarios while acknowledging the importance of managing risks. It's about striving for an insp...
💡 𝗡𝗼𝗯𝗲𝗹 𝗣𝗿𝗶𝘇𝗲𝘀 - 𝗔𝗜 𝗛𝘆𝗽𝗲 𝗼𝗿 𝗴𝗹𝗶𝗺𝗽𝘀𝗲 𝗶𝗻𝘁𝗼 𝗦𝗶𝗻𝗴𝘂𝗹𝗮𝗿𝗶𝘁𝘆? 💡
One of the biggest moments from this year's Nobel announcements was AI's double win!
𝗡𝗼𝗯𝗲𝗹 𝗶𝗻 𝗣𝗵𝘆𝘀𝗶𝗰𝘀
Geoffrey Hinton and John Hopfield: awarded for their pioneering work on neural networks, integrating physics principles like energy-based models and statistical physics into machine learning.
𝗡𝗼𝗯𝗲𝗹 𝗶𝗻 𝗖𝗵𝗲𝗺𝗶𝘀𝘁𝗿𝘆
John Jumper and Demis Hassabis: recognized for AlphaFold, which merges AI, biology, and chemistry to predict protein structures, tran...
This episode covers OpenAI Dev Day updates and a 280-page research paper evaluating the o1 model.
Realtime API: build fast speech-to-speech experiences in applications.
Vision Fine-Tuning: fine-tune GPT-4 with images and text to enhance vision capabilities.
Prompt Caching: receive automatic discounts on inputs recently seen by the model.
Distillation: fine-tune cost-efficient models using outputs from larger models.
The research paper we discussed is "Evaluation of OpenAI o1: Opportunities and Ch...
Large language models (LLMs) excel at various tasks due to their vast training datasets, but their knowledge can be static and lack domain-specific nuance. Researchers have explored methods like fine-tuning and retrieval-augmented generation (RAG) to address these limitations. Fine-tuning involves adjusting a pre-trained model on a narrower dataset to enhance its performance in a specific domain. RAG, on the other hand, expands LLMs' capabilities, especially in knowledge-intensive tasks,...
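RAG's core move, as described above, is to retrieve relevant text at query time and prepend it to the prompt so the model answers from fresh, domain-specific context instead of static training data. Here is a miniature sketch using word overlap as the relevance score; a real system would use embeddings and a vector index, and the documents and function names are invented.

```python
# Miniature RAG: score documents against the query by word overlap,
# then splice the best match into the prompt as grounding context.

DOCS = [
    "The Eiffel Tower is in Paris.",
    "Photosynthesis converts light into chemical energy.",
]

def retrieve(query: str, docs=DOCS) -> str:
    """Return the document sharing the most words with the query."""
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return max(docs, key=overlap)

def rag_prompt(query: str) -> str:
    # The retrieved passage grounds the model's answer in external data.
    return f"Context: {retrieve(query)}\nQuestion: {query}"
```

The contrast with fine-tuning is visible even at this scale: updating the model's knowledge means editing `DOCS`, not retraining anything.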
This episode explores how Google researchers are tackling the issue of "hallucinations" in Large Language Models (LLMs) by connecting them to Data Commons, a vast repository of publicly available statistical data.
https://datacommons.org/
The researchers experiment with two techniques: Retrieval Interleaved Generation (RIG), where the LLM is trained to generate natural language queries to fetch data from Data Commons, and Retrieval Augmented Generation (RAG), where relevant data tables from Data...




