AI Blindspot

Author: Yogendra Miraje


Description

AI Blindspot is a podcast that explores the uncharted territories of AI, focusing on its cutting-edge research and frontiers.

This podcast is for researchers, developers, curious minds, and anyone fascinated by the quest to close the gap between human intelligence and machines.

As AI advances at breakneck speed, it has become increasingly difficult to keep up with the progress. This is a human-in-the-loop, AI-hosted podcast.

8 Episodes
AI Agent Architecture

2024-11-04 · 13:33

In this episode, we discuss the following agent architectures:
ReAct (Reason + Act): A method that alternates reasoning and actions, creating a powerful feedback loop for decision-making.
Plan and Execute: Breaks tasks into smaller steps before executing them sequentially, improving reasoning accuracy and efficiency. However, it may face higher latency due to the lack of parallel processing.
ReWOO: Separates reasoning from observations, improving efficiency, reducing token consumption, and maki...
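The ReAct loop described above can be sketched in a few lines. This is a minimal illustration, not any library's API: the `llm` callable, the `Action: tool|input` line format, and the `calculator` tool are all assumptions made for the sketch.

```python
# Minimal sketch of a ReAct-style agent loop. The model emits "Action:" or
# "Final Answer:" lines; observations from tool calls are fed back into the
# transcript, forming the reason -> act -> observe feedback loop.

def calculator(expression: str) -> str:
    """Toy tool the agent can invoke (never eval untrusted input in real code)."""
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def react_agent(question: str, llm, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)          # model sees the full history so far
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if step.startswith("Action:"):
            name, _, arg = step.removeprefix("Action:").strip().partition("|")
            observation = TOOLS[name.strip()](arg.strip())
            transcript += f"Observation: {observation}\n"  # closes the loop
    return "No answer within step budget"
```

In a real system the `llm` callable would wrap a model API and the tool registry would include search, code execution, and so on; the control flow stays the same.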
AI Agents

2024-10-29 · 14:38

🤖 AI Agents Uncovered! 🤖
In our latest episode, we're diving deep into the fascinating world of AI agents, focusing specifically on agents powered by Large Language Models (LLMs). These agents are shaping how AI systems can perceive, decide, and act, bringing us closer to the vision of highly adaptable, intelligent assistants.
Key Highlights:
AI agents started in philosophy before migrating to computer science and AI. From simple task-specific tools to adaptable LLM-powered agents, their evoluti...
AI Utopia

2024-10-20 · 17:05

Dario Amodei's essay, "Machines of Loving Grace," envisions the upside of AI if everything goes right. Could we be on the verge of an AI utopia where technology radically improves the world? Let's find out! 🌍✨
Why discuss AI Utopia?
While many discussions around AI focus on risks, it's equally important to highlight its positive potential. The goal is to balance the narrative by focusing on best-case scenarios while acknowledging the importance of managing risks. It's about striving for an insp...
💡 Nobel Prizes - AI Hype or a Glimpse into Singularity? 💡
One of the biggest moments from this year's Nobel announcements was AI's double win!
Nobel in Physics
Geoffrey Hinton and John Hopfield: Awarded for their pioneering work on neural networks, integrating physics principles like energy-based models and statistical physics into machine learning.
Nobel in Chemistry
John Jumper and Demis Hassabis: Recognized for AlphaFold, which merges AI, biology, and chemistry to predict protein structures, tran...
This episode covers OpenAI Dev Day updates and a 280-page research paper evaluating the o1 model.
Realtime API: Build fast speech-to-speech experiences in applications.
Vision Fine-Tuning: Fine-tune GPT-4 with images and text to enhance vision capabilities.
Prompt Caching: Receive automatic discounts on inputs recently seen by the model.
Distillation: Fine-tune cost-efficient models using outputs from larger models.
The research paper we discussed is "Evaluation of OpenAI o1: Opportunities and Ch...
Finetuning vs RAG

2024-09-30 · 09:05

Large language models (LLMs) excel at various tasks due to their vast training datasets, but their knowledge can be static and lack domain-specific nuance. Researchers have explored methods like fine-tuning and retrieval-augmented generation (RAG) to address these limitations. Fine-tuning involves adjusting a pre-trained model on a narrower dataset to enhance its performance in a specific domain. RAG, on the other hand, expands LLMs' capabilities, especially in knowledge-intensive tasks,...
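The RAG idea described here can be sketched as a two-step pipeline: retrieve a relevant document, then prepend it to the prompt as grounding context. This is an illustrative sketch only; it uses simple word-overlap scoring instead of real embeddings so it stays self-contained, and the function names are made up.

```python
# Toy retrieval-augmented generation: pick the corpus document that shares
# the most words with the question, then build a grounded prompt. A real
# system would use embedding similarity and a vector store instead.

def retrieve(query: str, corpus: list[str]) -> str:
    query_words = set(query.lower().split())
    # Score each document by word overlap with the query.
    return max(corpus, key=lambda doc: len(query_words & set(doc.lower().split())))

def rag_prompt(question: str, corpus: list[str]) -> str:
    context = retrieve(question, corpus)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"
```

The contrast with fine-tuning is visible in the code: nothing about the model changes; fresh or domain-specific knowledge is injected at inference time through the prompt.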
This episode explores how Google researchers are tackling the issue of "hallucinations" in Large Language Models (LLMs) by connecting them to Data Commons, a vast repository of publicly available statistical data: https://datacommons.org/
The researchers experiment with two techniques: Retrieval Interleaved Generation (RIG), where the LLM is trained to generate natural language queries to fetch data from Data Commons, and Retrieval Augmented Generation (RAG), where relevant data tables from Data...
This episode discusses some of the most useful prompt engineering techniques and practical tips. Prompt engineering is a technique used to craft natural language prompts that effectively extract knowledge from large language models (LLMs). Unlike fine-tuning, prompt engineering does not require extensive retraining of the model. Instead, it leverages the vast embedded knowledge of LLMs, making it easier for researchers to interact with these models using natural language instruc...
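One of the techniques the episode refers to, few-shot prompting, can be shown in a tiny sketch: the model is steered with in-context examples rather than any retraining. The task, examples, and helper name below are made up for illustration.

```python
# Few-shot prompt template: two labeled examples teach the model the task
# and the output format; the final line leaves a slot for the new input.
FEW_SHOT = """Classify the sentiment as Positive or Negative.

Review: The battery lasts all day. -> Positive
Review: It broke after a week. -> Negative
Review: {review} ->"""

def build_prompt(review: str) -> str:
    """Fill the template; the resulting string is sent to an LLM as-is."""
    return FEW_SHOT.format(review=review)
```

No weights change anywhere in this process, which is exactly the contrast with fine-tuning drawn in the episode.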