Building Semantic Memory for AI With Cognee
Update: 2024-11-25
Description
Summary
In this episode of the AI Engineering Podcast, Vasilije Markovic talks about enhancing Large Language Models (LLMs) with memory to improve their accuracy. He discusses what memory means for LLMs, which centers on managing context windows to enhance reasoning without the high costs of traditional training methods. He explains how context window limitations cause LLMs to "forget", and introduces hierarchical memory, which balances immediate retrieval against long-term information storage to improve application performance. Vasilije also shares his work on Cognee, a tool he is developing to manage semantic memory in AI systems, and discusses its potential applications beyond its core use case. He emphasizes the importance of combining cognitive science principles with data engineering to push the boundaries of AI capabilities, and shares his vision for the future of AI systems, highlighting the role of personalization and the ongoing development of Cognee to support evolving AI architectures.
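For a concrete picture of what a semantic memory layer like this looks like in practice, the sketch below follows the add/cognify/search workflow from Cognee's published quickstart. Treat it as a hedged illustration rather than the project's canonical API: the function names (`cognee.add`, `cognee.cognify`, `cognee.search`) come from the documentation, but signatures and search options vary between releases, and an LLM API key (e.g. OPENAI_API_KEY) is assumed to be configured in the environment.

```python
# Minimal sketch of a semantic memory layer built with the cognee Python package.
# Assumes the add -> cognify -> search workflow from the project's quickstart;
# exact signatures and search types may differ between cognee releases.
import asyncio

import cognee


async def main():
    # 1. Ingest raw content (documents, chat transcripts, notes) into the store.
    await cognee.add("Vasilije discussed hierarchical memory for LLM applications.")

    # 2. "Cognify" the data: extract entities and relationships and build the
    #    knowledge graph and embeddings that act as long-term semantic memory.
    await cognee.cognify()

    # 3. Query the memory and hand the results to an LLM as retrieval context.
    #    Some releases also accept an explicit query/search type argument here.
    results = await cognee.search(query_text="What kind of memory was discussed?")
    for result in results:
        print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

The point of the sketch is the shape of the workflow: ingestion, graph construction, and retrieval are separate steps, which is what allows the memory to persist and evolve across conversations rather than living only inside a single context window.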
Announcements
- Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
- Your host is Tobias Macey and today I'm interviewing Vasilije Markovic about adding memory to LLMs to improve their accuracy
- Introduction
- How did you get involved in machine learning?
- Can you describe what "memory" is in the context of LLM systems?
- What are the symptoms of "forgetting" that manifest when interacting with LLMs?
- How do these issues manifest in single-turn vs. multi-turn interactions?
- How does the lack of hierarchical and evolving memory limit the capabilities of LLM systems?
- What are the technical/architectural requirements to add memory to an LLM system/application?
- How does Cognee help to address the shortcomings of current LLM/RAG architectures?
- Can you describe how Cognee is implemented?
- Recognizing that it has only existed for a short time, how have the design and scope of Cognee evolved since you first started working on it?
- What are the data structures that are most useful for managing the memory structures?
- For someone who wants to incorporate Cognee into their LLM architecture, what is involved in integrating it into their applications?
- How does it change the way that you think about the overall requirements for an LLM application?
- For systems that interact with multiple LLMs, how does Cognee manage context across those systems? (e.g. different agents for different use cases)
- There are other systems being built to manage user personalization in LLM applications. How do the goals of Cognee relate to those use cases? (e.g. Mem0 - https://github.com/mem0ai/mem0)
- What are the unknowns that you are still navigating with Cognee?
- What are the most interesting, innovative, or unexpected ways that you have seen Cognee used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Cognee?
- When is Cognee the wrong choice?
- What do you have planned for the future of Cognee?
Parting Question
- From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
- Cognee
- Montenegro
- Catastrophic Forgetting
- Multi-Turn Interaction
- RAG == Retrieval Augmented Generation
- GraphRAG
- Long-term memory
- Short-term memory
- LangChain
- LlamaIndex
- Haystack
- dlt
- Pinecone
- Agentic RAG
- Airflow
- DAG == Directed Acyclic Graph
- FalkorDB
- Neo4j
- Pydantic
- AWS ECS
- AWS SNS
- AWS SQS
- AWS Lambda
- LLM As Judge
- Mem0
- Qdrant
- LanceDB
- DuckDB