NeurIPS 2025: A-MEM: Agentic Memory for LLM Agents
Description
The source details the creation and evaluation of Agentic Memory (A-MEM), a novel memory system for Large Language Model (LLM) agents that addresses the fundamental rigidity of existing memory architectures. Traditional systems require predefined data structures and fixed operational workflows, which severely limits their ability to adapt to new information and maintain performance in complex, long-term tasks. A-MEM overcomes this by drawing inspiration from the Zettelkasten method, employing dynamic note construction, autonomous link generation, and memory evolution to create a self-organizing knowledge base. Experimental results on long-term dialogue datasets demonstrate that A-MEM significantly outperforms baseline methods across diverse question categories, particularly in challenging multi-hop reasoning tasks. The system is also shown to be highly efficient and scalable, requiring substantially fewer tokens for operation and maintaining minimal increases in retrieval time as the memory scale grows. These architectural advancements allow LLM agents to maintain meaningful, continuously evolving knowledge structures essential for sophisticated interaction with the environment.
Source:
https://openreview.net/pdf?id=FiM0M8gcct
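To make the workflow described above concrete, the following is a minimal, self-contained Python sketch of the loop the summary outlines: construct a note with derived attributes, link it to its most similar existing notes, and let each new note update (evolve) the context of its neighbors. All names here (MemoryNote, AgenticMemory, the hashed bag-of-words embed stub, the thresholds) are hypothetical stand-ins introduced for illustration only; the paper itself uses an LLM to generate note attributes and drive evolution, plus a learned text embedder, so treat this as a sketch of the idea rather than the authors' implementation.

# Hypothetical sketch of an A-MEM-style loop: note construction, similarity-based
# link generation, and evolution of neighboring notes. Illustrative only; not the
# authors' code. A real system would use an LLM for attributes and a trained embedder.

import math
import time
from dataclasses import dataclass, field
from typing import Dict, List


def embed(text: str, dim: int = 64) -> List[float]:
    """Placeholder embedding: hashed bag-of-words, L2-normalized."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: List[float], b: List[float]) -> float:
    return sum(x * y for x, y in zip(a, b))


@dataclass
class MemoryNote:
    note_id: int
    content: str
    keywords: List[str]          # an LLM would generate these in practice
    context: str                 # evolving one-line summary of the note
    embedding: List[float]
    links: List[int] = field(default_factory=list)
    created_at: float = field(default_factory=time.time)


class AgenticMemory:
    """Toy self-organizing memory: each new note links to and updates its neighbors."""

    def __init__(self, link_threshold: float = 0.3, top_k: int = 3):
        self.notes: Dict[int, MemoryNote] = {}
        self.link_threshold = link_threshold
        self.top_k = top_k

    def add(self, content: str) -> MemoryNote:
        # 1) Note construction: derive attributes (naive keyword extraction here).
        note = MemoryNote(
            note_id=len(self.notes),
            content=content,
            keywords=[w for w in content.lower().split() if len(w) > 4][:5],
            context=content[:80],
            embedding=embed(content),
        )
        # 2) Link generation: connect to the most similar existing notes.
        scored = sorted(
            ((cosine(note.embedding, other.embedding), other) for other in self.notes.values()),
            key=lambda pair: pair[0],
            reverse=True,
        )
        for score, other in scored[: self.top_k]:
            if score >= self.link_threshold:
                note.links.append(other.note_id)
                other.links.append(note.note_id)
                # 3) Memory evolution: refresh the neighbor's context with the new note.
                other.context += f" | related: {' '.join(note.keywords)}"
        self.notes[note.note_id] = note
        return note

    def retrieve(self, query: str, k: int = 2) -> List[MemoryNote]:
        q = embed(query)
        ranked = sorted(self.notes.values(), key=lambda n: cosine(q, n.embedding), reverse=True)
        return ranked[:k]


if __name__ == "__main__":
    mem = AgenticMemory()
    mem.add("Alice moved to Berlin last spring to start a robotics company.")
    mem.add("The robotics company in Berlin hired its first ten engineers.")
    for note in mem.retrieve("Where did Alice start her company?"):
        print(note.note_id, note.context, note.links)

In this toy version, retrieval is a plain cosine-similarity lookup over note embeddings; the bidirectional links and evolving contexts are what let related memories surface together, which is the property the summary credits for the strong multi-hop reasoning results.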
