LifeBench: A Benchmark for Long-Horizon Multi-Source Memory
arXiv:2603.03781v1 Announce Type: new
Original Source
Computer Science > Artificial Intelligence
arXiv:2603.03781 [Submitted on 4 Mar 2026]
Title: LifeBench: A Benchmark for Long-Horizon Multi-Source Memory
Authors: Zihao Cheng, Weixin Wang, Yu Zhao, Ziyang Ren, Jiaxuan Chen, Ruiyang Xu, Shuai Huang, Yang Chen, Guowei Li, Mengshi Wang, Yi Xie, Ren Zhu, Zeren Jiang, Keda Lu, Yihong Li, Xiaoliang Wang, Liwei Liu, Cam-Tu Nguyen
Abstract: Long-term memory is fundamental for personalized agents capable of accumulating knowledge, reasoning over user experiences, and adapting across time. However, existing memory benchmarks primarily target declarative memory, specifically semantic and episodic types, where all information is explicitly presented in dialogues. In contrast, real-world actions are also governed by non-declarative memory, including habitual and procedural types, which must be inferred from diverse digital traces. To bridge this gap, we introduce LifeBench, which features densely connected, long-horizon event simulation. It pushes AI agents beyond simple recall, requiring the integration of declarative and non-declarative memory reasoning across diverse and temporally extended contexts. Building such a benchmark presents two key challenges: ensuring data quality and scalability. We maintain data quality by employing real-world priors, including anonymized social surveys, map APIs, and holiday-integrated calendars, thus enforcing fidelity, diversity, and behavioral rationality within the dataset. Towards scalability, we draw inspiration from cognitive science and structure events according to their partonomic hierarchy, enabling efficient parallel generation while maintaining global coherence.
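The abstract's scalability idea, generating events top-down along a partonomic (part-whole) hierarchy so that sibling sub-events can be produced in parallel while each stays conditioned on its parent for coherence, can be illustrated with a minimal sketch. The class names, the `expand` function, and the placeholder child-naming scheme below are all hypothetical; in the actual benchmark each expansion step would presumably invoke a generator conditioned on the parent event.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field


@dataclass
class Event:
    """A node in a partonomic hierarchy: a whole composed of sub-events."""
    name: str
    children: list = field(default_factory=list)


def expand(event: Event, depth: int, branching: int = 2) -> Event:
    """Recursively decompose an event into sub-events.

    Siblings are generated in parallel (here via a thread pool); global
    coherence comes from conditioning each child on its parent's name,
    standing in for the parent's full context.
    """
    if depth == 0:
        return event
    # Hypothetical decomposition: a real system would generate children
    # from the parent context rather than enumerate placeholder names.
    children = [Event(f"{event.name}.{i}") for i in range(branching)]
    with ThreadPoolExecutor() as pool:
        event.children = list(
            pool.map(lambda c: expand(c, depth - 1, branching), children)
        )
    return event


# Example: a "day" decomposed two levels deep into 4 leaf actions.
root = expand(Event("day"), depth=2)
```

The point of the sketch is the scheduling shape: once a parent is fixed, its children are independent generation tasks, so the tree fans out in parallel without sacrificing the parent-to-child coherence constraint.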
Performance results show that even top-tier memory systems reach just 55.2% accuracy, highlighting the inherent difficulty of long-horizon retrieval ...