Bounded State in an Infinite Horizon: Proactive Hierarchical Memory for Ad-Hoc Recall over Streaming Dialogues
#hierarchical memory #streaming dialogues #ad-hoc recall #bounded state #conversation AI #information retrieval #proactive systems
📌 Key Takeaways
- Researchers introduce STEM-Bench, the first benchmark for streaming evaluation of memory, with over 14K QA pairs
- They propose ProStream, a proactive hierarchical memory framework that enables ad-hoc recall of past conversation details while the stream unfolds
- The approach maintains a bounded knowledge state over an effectively infinite dialogue stream, addressing a fidelity-efficiency dilemma
- Hierarchical, multi-granular organization and utility-based retention improve retrieval accuracy while lowering inference latency
📖 Full Retelling
arXiv:2603.04885v1 Announce Type: new
Abstract: Real-world dialogue usually unfolds as an infinite stream. It thus requires bounded-state memory mechanisms to operate within an infinite horizon. However, existing read-then-think memory is fundamentally misaligned with this setting, as it cannot support ad-hoc memory recall while streams unfold. To explore this challenge, we introduce STEM-Bench, the first benchmark for STreaming Evaluation of Memory. It comprises over 14K QA pairs in dialogue streams that assess perception fidelity, temporal reasoning, and global awareness under infinite-horizon constraints. Preliminary analysis on STEM-Bench reveals a critical fidelity-efficiency dilemma: retrieval-based methods use fragmented context, while full-context models incur unbounded latency. To resolve this, we propose ProStream, a proactive hierarchical memory framework for streaming dialogues. It enables ad-hoc memory recall on demand by reasoning over continuous streams with multi-granular distillation, and it employs Adaptive Spatiotemporal Optimization to dynamically optimize retention based on expected utility, yielding a bounded knowledge state with lower inference latency without sacrificing reasoning fidelity. Experiments show that ProStream outperforms baselines in both accuracy and efficiency.
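The core idea, a bounded memory over an unbounded stream whose retention is driven by expected utility, can be illustrated with a small sketch. This is not the paper's ProStream algorithm (the abstract does not specify its internals); the class, the capacity limit, and the recency-plus-frequency utility score below are all illustrative assumptions.

```python
import math

class BoundedMemory:
    """Toy bounded-state memory: at most `capacity` distilled summaries
    of a dialogue stream survive; the rest are evicted by expected utility.
    Illustrative only -- not the ProStream implementation."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = []  # each entry: {"text", "last_turn", "hits"}

    def _utility(self, entry, now):
        # Assumed utility: recall frequency, exponentially decayed by recency.
        recency = math.exp(-0.1 * (now - entry["last_turn"]))
        return (1 + entry["hits"]) * recency

    def write(self, text, turn):
        self.entries.append({"text": text, "last_turn": turn, "hits": 0})
        if len(self.entries) > self.capacity:
            # Evict the lowest-utility entry so state stays bounded.
            worst = min(self.entries, key=lambda e: self._utility(e, turn))
            self.entries.remove(worst)

    def recall(self, keyword, turn):
        # Ad-hoc recall mid-stream; recalled entries gain future utility.
        hits = [e for e in self.entries if keyword in e["text"]]
        for e in hits:
            e["hits"] += 1
            e["last_turn"] = turn
        return [e["text"] for e in hits]

mem = BoundedMemory(capacity=2)
mem.write("user prefers vegetarian food", turn=1)
mem.write("meeting moved to Friday", turn=2)
mem.recall("vegetarian", turn=3)            # recall keeps this entry useful
mem.write("flight departs at 9am", turn=4)  # forces eviction of the least useful
print([e["text"] for e in mem.entries])
# → ['user prefers vegetarian food', 'flight departs at 9am']
```

The sketch shows why utility-based retention differs from a plain sliding window: the vegetarian-preference entry is older than the meeting note, yet it survives eviction because it was recalled mid-stream.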
🏷️ Themes
AI Memory, Conversation Systems
Original Source
Computer Science > Artificial Intelligence
arXiv:2603.04885 [Submitted on 5 Mar 2026]
Title: Bounded State in an Infinite Horizon: Proactive Hierarchical Memory for Ad-Hoc Recall over Streaming Dialogues
Authors: Bingbing Wang, Jing Li, Ruifeng Xu
Abstract: Real-world dialogue usually unfolds as an infinite stream. It thus requires bounded-state memory mechanisms to operate within an infinite horizon. However, existing read-then-think memory is fundamentally misaligned with this setting, as it cannot support ad-hoc memory recall while streams unfold. To explore this challenge, we introduce STEM-Bench, the first benchmark for STreaming Evaluation of Memory. It comprises over 14K QA pairs in dialogue streams that assess perception fidelity, temporal reasoning, and global awareness under infinite-horizon constraints. Preliminary analysis on STEM-Bench reveals a critical fidelity-efficiency dilemma: retrieval-based methods use fragmented context, while full-context models incur unbounded latency. To resolve this, we propose ProStream, a proactive hierarchical memory framework for streaming dialogues. It enables ad-hoc memory recall on demand by reasoning over continuous streams with multi-granular distillation, and it employs Adaptive Spatiotemporal Optimization to dynamically optimize retention based on expected utility, yielding a bounded knowledge state with lower inference latency without sacrificing reasoning fidelity. Experiments show that ProStream outperforms baselines in both accuracy and efficiency.
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.04885 [cs.AI] (or arXiv:2603.04885v1 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.04885