BravenNow
A Mathematical Theory of Agency and Intelligence


#bipredictability #agency #intelligence #artificial-intelligence #machine-learning #feedback-systems #information-theory #quantum-systems

📌 Key Takeaways

  • Researchers developed a mathematical theory distinguishing agency from intelligence in AI
  • Bipredictability (P) measures information sharing between observations, actions, and outcomes
  • Current AI systems achieve agency but not true intelligence according to the new definition
  • A feedback architecture inspired by biological systems enables real-time monitoring of learning effectiveness

📖 Full Retelling

On February 26, 2026, a team led by Wael Hafez, with Chenan Wei, Rodrigo Felipe, Amir Nazeri, and Cameron Reid, published a mathematical theory distinguishing agency from intelligence in AI systems, addressing fundamental limitations in how such systems interact with their environments. The work introduces 'bipredictability' (P), a measure quantifying how much of the total information a system deploys is actually shared between its observations, actions, and outcomes.

While current AI systems process vast amounts of data to produce sophisticated predictions, they often lack feedback on how effectively they use resources, so their interaction with the environment can degrade even as predictions appear successful. The researchers prove that P is strictly bounded: it can reach unity in quantum systems, is at most 0.5 in classical systems, and decreases further once agency (action selection) is introduced. These bounds were confirmed experimentally in a physical system (a double pendulum), in reinforcement learning agents, and in multi-turn conversations with large language models.

The results distinguish agency, the capacity to act on predictions, from intelligence, which additionally requires learning from interaction, self-monitoring of learning effectiveness, and adapting the scope of observations, actions, and outcomes to maintain effective learning. By this definition, current AI systems achieve agency but not intelligence. Inspired by thalamocortical regulation in biological systems, the team demonstrates a feedback architecture that monitors P in real time, a prerequisite for more adaptive and resilient AI.
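The stated bounds are suggestive of an information-theoretic normalization: classically, mutual information I(X;Y) can never exceed min(H(X), H(Y)), so the ratio I(X;Y) / (H(X) + H(Y)) is capped at 0.5, while for an entangled pure state the quantum mutual information equals S(A) + S(B), letting the ratio reach 1. The paper's exact estimator for P (defined over observations, actions, and outcomes jointly) is not reproduced in this summary; the sketch below is only an illustrative two-variable proxy, and `shared_information_fraction` is a name chosen here, not the authors' code.

```python
import numpy as np

def shared_information_fraction(x, y, bins=16):
    """Estimate I(X;Y) / (H(X) + H(Y)) for two discretized signals.

    Classically I(X;Y) <= min(H(X), H(Y)), so this ratio cannot exceed
    0.5; identical signals hit the cap exactly. Illustrative proxy only,
    not the paper's three-way (observation, action, outcome) measure.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y, shape (1, bins)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))   # I(X;Y), nats
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))            # H(X)
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))            # H(Y)
    return float(mi / (hx + hy)) if hx + hy > 0 else 0.0

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
print(shared_information_fraction(x, x))                        # ≈ 0.5 (the classical cap)
print(shared_information_fraction(x, rng.normal(size=10_000)))  # ≈ 0 (independent signals)
```

The 0.5 ceiling for the identical-signal case mirrors the paper's classical bound; whether this normalization matches the paper's definition of P is an assumption made for illustration.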

🏷️ Themes

Artificial Intelligence, Mathematical Theory, Agency Intelligence


Original Source
Computer Science > Artificial Intelligence
arXiv:2602.22519 [Submitted on 26 Feb 2026]
Title: A Mathematical Theory of Agency and Intelligence
Authors: Wael Hafez, Chenan Wei, Rodrigo Felipe, Amir Nazeri, Cameron Reid
Abstract: To operate reliably under changing conditions, complex systems require feedback on how effectively they use resources, not just whether objectives are met. Current AI systems process vast information to produce sophisticated predictions, yet predictions can appear successful while the underlying interaction with the environment degrades. What is missing is a principled measure of how much of the total information a system deploys is actually shared between its observations, actions, and outcomes. We prove this shared fraction, which we term bipredictability, P, is intrinsic to any interaction, derivable from first principles, and strictly bounded: P can reach unity in quantum systems, is at most 0.5 in classical systems, and is lower once agency (action selection) is introduced. We confirm these bounds in a physical system (double pendulum), reinforcement learning agents, and multi-turn LLM conversations. These results distinguish agency from intelligence: agency is the capacity to act on predictions, whereas intelligence additionally requires learning from interaction, self-monitoring of its learning effectiveness, and adapting the scope of observations, actions, and outcomes to restore effective learning. By this definition, current AI systems achieve agency but not intelligence. Inspired by thalamocortical regulation in biological systems, we demonstrate a feedback architecture that monitors P in real time, establishing a prerequisite for adaptive, resilient AI.
Comments: 20 pages, 4 figures
Subjects: Artificial Intelligence (cs.AI); Information Theory (cs.IT)
Cite as: arXiv:2602.22519 [cs.AI]
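The abstract's closing claim, a feedback architecture that monitors P in real time, can be pictured as a control loop: act, keep a sliding window of interaction triples, re-estimate P over the window, and adapt the scope of observations and actions when P degrades. The sketch below is speculative and not the authors' implementation: `agent_step`, `estimate_P`, and `adapt_scope` are placeholder callables, and the adaptation rule (clear the window and re-measure when P falls below a floor) is an assumption.

```python
from collections import deque

def run_with_p_monitor(agent_step, estimate_P, adapt_scope,
                       steps=1000, window=200, p_floor=0.1):
    """Speculative control loop for real-time bipredictability monitoring.

    agent_step(t) -> (observation, action, outcome) triple for step t.
    estimate_P(history) -> a bipredictability proxy over the window.
    adapt_scope() -> hook that widens/narrows observations and actions.
    Returns the trace of P estimates for inspection.
    """
    history = deque(maxlen=window)
    p_trace = []
    for t in range(steps):
        obs, action, outcome = agent_step(t)
        history.append((obs, action, outcome))
        if len(history) == window:
            p = estimate_P(list(history))
            p_trace.append(p)
            if p < p_floor:        # learning effectiveness is degrading
                adapt_scope()      # e.g. change what the agent observes
                history.clear()    # re-measure under the adapted scope
    return p_trace
```

With a degenerate `estimate_P` that always reports a sub-floor value, the loop triggers one adaptation per filled window, which is the behavior the thalamocortical analogy in the paper gestures at: a regulator that intervenes when shared information collapses.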

Source

arxiv.org
