
Continual learning and refinement of causal models through dynamic predicate invention

#causal models #symbolic reasoning #predicate invention #Meta‑Interpretive Learning #continual learning #PPO baseline #sample efficiency #relational dynamics #world modeling #reinforcement learning #AI research #arXiv 2026

📌 Key Takeaways

  • Online symbolic construction of causal world models.
  • Continuous learning and repair integrated into the agent’s decision loop.
  • Application of Meta‑Interpretive Learning with dynamic predicate invention for reusable abstractions.
  • Generation of a hierarchical, disentangled set of high‑quality concepts.
  • Scalability to domains with complex relational dynamics where propositional approaches suffer combinatorial explosion.
  • Achieves sample efficiency orders of magnitude higher than an established PPO neural-network baseline.

📖 Full Retelling

WHO: Enrique Crespo-Fernandez, Oliver Ray, Telmo de Menezes e Silva Filho, and Peter Flach.
WHAT: They propose a framework that builds symbolic causal world models online by integrating continuous learning and repair into the agent's decision loop, using Meta-Interpretive Learning and dynamic predicate invention to create reusable abstractions and a hierarchy of disentangled, high-quality concepts.
WHERE: The work was published on the arXiv preprint server under cs.AI, identified as arXiv:2602.17217.
WHEN: It was submitted to arXiv on 19 February 2026.
WHY: To address the sample inefficiency, lack of transparency, and poor scalability of conventional world-modeling methods, enabling agents to internalize the logic of complex environments more efficiently.

🏷️ Themes

Artificial Intelligence, Causal Modeling, Symbolic Reasoning, Continual Learning, Dynamic Predicate Invention, Meta‑Interpretive Learning, Relational Learning, Sample Efficiency, Reinforcement Learning, Agent Decision Making

Deep Analysis

Why It Matters

This research offers a new way to build causal world models online, improving transparency and scalability while drastically reducing data needs. It could enable smarter agents that learn more efficiently in complex environments.

Context & Background

  • Continual learning seeks to adapt models over time
  • Symbolic causal models provide interpretable reasoning
  • Meta-Interpretive Learning enables rule induction
  • Predicate invention creates reusable abstractions
  • Sample efficiency is a key challenge for neural methods

What Happens Next

Future work will explore deploying the framework in robotics and autonomous systems, integrating it with existing reinforcement learning pipelines, and expanding the approach to larger relational domains.

Frequently Asked Questions

What is dynamic predicate invention?

It is a technique that automatically generates new predicates to capture relationships in data, enabling more expressive symbolic models.
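As a toy illustration only (not the paper's actual implementation), the idea can be sketched with the "chain" metarule used in Meta-Interpretive Learning, P(X,Y) :- Q(X,Z), R(Z,Y): the learner searches for a composition of known relations that covers the positive examples and excludes the negatives, and names the result as a new predicate. The family-tree facts and example sets below are hypothetical.

```python
# Toy sketch of predicate invention via the MIL "chain" metarule:
# P(X,Y) :- Q(X,Z), R(Z,Y). Relations are sets of ordered pairs.

def compose(q, r):
    """Chain two binary relations: {(x, y) | (x, z) in q and (z, y) in r}."""
    return {(x, y) for (x, z) in q for (z2, y) in r if z == z2}

# Background knowledge: a small family tree (hypothetical data).
parent = {("ann", "bob"), ("bob", "carl"), ("carl", "dana")}
background = {"parent": parent}

# Examples of an as-yet-unnamed target concept (here: grandparent).
pos = {("ann", "carl"), ("bob", "dana")}
neg = {("ann", "bob"), ("ann", "dana")}

def invent(background, pos, neg):
    """Return the first chain Q∘R consistent with the examples, or None."""
    for qname, q in background.items():
        for rname, r in background.items():
            candidate = compose(q, r)
            if pos <= candidate and not (candidate & neg):
                return (qname, rname, candidate)
    return None

q, r, extension = invent(background, pos, neg)
print(f"invented_1(X,Y) :- {q}(X,Z), {r}(Z,Y).")
# prints: invented_1(X,Y) :- parent(X,Z), parent(Z,Y).
```

The invented predicate (here, effectively "grandparent") can then be added to the background knowledge and reused when learning later concepts, which is what makes such abstractions compositional.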

How does this method improve sample efficiency?

By building high-level abstractions, the agent needs far fewer observations to learn causal dynamics than a flat neural network does.
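A back-of-the-envelope illustration (not a figure from the paper) of the related combinatorial-explosion point: a single lifted rule with variables stands in for every ground instantiation, whereas a propositional representation must enumerate them all. The object count below is an arbitrary assumption.

```python
# Illustrative only: one lifted rule vs. its propositional groundings.
# A binary relation over n objects has n*(n-1) ordered ground atoms,
# so a propositional learner handles each atom separately, while a
# single lifted rule with variables covers all of them at once.
n_objects = 50
ground_atoms = n_objects * (n_objects - 1)  # ordered pairs, X != Y
print(f"1 lifted rule vs. {ground_atoms} ground atoms")
# prints: 1 lifted rule vs. 2450 ground atoms
```

The gap grows quadratically with the number of objects for binary relations, and faster still for higher-arity relations, which is the scaling argument behind lifted inference.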

Will the code be publicly available?

The authors plan to release the implementation on a public repository after the paper is published.

How does it compare to PPO?

The approach achieves orders of magnitude higher sample efficiency while maintaining comparable or better performance on complex relational tasks.

Original Source
Computer Science > Artificial Intelligence
arXiv:2602.17217 [cs.AI] (Submitted on 19 Feb 2026)

Title: Continual learning and refinement of causal models through dynamic predicate invention
Authors: Enrique Crespo-Fernandez, Oliver Ray, Telmo de Menezes e Silva Filho, Peter Flach

Abstract: Efficiently navigating complex environments requires agents to internalize the underlying logic of their world, yet standard world modelling methods often struggle with sample inefficiency, lack of transparency, and poor scalability. We propose a framework for constructing symbolic causal world models entirely online by integrating continuous model learning and repair into the agent's decision loop, leveraging the power of Meta-Interpretive Learning and predicate invention to find semantically meaningful and reusable abstractions, allowing an agent to construct a hierarchy of disentangled, high-quality concepts from its observations. We demonstrate that our lifted inference approach scales to domains with complex relational dynamics, where propositional methods suffer from combinatorial explosion, while achieving sample efficiency orders of magnitude higher than the established PPO neural-network-based baseline.

Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2602.17217 [cs.AI] (arXiv:2602.17217v1 for this version); DOI: https://doi.org/10.48550/arXiv.2602.17217 (arXiv-issued DOI via DataCite, pending registration)
Submission history: [v1] Thu, 19 Feb 2026 10:08:31 UTC (989 KB), from Enrique Crespo Fernandez
Read full article at source

Source

arxiv.org
