Proposes GNNs that correspond exactly to fragments of first‑order logic.
Utilizes finite model theory techniques to relate GNN behavior to logical expressiveness.
Covers logics ranging from various modal logics to the more expressive two‑variable fragments of first‑order logic.
Final version spans 21 pages and follows multiple revisions culminating in v4 on 19 February 2026.
Builds on two core strengths of GNNs: handling input graphs of varying size and guaranteeing invariance under graph isomorphism.
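The invariance property above can be illustrated with a minimal sketch (all function and variable names here are illustrative, not from the paper): a message-passing layer that sums neighbor features commutes with any relabeling of the nodes, so isomorphic graphs receive identical embeddings up to that relabeling.

```python
# Minimal sketch: why message-passing GNNs are isomorphism-invariant.
# A layer that sums over neighbors is order-agnostic, so relabeling the
# nodes relabels the outputs in exactly the same way.

def mp_layer(adj, feats):
    """One message-passing round: a node's new feature is its own
    feature plus the sum of its neighbors' features."""
    return {v: feats[v] + sum(feats[u] for u in adj[v]) for v in adj}

# A triangle (0-1-2) with a pendant node 3 attached to node 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feats = {0: 1, 1: 1, 2: 1, 3: 0}

# Apply an isomorphism (node relabeling) to the same graph.
perm = {0: 3, 1: 2, 2: 1, 3: 0}
adj_p = {perm[v]: [perm[u] for u in nbrs] for v, nbrs in adj.items()}
feats_p = {perm[v]: f for v, f in feats.items()}

out = mp_layer(adj, feats)
out_p = mp_layer(adj_p, feats_p)
# Embeddings of the relabeled graph match under the same relabeling:
assert all(out_p[perm[v]] == out[v] for v in adj)
```

Because the aggregation (here, a sum) is a multiset operation, no node ordering ever enters the computation, which is exactly the invariance the paper's bullet refers to.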
📖 Full Retelling
WHO: Bernardo Cuenca Grau, Eva Feng, and Przemysław Andrzej Wałęga.
WHAT: The paper titled "The Correspondence Between Bounded Graph Neural Networks and Fragments of First‑Order Logic".
WHERE: Published on arXiv under the Computer Science > Artificial Intelligence category.
WHEN: First submitted on 12 May 2025 (v1) and last revised on 19 February 2026 (v4).
WHY: The authors aim to design Graph Neural Network (GNN) architectures that precisely mirror prominent fragments of first‑order logic—including modal logics and two‑variable fragments—thereby establishing a rigorous, unified framework for understanding GNN expressiveness through the lens of finite model theory.
🏷️ Themes
Graph Neural Networks, First‑Order Logic, Finite Model Theory, Modal Logic, Logical Expressiveness of Machine Learning Models
Deep Analysis
Why It Matters
This paper links graph neural network expressiveness to fragments of first‑order logic, clarifying what patterns GNNs can learn. It offers a theoretical foundation that can guide the design of more powerful and interpretable graph models.
The expressive power of GNNs is still not fully understood.
First‑order logic fragments provide a formal language for specifying graph properties.
Finite model theory connects logic fragments to algorithmic expressiveness.
The study proposes GNN architectures that match specific logic fragments.
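The flavor of this correspondence can be sketched in a few lines (this is an illustrative encoding, not the paper's construction): the modal formula ◇P ("some neighbor satisfies P") is captured by a single message-passing round with max aggregation, and stacking layers tracks modal depth.

```python
# Hedged sketch of the GNN / modal-logic correspondence: one
# message-passing round with max aggregation evaluates the modal
# diamond <>P ("some neighbor satisfies P") at every node.
# Names and the encoding are illustrative assumptions.

def diamond(adj, p):
    """Evaluate <>P at every node: 1 iff some neighbor has p == 1."""
    return {v: max((p[u] for u in adj[v]), default=0) for v in adj}

# Path graph 0-1-2-3; the atomic property P holds only at node 0.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
p = {0: 1, 1: 0, 2: 0, 3: 0}

print(diamond(adj, p))                 # <>P holds exactly at node 1
print(diamond(adj, diamond(adj, p)))   # <><>P: two layers, modal depth 2
```

Here one GNN layer per modal operator suffices, mirroring the general pattern the paper formalizes: bounded architectures correspond to logic fragments of bounded quantifier or modal depth.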
What Happens Next
Researchers may use the correspondence to benchmark GNNs against logical expressiveness limits. It could inspire new architectures that target desired logical properties and help explain failures in graph learning tasks.
Frequently Asked Questions
What is the main contribution of the paper?
It establishes a precise mapping between bounded GNN architectures and fragments of first‑order logic, including modal logics and two‑variable fragments.
How can this mapping be used in practice?
It allows practitioners to choose or design GNNs that are guaranteed to capture specific logical properties needed for a task, improving interpretability and performance.
Does the paper provide implementation details?
The paper focuses on theoretical results; implementation guidance is limited, but the authors discuss potential architectural templates.
What are the future research directions?
Extending the correspondence to deeper GNN layers, other logic fragments, and empirical validation on real graph datasets.
Original Source
Computer Science > Artificial Intelligence — arXiv:2505.08021
[Submitted on 12 May 2025 (v1), last revised 19 Feb 2026 (this version, v4)]

Title: The Correspondence Between Bounded Graph Neural Networks and Fragments of First-Order Logic
Authors: Bernardo Cuenca Grau, Eva Feng, Przemysław Andrzej Wałęga

Abstract: Graph Neural Networks address two key challenges in applying deep learning to graph-structured data: they handle varying size input graphs and ensure invariance under graph isomorphism. While GNNs have demonstrated broad applicability, understanding their expressive power remains an important question. In this paper, we propose GNN architectures that correspond precisely to prominent fragments of first-order logic, including various modal logics as well as more expressive two-variable fragments. To establish these results, we apply methods from finite model theory of first-order and modal logics to the domain of graph representation learning. Our results provide a unifying framework for understanding the logical expressiveness of GNNs within FO.

Comments: 21 pages
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2505.08021 [cs.AI] (or arXiv:2505.08021v4 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2505.08021

Submission history (from: Eva Feng):
[v1] Mon, 12 May 2025 19:45:45 UTC (51 KB)
[v2] Sat, 26 Jul 2025 12:21:36 UTC (60 KB)
[v3] Mon, 17 Nov 2025 10:29:25 UTC (871 KB)
[v4] Thu, 19 Feb 2026 17:38:16 UTC (871 KB)