Oversmoothing, Oversquashing, Heterophily, Long-Range, and more: Demystifying Common Beliefs in Graph Machine Learning


#Graph Neural Networks #Oversmoothing #Oversquashing #Heterophily #Homophily #Long‑Range Tasks #Message Passing #Counterexample #ICLR 2026 #arXiv

📌 Key Takeaways

  • The paper challenges commonly accepted universal statements about oversmoothing and oversquashing in graph neural networks.
  • It critiques the common homophily–heterophily dichotomy, showing it is sometimes conflated with other phenomena.
  • The authors refute such blanket statements, including claims about long‑range dependencies, with simple yet formally sufficient counterexamples.
  • They argue that these ambiguities hinder the formulation of precise research questions in the graph ML community.
  • The work aims to promote rigorous thinking and clarity by explicitly articulating the conceptual differences behind these common beliefs.
  • The work is positioned within a renaissance of message‑passing research, which is now shifting toward deeper theoretical and practical scrutiny of its benefits and limitations.
  • The final goal is a more targeted and transparent research agenda for graph machine learning.

📖 Full Retelling

WHO: Adrian Arnaiz‑Rodriguez and Federico Errica, both machine learning researchers. WHAT: They authored a paper titled *Oversmoothing, Oversquashing, Heterophily, Long‑Range, and more: Demystifying Common Beliefs in Graph Machine Learning*. WHERE: The manuscript was submitted to the arXiv repository and accepted at the International Conference on Learning Representations (ICLR 2026). WHEN: The first version was posted on 21 May 2025 and revised on 14 Jun 2025; the final version (v3) was released on 19 Feb 2026. WHY: The authors argue that prevailing universal statements about key challenges in graph neural networks (oversmoothing, oversquashing, the homophily‑heterophily dichotomy, and long‑range tasks) are often misleading or conflated with one another, obscuring research focus and clarity. They present formally sufficient counterexamples that refute these blanket claims and encourage a more precise, critical approach in future investigations. By dissecting these misconceptions with concrete counterexamples, the paper frames a clearer research agenda for graph machine learning, a field currently experiencing rapid methodological innovation and conceptual debate.

🏷️ Themes

Graph Neural Networks, Message-Passing Limitations, Oversmoothing, Oversquashing, Heterophily vs. Homophily, Long‑Range Dependencies, Scientific Rigor in AI, Misconception Critique


Deep Analysis

Why It Matters

The paper clarifies misconceptions about oversmoothing, oversquashing, heterophily, and long‑range effects in graph neural networks, helping researchers avoid false assumptions. By exposing counterexamples, it encourages more precise problem formulation and robust algorithm design.

Context & Background

  • Graph neural networks rely on message passing
  • Common beliefs about oversmoothing and oversquashing have guided research
  • Misunderstandings hinder progress in graph learning
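
To ground the first bullet above, here is a minimal sketch of one message-passing step using mean aggregation over neighbours. This is an illustrative toy, not code from the paper; the function name and the toy graph are our own choices.

```python
import numpy as np

# One message-passing step: each node's new feature vector is the
# mean of its neighbours' feature vectors (mean aggregation).
def message_passing_step(adj, features):
    """adj: (n, n) binary adjacency matrix; features: (n, d) node features."""
    deg = adj.sum(axis=1, keepdims=True)           # node degrees
    return (adj @ features) / np.maximum(deg, 1)   # neighbour mean

# Toy graph: a path 0 - 1 - 2, with a 1-dimensional feature per node.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.array([[1.0], [0.0], [0.0]])

out = message_passing_step(adj, x)
print(out)   # node 1 receives the mean of nodes 0 and 2: 0.5
```

Stacking many such steps is exactly the regime in which the beliefs discussed here (oversmoothing, oversquashing, long-range reach) are usually stated.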

What Happens Next

Future work will likely focus on developing targeted benchmarks that isolate each phenomenon. Researchers may also revisit existing models to test the clarified definitions and improve interpretability.

Frequently Asked Questions

What is oversmoothing?

Oversmoothing occurs when node representations become too similar after many message‑passing layers, reducing their discriminative power.
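
A quick numeric sketch of this effect (illustrative only; the graph, the self-loop normalisation, and the layer count are arbitrary choices, not taken from the paper): repeatedly applying a row-normalised averaging operator shrinks the spread of node features toward zero.

```python
import numpy as np

# Oversmoothing toy: on a path graph 0 - 1 - 2 with self-loops,
# repeated neighbourhood averaging drives all node features toward
# a common value, so deep stacks lose discriminative power.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
p = adj + np.eye(3)                     # add self-loops
p /= p.sum(axis=1, keepdims=True)       # row-normalise (random-walk operator)

x = np.array([[1.0], [0.0], [-1.0]])    # initially well-separated features
for _ in range(20):                     # 20 "layers" of smoothing
    x = p @ x

spread = float(x.max() - x.min())
print(spread)   # prints a value near zero: the features have collapsed
```

The paper's point is that such collapse is a property of specific operators and depths, not a law that holds for every architecture or graph.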

How does the paper refute universal statements?

By constructing simple counterexamples that demonstrate the limits of each claim.
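
As a flavour of how a simple construction can refute a universal claim (this toy is our own illustration, not one of the paper's counterexamples): the blanket statement "repeated neighbourhood averaging always collapses all node features to a single value" already fails on a disconnected graph, where features converge to per-component means instead.

```python
import numpy as np

# Counterexample sketch: two disjoint edges (0-1 and 2-3).
# Averaging mixes features only within each connected component,
# so the two components keep distinct values forever.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
p = adj + np.eye(4)                     # self-loops
p /= p.sum(axis=1, keepdims=True)       # row-normalise

x = np.array([[1.0], [0.0], [5.0], [4.0]])
for _ in range(50):                     # many rounds of smoothing
    x = p @ x

print(x.ravel())   # → [0.5 0.5 4.5 4.5]: per-component means survive
```

One such small, formally checkable example suffices to demote a universal statement to a conditional one, which is the paper's methodological point.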

Why is this important for practitioners?

It prevents wasted effort on impossible fixes and guides the design of more effective graph models.

Original Source
Computer Science > Machine Learning
arXiv:2505.15547 [cs.LG] (v1 submitted 21 May 2025; last revised 19 Feb 2026, this version v3)
Title: Oversmoothing, Oversquashing, Heterophily, Long-Range, and more: Demystifying Common Beliefs in Graph Machine Learning
Authors: Adrian Arnaiz-Rodriguez, Federico Errica

Abstract: After a renaissance phase in which researchers revisited the message-passing paradigm through the lens of deep learning, the graph machine learning community shifted its attention towards a deeper and practical understanding of message-passing's benefits and limitations. In this paper, we notice how the fast pace of progress around the topics of oversmoothing and oversquashing, the homophily-heterophily dichotomy, and long-range tasks, came with the consolidation of commonly accepted beliefs and assumptions -- under the form of universal statements -- that are not always true nor easy to distinguish from each other. We argue that this has led to ambiguities around the investigated problems, preventing researchers from focusing on and addressing precise research questions while causing a good amount of misunderstandings. Our contribution is to make such common beliefs explicit and encourage critical thinking around these topics, refuting universal statements via simple yet formally sufficient counterexamples. The end goal is to clarify conceptual differences, helping researchers address more clearly defined and targeted problems.

Comments: International Conference on Learning Representations (ICLR 2026)
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
DOI: https://doi.org/10.48550/arXiv.2505.15547

Source

arxiv.org
