
Poisoning the Inner Prediction Logic of Graph Neural Networks for Clean-Label Backdoor Attacks

#Graph Neural Networks #backdoor attack #clean-label #poisoning #adversarial machine learning #model security #GNN vulnerabilities

📌 Key Takeaways

  • Researchers propose BA-Logic, a clean-label backdoor attack targeting Graph Neural Networks (GNNs).
  • Unlike prior graph backdoor attacks, it never flips training labels; instead it poisons the model's inner prediction logic so that the injected triggers become decisive for prediction.
  • Because data labels remain untouched, the poisoning is considerably harder to detect than conventional label-flipping backdoor attacks.
  • At inference time, test nodes with the trigger attached are misclassified as the attacker's target class, exposing a stealthy GNN vulnerability (a minimal sketch of such trigger injection follows below).
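
To make the clean-label constraint concrete, here is a minimal sketch of trigger injection on a node-classification graph. It is illustrative only: the dense adjacency representation, the fully connected trigger subgraph, and all names (`attach_trigger`, `trigger_feats`) are assumptions for exposition, not details taken from the paper.

```python
# Hypothetical clean-label trigger injection (dense-tensor sketch).
import torch

def attach_trigger(adj, feats, node_ids, trigger_feats):
    """Attach a small trigger subgraph to each poisoned node.

    adj:           (N, N) dense adjacency matrix
    feats:         (N, F) node feature matrix
    node_ids:      nodes to poison; in a clean-label attack these are
                   chosen only from nodes whose TRUE label is already
                   the target class, so no label is ever changed
    trigger_feats: (T, F) feature matrix of the trigger subgraph
    """
    T = trigger_feats.size(0)
    for v in node_ids:
        n = adj.size(0)
        # grow the graph by T trigger nodes
        new_adj = torch.zeros(n + T, n + T)
        new_adj[:n, :n] = adj
        # fully connect the trigger nodes to each other ...
        new_adj[n:, n:] = 1 - torch.eye(T)
        # ... and wire the trigger to the victim node
        new_adj[v, n:] = 1
        new_adj[n:, v] = 1
        adj = new_adj
        feats = torch.cat([feats, trigger_feats], dim=0)
    return adj, feats
```

The decisive point is the comment on `node_ids`: a clean-label attacker may only poison nodes whose true label already equals the target class, so the model must come to associate the trigger itself, rather than any relabeled node, with that class.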

📖 Full Retelling

The paper's abstract reports that Graph Neural Networks (GNNs) have achieved remarkable results across a variety of tasks, but recent studies reveal that graph backdoor attacks can poison a GNN model so that test nodes with triggers attached are predicted as the attacker's target class. However, besides injecting triggers into training nodes, existing graph backdoor attacks generally require altering the labels of trigger-attached training nodes to the target class, which is impractical in real-world scenarios. The authors therefore focus on the clean-label graph backdoor attack, a realistic but understudied setting in which training labels cannot be modified. Their preliminary analysis shows that existing graph backdoor attacks generally fail under this setting, and that the core failure lies in their inability to poison the prediction logic of GNN models, so the triggers end up deemed unimportant for prediction. To address this, they study the novel problem of effective clean-label graph backdoor attacks that poison the inner prediction logic of GNN models, and propose BA-Logic, which coordinates a poisoned node selector with a logic-poisoning trigger generator. Extensive experiments on real-world datasets demonstrate that BA-Logic effectively raises the attack success rate and surpasses state-of-the-art graph backdoor attack competitors under clean-label settings.
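
The abstract names two coordinated components: a poisoned node selector and a logic-poisoning trigger generator. The sketch below shows one plausible way such components could fit together; it is not the paper's BA-Logic implementation, and both the confidence-based selection heuristic and the MLP generator are assumptions.

```python
# Speculative sketch of the two components the abstract names.
# NOT the paper's BA-Logic: selection heuristic and generator
# architecture are illustrative assumptions.
import torch
import torch.nn as nn

class TriggerGenerator(nn.Module):
    """Hypothetical generator mapping a victim node's features to the
    features of a small trigger subgraph."""
    def __init__(self, feat_dim, trigger_size=3):
        super().__init__()
        self.trigger_size = trigger_size
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, trigger_size * feat_dim),
        )

    def forward(self, victim_feats):              # (B, F)
        out = self.net(victim_feats)              # (B, T*F)
        return out.view(-1, self.trigger_size, victim_feats.size(1))

def select_poisoned_nodes(logits, labels, target_class, budget):
    """Hypothetical selector: among training nodes whose TRUE label is
    already the target class (the clean-label constraint), pick the
    ones a surrogate model is least confident about, on the assumption
    that their prediction logic is easiest to re-wire."""
    conf = logits.detach().softmax(-1)[:, target_class].clone()
    conf[labels != target_class] = float("inf")   # ineligible nodes sort last
    return conf.argsort()[:budget]                # least confident first
```

In such a design, the selector would pick which target-class training nodes receive triggers, and the generator would be trained so that the trigger features dominate the model's prediction for those nodes, which matches the coordination the abstract describes at a high level.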

🏷️ Themes

Cybersecurity, Machine Learning

📚 Related People & Topics

Graph neural network

Class of artificial neural networks

Graph neural networks (GNN) are specialized artificial neural networks that are designed for tasks whose inputs are graphs. One prominent example is molecular drug design. Each input sample is a graph representation of a molecule, where atoms form the nodes and chemical bonds between atoms form the...
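
As background on the model family under attack, a single graph convolution layer (the widely used GCN propagation rule) can be written in plain PyTorch; the self-contained sketch below is background illustration, not code from the paper.

```python
# Minimal GCN layer: each node averages its neighbours' transformed
# features, using the symmetric normalization D^{-1/2}(A+I)D^{-1/2}.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj, feats):
        # add self-loops so a node keeps its own features
        a = adj + torch.eye(adj.size(0))
        # symmetric degree normalization
        d = a.sum(dim=1).rsqrt()
        a = d.unsqueeze(1) * a * d.unsqueeze(0)
        # aggregate transformed neighbour features
        return torch.relu(a @ self.lin(feats))
```

This neighbourhood aggregation is precisely what a graph backdoor exploits: a trigger subgraph wired to a victim node changes the messages that node receives.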


Entity Intersection Graph

Connections for Graph neural network:

🌐 Artificial intelligence 2 shared
🌐 GNN 1 shared
🌐 Mixture of experts 1 shared
🌐 Development of the nervous system in humans 1 shared
🌐 LUMINA 1 shared

Original Source

Computer Science > Machine Learning
arXiv:2603.05004 [Submitted on 5 Mar 2026]

Title: Poisoning the Inner Prediction Logic of Graph Neural Networks for Clean-Label Backdoor Attacks
Authors: Yuxiang Zhang, Bin Ma, Enyan Dai

Abstract: Graph Neural Networks have achieved remarkable results in various tasks. Recent studies reveal that graph backdoor attacks can poison the GNN model to predict test nodes with triggers attached as the target class. However, apart from injecting triggers into training nodes, these graph backdoor attacks generally require altering the labels of trigger-attached training nodes to the target class, which is impractical in real-world scenarios. In this work, we focus on the clean-label graph backdoor attack, a realistic but understudied topic where training labels are not modifiable. According to our preliminary analysis, existing graph backdoor attacks generally fail under the clean-label setting. Our further analysis identifies that the core failure of existing methods lies in their inability to poison the prediction logic of GNN models, leading to the triggers being deemed unimportant for prediction. Therefore, we study a novel problem of effective clean-label graph backdoor attacks by poisoning the inner prediction logic of GNN models. We propose BA-Logic to solve the problem by coordinating a poisoned node selector and a logic-poisoning trigger generator. Extensive experiments on real-world datasets demonstrate that our method effectively enhances the attack success rate and surpasses state-of-the-art graph backdoor attack competitors under clean-label settings. Our code is available at this https URL

Comments: Submitted to KDD 2026
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.05004 [cs.LG] (or arXiv:2603.05004v1 [cs.LG])

Source

arxiv.org
