Axiomatic On-Manifold Shapley via Optimal Generative Flows
#Shapley values #manifold learning #generative flows #feature attribution #interpretability #axiomatic approach #optimal transport
📌 Key Takeaways
- The paper develops a method for computing Shapley-style (Aumann-Shapley) attributions that stay on the data manifold, driven by optimal generative flows.
- It proves a representation theorem: the gradient line integral is the unique attribution functional satisfying efficiency and geometric axioms, notably reparameterization invariance.
- Path ambiguity is resolved by selecting the kinetic-energy-minimizing Wasserstein-2 geodesic that transports a prior to the data distribution.
- The resulting attributions recover classical Shapley for additive models, admit provable stability bounds against flow approximation errors, and avoid the off-manifold artifacts of heuristic baselines.
📖 Full Retelling
arXiv:2603.05093v1 Announce Type: cross
Abstract: Shapley-based attribution is critical for post-hoc XAI but suffers from off-manifold artifacts due to heuristic baselines. While generative methods attempt to address this, they often introduce geometric inefficiency and discretization drift. We propose a formal theory of on-manifold Aumann-Shapley attributions driven by optimal generative flows. We prove a representation theorem establishing the gradient line integral as the unique functional satisfying efficiency and geometric axioms, notably reparameterization invariance. To resolve path ambiguity, we select the kinetic-energy-minimizing Wasserstein-2 geodesic transporting a prior to the data distribution. This yields a canonical attribution family that recovers classical Shapley for additive models and admits provable stability bounds against flow approximation errors. By reframing baseline selection as a variational problem, our method experimentally outperforms baselines, achieving strict manifold adherence via vanishing Flow Consistency Error and superior semantic alignment characterized by Structure-Aware Total Variation.
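The gradient line integral at the heart of the abstract can be sketched numerically: attributions accumulate the model's gradient along a path from a baseline to the input, and by construction they sum to the change in the model's output (the efficiency axiom). This is a minimal illustrative sketch, not the paper's method: it uses a straight-line path as a stand-in for the kinetic-energy-minimizing Wasserstein-2 geodesic, and all function names are hypothetical.

```python
import numpy as np

def path_attributions(f_grad, x, baseline, n_steps=256):
    """Approximate the gradient line integral
        phi_i = \int_0^1 (d/dt) gamma_i(t) * (df/dx_i)(gamma(t)) dt
    along a path gamma from `baseline` to `x`. A straight line stands in
    here for the optimal-flow path used in the paper."""
    ts = (np.arange(n_steps) + 0.5) / n_steps            # midpoint rule
    phi = np.zeros_like(x, dtype=float)
    for t in ts:
        point = baseline + t * (x - baseline)            # gamma(t) on the straight path
        phi += f_grad(point) * (x - baseline) / n_steps  # dgamma/dt = x - baseline
    return phi

# Toy model f(x) = x0^2 + 2*x1, with gradient [2*x0, 2].
f = lambda z: z[0] ** 2 + 2.0 * z[1]
f_grad = lambda z: np.array([2.0 * z[0], 2.0])

x, baseline = np.array([1.0, 1.0]), np.zeros(2)
phi = path_attributions(f_grad, x, baseline)
# Efficiency: the attributions sum to f(x) - f(baseline).
print(phi, phi.sum(), f(x) - f(baseline))
```

For this quadratic toy the midpoint rule is exact: the attributions come out to [1.0, 2.0], summing to f(x) - f(baseline) = 3.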
🏷️ Themes
Machine Learning, Interpretability
Original Source
--> Computer Science > Machine Learning · arXiv:2603.05093 [Submitted on 5 Mar 2026]
Title: Axiomatic On-Manifold Shapley via Optimal Generative Flows
Authors: Cenwei Zhang, Lin Zhu, Manxi Lin, Lei You
Abstract: Shapley-based attribution is critical for post-hoc XAI but suffers from off-manifold artifacts due to heuristic baselines. While generative methods attempt to address this, they often introduce geometric inefficiency and discretization drift. We propose a formal theory of on-manifold Aumann-Shapley attributions driven by optimal generative flows. We prove a representation theorem establishing the gradient line integral as the unique functional satisfying efficiency and geometric axioms, notably reparameterization invariance. To resolve path ambiguity, we select the kinetic-energy-minimizing Wasserstein-2 geodesic transporting a prior to the data distribution. This yields a canonical attribution family that recovers classical Shapley for additive models and admits provable stability bounds against flow approximation errors. By reframing baseline selection as a variational problem, our method experimentally outperforms baselines, achieving strict manifold adherence via vanishing Flow Consistency Error and superior semantic alignment characterized by Structure-Aware Total Variation. Our code is on this https URL.
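The abstract claims the canonical attribution family recovers classical Shapley for additive models. That claim can be checked on a toy example: for an additive model, a brute-force baseline-Shapley computation and a gradient line integral along any path agree feature by feature. The sketch below is illustrative only (hypothetical names, a straight-line path standing in for the paper's learned flow, and exhaustive enumeration that only scales to a handful of features).

```python
import numpy as np
from itertools import combinations
from math import factorial

def exact_shapley(f, x, baseline):
    """Baseline Shapley values by explicit enumeration over coalitions S,
    weighting each marginal contribution by |S|! (d-|S|-1)! / d!."""
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        rest = [j for j in range(d) if j != i]
        for r in range(d):
            for S in combinations(rest, r):
                w = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                z_with = baseline.copy(); z_with[list(S) + [i]] = x[list(S) + [i]]
                z_without = baseline.copy(); z_without[list(S)] = x[list(S)]
                phi[i] += w * (f(z_with) - f(z_without))
    return phi

def line_integral_attr(f_grad, x, baseline, n=512):
    """Midpoint-rule gradient line integral along the straight path."""
    ts = (np.arange(n) + 0.5) / n
    return sum(f_grad(baseline + t * (x - baseline)) for t in ts) * (x - baseline) / n

# Additive model f(x) = sin(x0) + x1^2 + 3*x2.
f = lambda z: np.sin(z[0]) + z[1] ** 2 + 3.0 * z[2]
f_grad = lambda z: np.array([np.cos(z[0]), 2.0 * z[1], 3.0])

x, b = np.array([0.7, -1.2, 0.5]), np.zeros(3)
phi_shapley = exact_shapley(f, x, b)
phi_path = line_integral_attr(f_grad, x, b)
print(phi_shapley)  # per-feature contributions g_i(x_i) - g_i(0)
print(phi_path)     # numerically matches the Shapley values
```

Because the model is additive, each feature's marginal contribution is the same in every coalition, so both computations reduce to g_i(x_i) - g_i(b_i); for non-additive models the two generally differ, which is where the paper's canonical path choice matters.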
Comments: 11 figures, 22 pages
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.05093 [cs.LG] (arXiv:2603.05093v1 for this version), https://doi.org/10.48550/arXiv.2603.05093 (DataCite DOI registration pending)
Submission history: [v1] from Cenwei Zhang, Thu, 5 Mar 2026 12:05:20 UTC (4,061 KB)
Read full article at source