Superclass-Guided Representation Disentanglement for Spurious Correlation Mitigation
#superclass #representation-disentanglement #spurious-correlation #machine-learning #bias-mitigation #robustness #generalization
Key Takeaways
- The article (arXiv:2508.08570v2) proposes a method to mitigate spurious correlations in machine learning models without requiring auxiliary group annotations.
- It uses superclass-guided representation disentanglement — superclasses are categories higher in the semantic hierarchy than the task's labels — to separate task-relevant features from spurious ones.
- Unlike prior group-robustness work, the approach does not assume identical sets of groups across training and test domains.
- The goal is improved robustness and generalization in the presence of unintended correlations in the training data.
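The mechanism sketched in the takeaways above — splitting a representation into a task-relevant part and a residual part, supervising the task-relevant part with superclass labels, and penalizing dependence between the two parts — can be illustrated with a toy sketch. This is not the paper's actual architecture: the fine-to-superclass mapping, the dimensions, the linear classifier, and the cross-covariance penalty weight are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mapping from fine-grained labels to superclasses
# (invented for illustration, e.g. {sparrow, eagle} -> bird, {sedan, truck} -> vehicle).
FINE_TO_SUPER = {0: 0, 1: 0, 2: 1, 3: 1}

def split_representation(z, core_dim):
    """Partition a representation into a task-relevant core part and a
    residual part intended to absorb spurious/environmental factors."""
    return z[:, :core_dim], z[:, core_dim:]

def cross_covariance_penalty(z_core, z_res):
    """Squared Frobenius norm of the cross-covariance between the two
    parts; driving this toward zero encourages (linear) disentanglement."""
    zc = z_core - z_core.mean(axis=0, keepdims=True)
    zr = z_res - z_res.mean(axis=0, keepdims=True)
    cov = zc.T @ zr / len(zc)
    return float((cov ** 2).sum())

def superclass_loss(z_core, fine_labels, W_super):
    """Cross-entropy of a linear superclass classifier on the core part,
    with superclass targets derived from the fine-grained labels."""
    super_labels = np.array([FINE_TO_SUPER[int(y)] for y in fine_labels])
    logits = z_core @ W_super
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(super_labels)), super_labels].mean())

# Toy forward pass on random "representations".
z = rng.normal(size=(32, 16))
z_core, z_res = split_representation(z, core_dim=8)
W_super = rng.normal(size=(8, 2))            # 2 superclasses
fine_labels = rng.integers(0, 4, size=32)    # 4 fine-grained classes

total = superclass_loss(z_core, fine_labels, W_super) \
        + 0.1 * cross_covariance_penalty(z_core, z_res)
print(round(total, 3))
```

In training, `total` would be minimized jointly with the usual fine-grained classification loss, so that features predictive of the superclass concentrate in the core part while the residual part stays statistically decoupled from it.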
Full Retelling
arXiv:2508.08570v2 Announce Type: replace-cross
Abstract: To enhance group robustness to spurious correlations, prior work often relies on auxiliary group annotations and assumes identical sets of groups across training and test domains. To overcome these limitations, we propose to leverage superclasses -- categories that lie higher in the semantic hierarchy than the task's actual labels -- as a more intrinsic signal than group labels for discerning spurious correlations. Our model incorporates […]
Themes
Machine Learning, Bias Mitigation
Original Source
Read the full article at the source (arXiv:2508.08570v2).