SubFLOT: Submodel Extraction for Efficient and Personalized Federated Learning via Optimal Transport
#Federated Learning #Optimal Transport #Model Personalization #Network Pruning #Submodel Extraction #System Heterogeneity #arXiv #SubFLOT
📌 Key Takeaways
- Researchers introduced SubFLOT, a new framework for efficient and personalized Federated Learning.
- It uses Optimal Transport theory to extract client-specific submodels from a global model.
- The method resolves the trade-off between server-side pruning (no personalization) and client-side pruning (computationally heavy for devices).
- It addresses both system heterogeneity (device resources) and statistical heterogeneity (non-IID data).
- The goal is to enable practical FL deployment on resource-constrained devices like phones and IoT sensors.
📖 Full Retelling
A team of researchers has proposed a novel framework called SubFLOT (Submodel Extraction for Efficient and Personalized Federated Learning via Optimal Transport) to address core challenges in Federated Learning (FL), as detailed in a recent paper posted on the arXiv preprint server under identifier arXiv:2604.06631v1 (a cross-listed announcement). The work aims to resolve the efficiency-personalization trade-off that currently hinders the practical deployment of FL systems across heterogeneous devices and data distributions.
The central innovation of SubFLOT lies in its use of Optimal Transport theory to extract a personalized submodel from the global model for each client. Traditional federated pruning methods face a dilemma: pruning on the server side yields generic, non-personalized models, while pruning on the client side burdens resource-constrained devices with heavy on-device computation. SubFLOT instead distributes the workload strategically: the server uses client-specific information and Optimal Transport to identify and extract the subnetwork most relevant to each client's local data, producing a tailored model without imposing excessive computational overhead on the client.
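The paper's exact formulation is not reproduced here, but the general idea of OT-based relevance scoring for submodel extraction can be illustrated with a toy sketch. The function names (`sinkhorn_plan`, `select_submodel`), the notion of per-neuron "signatures", and the per-neuron transport-cost scoring rule below are illustrative assumptions for this sketch, not the authors' actual algorithm or API:

```python
import numpy as np

def sinkhorn_plan(cost, a, b, reg=0.05, n_iter=300):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    cost: (m, n) pairwise cost matrix; a: (m,) and b: (n,) marginals.
    The cost is rescaled by its maximum so the kernel does not underflow.
    """
    K = np.exp(-cost / (reg * cost.max()))
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)   # satisfy column marginals
        u = a / (K @ v)     # satisfy row marginals
    return u[:, None] * K * v[None, :]  # transport plan

def select_submodel(client_feats, neuron_sigs, keep):
    """Pick the `keep` global neurons most relevant to a client's data.

    Each neuron is represented by a signature vector; each client sample
    by a feature vector. Neurons that the client's feature distribution
    can 'reach' cheaply under the OT plan are retained; the rest are
    pruned from this client's submodel.
    """
    # Pairwise squared-Euclidean cost between samples and neuron signatures.
    cost = ((client_feats[:, None, :] - neuron_sigs[None, :, :]) ** 2).sum(-1)
    m, n = cost.shape
    plan = sinkhorn_plan(cost, np.full(m, 1.0 / m), np.full(n, 1.0 / n))
    per_neuron_cost = (plan * cost).sum(axis=0)   # transport cost into each neuron
    return np.sort(np.argsort(per_neuron_cost)[:keep])

# Demo: 5 neurons whose signatures sit near the client's data, 5 far away.
rng = np.random.default_rng(0)
neuron_sigs = np.vstack([rng.normal(0.0, 0.3, size=(5, 3)),    # relevant
                         rng.normal(10.0, 0.3, size=(5, 3))])  # irrelevant
client_feats = rng.normal(0.0, 0.3, size=(40, 3))
kept = select_submodel(client_feats, neuron_sigs, keep=5)
```

In this synthetic setup the selected indices are exactly the five neurons clustered near the client's feature distribution, which is the qualitative behavior the server-side extraction step would need: relevance is decided from client statistics, so the client never runs the pruning computation itself.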
This approach directly tackles the dual challenges of system heterogeneity (varied device capabilities) and statistical heterogeneity (non-IID data across clients). By providing personalized, efficient submodels, SubFLOT enhances model performance on local tasks while maintaining the privacy-preserving collaborative spirit of FL. The framework represents a significant step toward making federated learning more viable for real-world applications, such as on mobile phones or IoT devices, where both efficiency and adaptation to local data patterns are critical for success.
🏷️ Themes
Artificial Intelligence, Machine Learning, Data Privacy, Algorithmic Efficiency
Original Source
arXiv:2604.06631v1 Announce Type: cross
Abstract: Federated Learning (FL) enables collaborative model training while preserving data privacy, but its practical deployment is hampered by system and statistical heterogeneity. While federated network pruning offers a path to mitigate these issues, existing methods face a critical dilemma: server-side pruning lacks personalization, whereas client-side pruning is computationally prohibitive for resource-constrained devices. Furthermore, the pruning