FedEMA-Distill: Exponential Moving Average Guided Knowledge Distillation for Robust Federated Learning


#FedEMA-Distill #ExponentialMovingAverage #KnowledgeDistillation #FederatedLearning #Robustness #ClientDrift #ModelConvergence

πŸ“Œ Key Takeaways

  • FedEMA-Distill introduces a federated learning method combining exponential moving average (EMA) with knowledge distillation.
  • The approach enhances model robustness by mitigating data heterogeneity and client drift in distributed settings.
  • It leverages EMA to stabilize local model updates and distillation to transfer knowledge from a global teacher model.
  • Experiments show improved performance and convergence over existing federated learning techniques.

πŸ“– Full Retelling

arXiv:2603.04422v1 (cross-listed). Abstract: Federated learning (FL) often degrades when clients hold heterogeneous non-Independent and Identically Distributed (non-IID) data and when some clients behave adversarially, leading to client drift, slow convergence, and high communication overhead. This paper proposes FedEMA-Distill, a server-side procedure that combines an exponential moving average (EMA) of the global model with ensemble knowledge distillation from client-uploaded prediction logits evaluated on a small public proxy dataset.
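The ensemble-distillation step can be illustrated as follows: clients upload logits computed on a shared proxy set, and the server averages them into soft targets for distilling into the global model. This is a hedged sketch; the helper names and the temperature value are assumptions, not details from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_targets(client_logits, temperature=2.0):
    """Build soft targets from an ensemble of client logits.

    client_logits: array of shape (num_clients, num_proxy_examples, num_classes),
        each client's predictions on the same public proxy dataset.
    temperature: softening factor; 2.0 is illustrative, not from the paper.
    Returns per-example class distributions the server can distill from.
    """
    ensemble = client_logits.mean(axis=0)          # average logits across clients
    return softmax(ensemble / temperature)          # soften into a distribution

# Example: 3 clients, 4 proxy examples, 5 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4, 5))
targets = distillation_targets(logits)
```

The server would then minimize a cross-entropy or KL loss between the global model's (temperature-scaled) predictions on the proxy set and these targets; that training loop is omitted here.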

🏷️ Themes

Federated Learning, Machine Learning


Original Source
Computer Science > Machine Learning

arXiv:2603.04422 [Submitted on 15 Feb 2026]

Title: FedEMA-Distill: Exponential Moving Average Guided Knowledge Distillation for Robust Federated Learning

Authors: Hamza Reguieg, Mohamed El Kamili, Essaid Sabir

Abstract: Federated learning often degrades when clients hold heterogeneous non-Independent and Identically Distributed (non-IID) data and when some clients behave adversarially, leading to client drift, slow convergence, and high communication overhead. This paper proposes FedEMA-Distill, a server-side procedure that combines an exponential moving average of the global model with ensemble knowledge distillation from client-uploaded prediction logits evaluated on a small public proxy dataset. Clients run standard local training, upload only compressed logits, and may use different model architectures, so no changes are required to client-side software while still supporting model heterogeneity across devices.

Experiments on CIFAR-10, CIFAR-100, FEMNIST, and AG News under Dirichlet-0.1 label skew show that FedEMA-Distill improves top-1 accuracy by several percentage points (up to +5% on CIFAR-10 and +6% on CIFAR-100) over representative baselines, reaches a given target accuracy in 30-35% fewer communication rounds, and reduces per-round client uplink payloads to 0.09-0.46 MB, roughly an order of magnitude less than transmitting full model weights. Using coordinate-wise median or trimmed-mean aggregation of logits at the server further stabilizes training in the presence of up to 10-20% Byzantine clients and yields well-calibrated predictions under attack.
These results indicate that coupling temporal smoothing with logits-only aggregation provides a communication-efficient and attack-resilient FL pipeline that is deployment-friendly.
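The Byzantine-robust aggregation mentioned in the abstract (coordinate-wise median or trimmed mean over client logits) can be sketched as below. The function name and the trim ratio are illustrative assumptions; the paper's exact hyperparameters are not reproduced here.

```python
import numpy as np

def robust_aggregate(client_logits, method="median", trim_ratio=0.1):
    """Coordinate-wise robust aggregation of client logits.

    client_logits: array with clients on axis 0, e.g. shape
        (num_clients, num_proxy_examples, num_classes).
    method: "median" or "trimmed_mean".
    trim_ratio: fraction of extreme values dropped per coordinate
        for the trimmed mean; 0.1 is illustrative.
    """
    if method == "median":
        return np.median(client_logits, axis=0)
    # Trimmed mean: sort per coordinate, drop the k smallest and k largest
    # client values, then average what remains.
    n = client_logits.shape[0]
    k = int(n * trim_ratio)
    sorted_logits = np.sort(client_logits, axis=0)
    return sorted_logits[k:n - k].mean(axis=0)

# Example: three honest clients agree; one Byzantine client sends an outlier.
logits = np.array([[1.0], [1.0], [1.0], [100.0]])
med = robust_aggregate(logits, method="median")
trimmed = robust_aggregate(logits, method="trimmed_mean", trim_ratio=0.25)
```

Both estimators ignore the outlier here: the median of {1, 1, 1, 100} is 1, and trimming one value from each end leaves {1, 1}, which also averages to 1. A plain mean would instead be pulled to 25.75, which is why the paper reports these aggregators stabilizing training under Byzantine clients.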

Source

arxiv.org
