ZorBA: Zeroth-order Federated Fine-tuning of LLMs with Heterogeneous Block Activation
#ZorBA #zeroth-order-optimization #federated-learning #large-language-models #heterogeneous-block-activation #fine-tuning #privacy #decentralized-AI
📌 Key Takeaways
- ZorBA introduces a federated learning method for fine-tuning large language models (LLMs) without requiring first-order gradients.
- The approach uses zeroth-order optimization, which estimates updates from forward passes alone, so clients never need to store gradients and VRAM usage stays low.
- It incorporates heterogeneous block activation to selectively update model blocks, improving efficiency and reducing communication costs.
- This method enhances privacy by keeping data local while enabling collaborative model improvement.
- Experiments show ZorBA reduces VRAM usage by up to 62.41% versus three federated fine-tuning baselines, making LLM fine-tuning more accessible and scalable in federated settings.
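The zeroth-order idea behind these takeaways can be illustrated with a minimal two-point (SPSA-style) estimator. This is a generic sketch of the technique, not ZorBA's actual implementation; `zo_gradient_estimate`, the toy quadratic loss, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def zo_gradient_estimate(loss_fn, theta, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate (SPSA-style).

    Needs only two forward evaluations of loss_fn, so no backward pass,
    activation cache, or gradient storage is required.
    """
    rng = rng if rng is not None else np.random.default_rng()
    z = rng.standard_normal(theta.shape)                     # random direction
    diff = loss_fn(theta + mu * z) - loss_fn(theta - mu * z)
    return (diff / (2.0 * mu)) * z                           # gradient estimate

# Sanity check on a quadratic, whose true gradient is 2 * theta:
# averaging many independent estimates recovers it.
theta = np.array([1.0, -2.0, 0.5])
loss = lambda w: float(np.sum(w ** 2))
avg = np.mean(
    [zo_gradient_estimate(loss, theta, rng=np.random.default_rng(i))
     for i in range(4000)],
    axis=0,
)
print(np.round(avg, 2))  # approaches 2 * theta = [2., -4., 1.]
```

A single estimate is noisy (it is the true gradient projected onto one random direction), which is why zeroth-order methods typically need more steps than backprop; the trade is steps for memory.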
📖 Full Retelling
arXiv:2603.04436v1
Abstract: Federated fine-tuning of large language models (LLMs) enables collaborative tuning across distributed clients. However, due to the large size of LLMs, local updates in federated learning (FL) may incur substantial video random-access memory (VRAM) usage. Moreover, frequent model exchange may lead to significant communication overhead. To tackle these challenges, in this paper we propose ZorBA, a zeroth-order optimization-based federated fine-tuning framework with heterogeneous block activation. ZorBA leverages zeroth-order optimization, using only forward passes, to eliminate the storage of gradients at the clients. ZorBA includes a heterogeneous block activation mechanism in which the central server allocates different subsets of transformer blocks to clients in order to accelerate convergence and reduce VRAM usage. Furthermore, ZorBA utilizes shared random seeds and finite differences of gradients to reduce the communication overhead. The authors conduct theoretical analysis to characterize the effect of block activation decisions on the convergence rate and VRAM usage, formulate an optimization problem to jointly enhance convergence and reduce VRAM usage, and propose an $\epsilon$-constraint lexicographic algorithm to solve it. Experimental results show that ZorBA outperforms three federated fine-tuning baselines in VRAM usage by up to 62.41% and incurs a low communication overhead.
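The communication-saving trick (shared random seeds plus scalar finite differences) can be sketched as follows: the client transmits only a seed and one scalar, and the server regenerates the identical perturbation from that seed. This is a hypothetical MeZO-style illustration, not ZorBA's code; `client_step`, `server_apply`, and the toy objective are assumed names.

```python
import numpy as np

def client_step(loss_fn, theta, seed, mu=1e-3):
    """Client: evaluate the finite difference along a seeded direction.

    The payload is (seed, scalar) -- a few bytes, regardless of model size.
    """
    z = np.random.default_rng(seed).standard_normal(theta.shape)
    proj = (loss_fn(theta + mu * z) - loss_fn(theta - mu * z)) / (2.0 * mu)
    return seed, float(proj)

def server_apply(theta, seed, proj, lr=0.05):
    """Server: regenerate the identical perturbation from the shared seed
    and apply the scaled update, reconstructing the full-size step locally."""
    z = np.random.default_rng(seed).standard_normal(theta.shape)
    return theta - lr * proj * z

# One toy round on a quadratic objective.
theta = np.full(4, 2.0)
loss = lambda w: float(np.sum(w ** 2))
seed, proj = client_step(loss, theta, seed=42)
theta_new = server_apply(theta, seed, proj)
```

Because both sides seed the same pseudorandom generator, no model-sized tensor ever crosses the network; this is what keeps the communication overhead low.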
🏷️ Themes
Federated Learning, LLM Optimization
Original Source
Computer Science > Machine Learning
arXiv:2603.04436 [Submitted on 19 Feb 2026]
Title: ZorBA: Zeroth-order Federated Fine-tuning of LLMs with Heterogeneous Block Activation
Authors: Chuiyang Meng, Ming Tang, Vincent W.S. Wong
Abstract: Federated fine-tuning of large language models enables collaborative tuning across distributed clients. However, due to the large size of LLMs, local updates in federated learning may incur substantial video random-access memory usage. Moreover, frequent model exchange may lead to significant communication overhead. To tackle these challenges, in this paper we propose ZorBA, a zeroth-order optimization-based federated fine-tuning framework with heterogeneous block activation. ZorBA leverages zeroth-order optimization to eliminate the storage of gradients at the clients by forward passes. ZorBA includes a heterogeneous block activation mechanism in which the central server allocates different subsets of transformer blocks to clients in order to accelerate the convergence rate and reduce the VRAM usage. Furthermore, ZorBA utilizes shared random seeds and the finite differences of gradients in order to reduce the communication overhead. We conduct theoretical analysis to characterize the effect of block activation decisions on the convergence rate and VRAM usage. To jointly enhance the convergence rate and reduce the VRAM usage, we formulate an optimization problem to optimize the block activation decisions. We propose an $\epsilon$-constraint lexicographic algorithm to solve this problem. Experimental results show that ZorBA outperforms three federated fine-tuning baselines in VRAM usage by up to 62.41% and incurs a low communication overhead.
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI). Cite as: arXiv:2603.04436 [cs.LG]
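The heterogeneous block activation mechanism can be mimicked in miniature: the server assigns each client a subset of transformer blocks, and a client perturbs and updates only its active blocks, leaving the rest untouched (and hence needing no perturbation or optimizer state for them). The random allocator below is only a placeholder policy; ZorBA instead chooses activations by solving an $\epsilon$-constraint lexicographic optimization. All names and parameters here are illustrative assumptions.

```python
import numpy as np

N_BLOCKS, N_CLIENTS, BLOCKS_PER_CLIENT = 8, 3, 2

def allocate_blocks(n_blocks, n_clients, k, seed=0):
    """Toy server-side allocator: a random distinct subset per client."""
    rng = np.random.default_rng(seed)
    return {c: sorted(int(b) for b in rng.choice(n_blocks, k, replace=False))
            for c in range(n_clients)}

def local_zo_update(params, active, loss_fn, mu=1e-3, lr=0.05, seed=0):
    """One zeroth-order step that perturbs only the active blocks."""
    rng = np.random.default_rng(seed)
    z = {b: rng.standard_normal(params[b].shape) for b in active}
    plus  = {b: p + mu * z[b] if b in z else p for b, p in params.items()}
    minus = {b: p - mu * z[b] if b in z else p for b, p in params.items()}
    proj = (loss_fn(plus) - loss_fn(minus)) / (2.0 * mu)
    return {b: p - lr * proj * z[b] if b in z else p
            for b, p in params.items()}

# Toy model: each "block" is a small parameter vector.
assignment = allocate_blocks(N_BLOCKS, N_CLIENTS, BLOCKS_PER_CLIENT)
params = {b: np.ones(4) for b in range(N_BLOCKS)}
loss = lambda ps: float(sum(np.sum(p ** 2) for p in ps.values()))
updated = local_zo_update(params, assignment[0], loss)  # client 0's step
```

Freezing inactive blocks is what cuts per-client VRAM: only the active subset needs perturbation buffers, and different clients covering different subsets still advance the full model across rounds.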
Read full article at source