Elimination-compensation pruning for fully-connected neural networks


#Neural Network Pruning #Elimination-Compensation #Model Compression #Deep Learning #Artificial Intelligence #Optimization Techniques #Sparsity #Machine Learning

📌 Key Takeaways

  • Novel pruning method that compensates each removed weight with a perturbation of its adjacent bias
  • Each weight's importance is computed from the network's output behavior after the optimal bias perturbation, evaluated efficiently via automatic differentiation
  • Bias compensation preserves information without reducing the sparsity gained by pruning
  • Benchmarked against popular pruning strategies, showing consistent efficiency across diverse machine learning scenarios

📖 Full Retelling

On February 24, 2026, researchers Enrico Ballini, Luca Muscarnera, Alessio Fumagalli, Anna Scotti, and Francesco Regazzoni published a paper on arXiv introducing a neural network pruning method called 'elimination-compensation pruning', which addresses the challenge of balancing model compression against information preservation in deep neural networks.

The authors question the traditional assumption that expendable weights are simply those with small impact on network error. Instead, they propose that a removed weight can be compensated by a perturbation of its adjacent bias, which does not contribute to network sparsity. This allows the network to remain sparse while preserving information that would otherwise be lost during pruning.

Concretely, the method computes the importance of each weight from the network's output behavior after an optimal perturbation of its adjacent bias, a quantity made cheap to evaluate through automatic differentiation. After deriving analytical expressions for these quantities, the team ran numerical experiments comparing the approach against leading pruning strategies across diverse machine learning scenarios, demonstrating consistent efficiency in model compression without sacrificing performance.
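The idea can be illustrated with a minimal NumPy sketch. This is not the paper's exact formulation (the authors derive their importance measure and optimal perturbations analytically, via automatic differentiation through the full network); here we assume the simplest single-layer case, where the optimal constant compensation for removing weight W[i, j] is a bias shift of W[i, j] times the mean input, and importance is the residual output variance left after that shift. All function and variable names are illustrative.

```python
import numpy as np

def compensated_prune(W, b, X, sparsity):
    """Prune a fully-connected layer y = X @ W + b by elimination-compensation.

    Sketch only: removing weight W[i, j] is compensated by shifting the
    adjacent bias b[j] by W[i, j] * mean(X[:, i]), so the layer's mean
    pre-activation is preserved exactly. A weight's importance is the
    residual output variance W[i, j]**2 * var(X[:, i]) that the constant
    compensation cannot absorb, rather than the raw magnitude |W[i, j]|.
    """
    mu = X.mean(axis=0)                    # per-input mean, shape (n_in,)
    var = X.var(axis=0)                    # per-input variance, shape (n_in,)
    importance = (W ** 2) * var[:, None]   # shape (n_in, n_out)

    k = int(sparsity * W.size)             # number of weights to eliminate
    drop = np.argsort(importance.ravel())[:k]   # least-important entries
    mask = np.ones(W.size, dtype=bool)
    mask[drop] = False
    mask = mask.reshape(W.shape)

    W_pruned = W * mask
    # Each eliminated weight contributes its mean activation to the bias.
    b_comp = b + ((W * ~mask) * mu[:, None]).sum(axis=0)
    return W_pruned, b_comp
```

By construction, the compensated layer matches the dense layer's mean pre-activation exactly, so pruning error comes only from fluctuations of the inputs around their means, which is what the variance-based importance score ranks.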

🏷️ Themes

Machine Learning, Neural Networks, Model Optimization

📚 Related People & Topics

Deep learning

Branch of machine learning

In machine learning, deep learning focuses on utilizing multilayered neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and revolves around stacking artificial neurons into layers and "training" t...

Artificial intelligence

Intelligence of machines

Artificial intelligence (AI) is a specialized field of computer science dedicated to the development and study of computational systems capable of performing tasks typically associated with human intelligence. These tasks include learning, reasoning, problem-solvi...


Entity Intersection Graph

Connections for Deep learning:

🌐 Explainable artificial intelligence 3 shared
🌐 Medical imaging 2 shared
🌐 Applications of artificial intelligence 1 shared
🌐 Plant pathology 1 shared
🌐 Unmanned aerial vehicle 1 shared
Original Source
Computer Science > Machine Learning, arXiv:2602.20467 [Submitted on 24 Feb 2026]

Title: Elimination-compensation pruning for fully-connected neural networks

Authors: Enrico Ballini, Luca Muscarnera, Alessio Fumagalli, Anna Scotti, Francesco Regazzoni

Abstract: The unmatched ability of Deep Neural Networks in capturing complex patterns in large and noisy datasets is often associated with their large hypothesis space, and consequently with the vast number of parameters that characterize model architectures. Pruning techniques have affirmed themselves as valid tools to extract sparse representations of neural network parameters, carefully balancing between compression and preservation of information. However, a fundamental assumption behind pruning is that expendable weights should have a small impact on the error of the network, while highly important weights should tend to have a larger influence on the inference. We argue that this idea can be generalized: what if a weight is not simply removed but also compensated with a perturbation of the adjacent bias, which does not contribute to the network sparsity? Our work introduces a novel pruning method in which the importance measure of each weight is computed considering the output behavior after an optimal perturbation of its adjacent bias, efficiently computable by automatic differentiation. These perturbations can then be applied directly after the removal of each weight, independently of each other. After deriving analytical expressions for the aforementioned quantities, numerical experiments are conducted to benchmark this technique against some of the most popular pruning strategies, demonstrating an intrinsic efficiency of the proposed approach in very diverse machine learning scenarios.
Finally, our findings are discussed and the theoretical implications of ou...
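The abstract's core claim, that compensating a removed weight through the adjacent bias loses less information than removing it outright, can be checked on a toy example. This is a hypothetical, self-contained demonstration, not the paper's benchmark: both variants prune the same half of a linear layer's weights, and only the second shifts each adjacent bias by the removed weight times the mean input, which is the optimal constant shift in-sample.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(loc=2.0, scale=1.0, size=(500, 16))  # inputs with nonzero mean
W = rng.normal(size=(16, 8))
b = rng.normal(size=8)
y_ref = X @ W + b                                   # dense layer output

k = W.size // 2                                     # remove half the weights

# Plain magnitude pruning: zero the smallest-|w| entries.
drop = np.argsort(np.abs(W).ravel())[:k]
mask = np.ones(W.size, dtype=bool)
mask[drop] = False
mask = mask.reshape(W.shape)
y_mag = X @ (W * mask) + b

# Elimination-compensation (sketch): same mask, but each removed weight
# W[i, j] shifts the adjacent bias by W[i, j] * mean(X[:, i]).
b_comp = b + ((W * ~mask) * X.mean(axis=0)[:, None]).sum(axis=0)
y_comp = X @ (W * mask) + b_comp

mse_mag = float(np.mean((y_ref - y_mag) ** 2))
mse_comp = float(np.mean((y_ref - y_comp) ** 2))
print(mse_mag, mse_comp)
```

Because the bias shift mean-centers the error contributed by the removed weights, the compensated output error can never exceed the uncompensated one in-sample, and with nonzero-mean inputs it is strictly smaller.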
Read full article at source

Source

arxiv.org
