Multi-Task Learning with Additive U-Net for Image Denoising and Classification
#Additive U-Net #Image Denoising #Multi-Task Learning #U-Net Architecture #Skip Connections #Gated Additive Fusion #Feature Dimensionality #Joint Optimization
📌 Key Takeaways
- Researchers developed Additive U-Net architecture for improved image denoising and multi-task learning
- The innovation replaces concatenative skip connections with gated additive fusion
- This approach constrains shortcut capacity while maintaining fixed feature dimensionality
- The architecture stabilizes joint optimization for multiple tasks
- The research shows promising results across single-task and multi-task scenarios
📖 Full Retelling
Researchers have developed a new neural network architecture called Additive U-Net (AddUNet) for image denoising and multi-task learning, as detailed in their paper published on arXiv on February 26, 2026. The innovation involves replacing traditional concatenative skip connections with gated additive fusion in U-Net architectures, which constrains shortcut capacity while maintaining consistent feature dimensionality throughout the network depth. This structural modification aims to improve information flow between encoder and decoder sections and stabilize joint optimization processes for multiple tasks. The research demonstrates how this architectural approach addresses fundamental challenges in multi-task learning scenarios where networks must balance competing objectives.
The Additive U-Net is aimed at applications that require simultaneous image denoising and classification. Because additive fusion sums encoder and decoder features instead of concatenating them, the channel count at each skip connection stays constant across network depth, and the learned gate acts as a structural regularizer on how much encoder information reaches the decoder. The paper documents experiments across single-task and multi-task scenarios, reporting promising denoising results while maintaining classification performance. The authors suggest this could benefit fields such as medical imaging and autonomous driving, where high-quality image processing is essential.
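The contrast between the two skip-connection styles can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the per-channel sigmoid gate and its placement are assumptions made for illustration, and a real network would learn the gate parameters and apply convolutions around the fusion point.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def concat_skip(decoder_feat, encoder_feat):
    # Concatenative skip: channel count doubles (C -> 2C), so the
    # next decoder layer must project it back down.
    return np.concatenate([decoder_feat, encoder_feat], axis=0)

def gated_additive_skip(decoder_feat, encoder_feat, gate_logits):
    # Gated additive fusion (illustrative): a per-channel gate scales
    # the encoder features before summing them into the decoder path.
    # Channel count stays at C, so feature dimensionality is fixed
    # across depth, and the gate constrains the shortcut's capacity.
    gate = sigmoid(gate_logits)[:, None, None]  # broadcast over H, W
    return decoder_feat + gate * encoder_feat

# Toy feature maps: C=4 channels, 8x8 spatial resolution
dec = np.random.randn(4, 8, 8)
enc = np.random.randn(4, 8, 8)
g = np.zeros(4)  # gate logits (learned parameters in a real network)

print(concat_skip(dec, enc).shape)             # (8, 8, 8) - channels doubled
print(gated_additive_skip(dec, enc, g).shape)  # (4, 8, 8) - channels fixed
```

With zero-initialized logits the gate passes half of each encoder channel; training would then adjust how much shortcut information flows per channel, which is the "constrained shortcut capacity" the abstract describes.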
🏷️ Themes
Neural Network Architecture, Computer Vision, Multi-Task Learning
Original Source
arXiv:2602.12649v1 Announce Type: cross
Abstract: We investigate additive skip fusion in U-Net architectures for image denoising and denoising-centric multi-task learning (MTL). By replacing concatenative skips with gated additive fusion, the proposed Additive U-Net (AddUNet) constrains shortcut capacity while preserving fixed feature dimensionality across depth. This structural regularization induces controlled encoder-decoder information flow and stabilizes joint optimization. Across single-t