Batch normalization
Method of improving artificial neural network
📊 Rating
1 news mention · 👍 0 likes · 👎 0 dislikes
📌 Topics
- Artificial Intelligence (1)
- Machine Learning (1)
- Reinforcement Learning (1)
🏷️ Keywords
PPO (1) · Batch Normalization (1) · Reinforcement Learning (1) · arXiv (1) · Neural Networks (1) · Policy Optimization (1) · Algorithm Stability (1)
📖 Key Information
In artificial neural networks, batch normalization (also known as batch norm) is a normalization technique used to make training faster and more stable by normalizing the inputs to each layer: re-centering them to zero mean and re-scaling them to unit variance. It was introduced by Sergey Ioffe and Christian Szegedy in 2015.
Experts still debate why batch normalization works so well.
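As a rough illustration of the re-centering and re-scaling described above, here is a minimal NumPy sketch of the batch-norm transform in training mode. The function name, the epsilon value, and the learnable gamma/beta parameters are illustrative assumptions, not any specific library's API.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Minimal sketch of batch normalization (training mode).

    x     : (batch_size, features) activations entering a layer
    gamma : (features,) learnable scale, illustrative placeholder
    beta  : (features,) learnable shift, illustrative placeholder
    """
    mu = x.mean(axis=0)                    # re-center: per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # re-scale to roughly unit variance
    return gamma * x_hat + beta            # learnable scale and shift

# Example: normalize a small, badly scaled batch
x = np.random.randn(32, 4) * 3.0 + 5.0
y = batch_norm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))  # ~0 mean, ~1 std per feature
```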
📰 Related News (1)
- 🇺🇸 Mode-Dependent Rectification for Stable PPO Training
arXiv:2602.05619v1 Announce Type: cross Abstract: Mode-dependent architectural components (layers that behave differently during training and evaluat...
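The abstract above refers to layers that behave differently during training and evaluation; batch normalization is the classic example, since it normalizes with batch statistics during training but with running averages at inference. The sketch below shows that mode switch under assumed simplifications (a fixed momentum value and a plain exponential-average update); it is not the method proposed in the linked paper.

```python
import numpy as np

class SimpleBatchNorm:
    """Sketch of the train/eval mode split in batch normalization.

    Training mode uses per-batch statistics; evaluation mode reuses
    exponential running averages. The momentum value is an assumption.
    """
    def __init__(self, features, eps=1e-5, momentum=0.1):
        self.gamma = np.ones(features)
        self.beta = np.zeros(features)
        self.running_mean = np.zeros(features)
        self.running_var = np.ones(features)
        self.eps, self.momentum = eps, momentum

    def __call__(self, x, training=True):
        if training:
            mu, var = x.mean(axis=0), x.var(axis=0)
            # update running statistics for later use at evaluation time
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mu
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            mu, var = self.running_mean, self.running_var
        x_hat = (x - mu) / np.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta

# Same input, different output depending on mode
bn = SimpleBatchNorm(4)
x = np.random.randn(16, 4)
y_train = bn(x, training=True)   # uses this batch's statistics
y_eval = bn(x, training=False)   # uses accumulated running statistics
```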
🔗 Entity Intersection Graph
Entities frequently mentioned alongside Batch normalization:
- 🌐 Neural network (1 shared article)
- 🌐 Reinforcement learning (1 shared article)
- 🌐 PPO (1 shared article)