Speed-up inference with Batch Normalization Folding
Introduction
Batch Normalization is a technique that normalizes the input of each layer to make the training process faster and more stable. In practice, it is an extra layer that we generally add after the computation layer and before the non-linearity.
It consists of 2 steps:
- Normalize the batch by first subtracting its mean μ, then dividing it by its standard deviation σ.
- Further scale by a factor γ and shift by a factor β. These are the learnable parameters of the batch normalization layer, needed in case the network does not require the data to have a mean of 0 and a standard deviation of 1 (the full transformation is written out below).
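Putting the two steps together, and adding the small constant ε that implementations include for numerical stability, the layer computes for an input x:

y = γ · (x − μ) / √(σ² + ε) + β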
Because it makes training neural networks faster and more stable, batch normalization is now widely used. But how useful is it at inference time?
Once training has ended, each batch normalization layer holds a fixed set of γ and β, but also of μ and σ, the latter two being estimated with an exponentially weighted (moving) average during training. It means that during inference, batch normalization acts as a simple linear transformation of what comes out of the previous layer, often a convolution.
Since a convolution is itself a linear transformation, both operations can be merged into a single linear transformation!
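Concretely, suppose the previous layer computes z = W·x + b (a convolution or a fully connected layer). Feeding z through the batch normalization above gives

γ · (W·x + b − μ) / √(σ² + ε) + β

which is again of the form W_fold·x + b_fold, with

W_fold = γ · W / √(σ² + ε)
b_fold = γ · (b − μ) / √(σ² + ε) + β

where the scaling by γ / √(σ² + ε) is applied per output channel.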
This removes some now-unnecessary parameters and also reduces the number of operations to be performed at inference time.
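As an illustration, here is a minimal PyTorch sketch of this folding for a Conv2d followed by a BatchNorm2d. The helper name fold_conv_bn is chosen here for the example, not a library function:

```python
import torch
import torch.nn as nn

def fold_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d into the preceding Conv2d.

    Returns a new Conv2d whose output matches conv followed by bn
    (in eval mode), so the BN layer can be dropped at inference time.
    """
    fused = nn.Conv2d(
        conv.in_channels,
        conv.out_channels,
        kernel_size=conv.kernel_size,
        stride=conv.stride,
        padding=conv.padding,
        dilation=conv.dilation,
        groups=conv.groups,
        bias=True,
    )

    # Parameters frozen at the end of training
    gamma, beta = bn.weight, bn.bias
    mean, var, eps = bn.running_mean, bn.running_var, bn.eps

    # Per-output-channel scale factor: gamma / sqrt(var + eps)
    scale = gamma / torch.sqrt(var + eps)

    # W_fold: scale each output-channel filter of the convolution
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)

    # b_fold: pass the old bias (or 0) through the BN transformation
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(mean)
    fused.bias.data = (conv_bias - mean) * scale + beta

    return fused
```

A quick sanity check, with both layers in eval mode so that the running statistics are used:

```python
conv, bn = nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16)
conv.eval(); bn.eval()
x = torch.randn(1, 3, 32, 32)
fused = fold_conv_bn(conv, bn)
assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)
```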