If Rectified Linear Units Are Linear, How Do They Add Nonlinearity?

Category: IT Tech · Published: 5 years ago

One may be inclined to point out that ReLUs cannot extrapolate; that is, a series of ReLUs fitted to resemble a sine wave on the interval -4 < x < 4 will not be able to continue the sine wave for values of x outside those bounds. It's important to remember, however, that the goal of a neural network is not to extrapolate but to generalize. Consider, for instance, a model fitted to predict house price based on number of bathrooms and number of bedrooms. It doesn't matter if the model struggles to carry the pattern to negative numbers of bathrooms or to bedroom counts exceeding five hundred, because that is not the model's objective. (You can read more about generalization vs. extrapolation here.)

The strength of the ReLU function lies not in itself, but in an entire army of ReLUs. This is why using a few ReLUs in a neural network does not yield satisfactory results; instead, there must be an abundance of ReLU activations to allow the network to construct an entire map of points. In multi-dimensional space, rectified linear units combine to form complex polyhedra along the class boundaries.
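The way a handful of shifted ReLUs combine into a piecewise-linear shape can be sketched in a few lines of NumPy. The coefficients below are chosen by hand purely for illustration, not learned; a trained network would discover its own breakpoints and slopes:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Three ReLUs combine into a "hat" bump: zero outside [0, 2],
# rising to 1 at x = 1. Each ReLU contributes a kink (a change
# of slope) at its breakpoint: slopes +1, then -1, then 0.
def hat(x):
    return relu(x) - 2 * relu(x - 1) + relu(x - 2)

xs = np.array([-1.0, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0])
print(hat(xs))  # [0.  0.  0.5 1.  0.5 0.  0. ]
```

Stack enough of these bumps at different positions and scales and you can trace out any curve to arbitrary precision, which is exactly what a wide ReLU layer learns to do.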

Here lies the reason why ReLU works so well: when there are enough of them, they can approximate any function just as well as other activation functions like sigmoid or tanh, much like building a curved surface out of hundreds of Legos, and without the downsides. There are several issues with smooth-curve functions that do not occur with ReLU. One is that computing the derivative, the rate of change that drives gradient descent, is much cheaper for ReLU than for any smooth-curve function.
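The cost difference is easy to see side by side. In this sketch (function names are my own), the sigmoid gradient needs an exponential, a division, and a multiply, while the ReLU gradient is a single comparison:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)              # exponential + division...
    return s * (1.0 - s)        # ...plus a multiply

def relu_grad(x):
    return (x > 0).astype(x.dtype)  # just a comparison: 1 if x > 0, else 0

x = np.array([-2.0, 0.0, 2.0])
print(relu_grad(x))     # [0. 0. 1.]  (by convention, 0 at x = 0)
print(sigmoid_grad(x))  # peaks at 0.25 when x = 0
```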

Another is that sigmoid and similar curves suffer from the vanishing gradient problem: the derivative of the sigmoid function flattens out for large absolute values of x. Early in training, the distributions of inputs may shift heavily away from 0, and in those flat regions the derivative becomes so small that almost no useful information can be backpropagated to update the weights. This is often a major problem in neural network training.
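A quick numerical check makes the problem concrete. The sigmoid derivative never exceeds 0.25, and even that best case, multiplied through a stack of layers by the chain rule, shrinks toward zero:

```python
import numpy as np

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

# The derivative peaks at 0.25 and collapses for large |x|:
for x in (0.0, 2.0, 5.0, 10.0):
    print(x, sigmoid_grad(x))

# Even chaining the *best case* 0.25 through 10 layers:
print(0.25 ** 10)  # ~9.5e-07 -- the gradient has all but vanished
```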

On the other hand, the derivative of the ReLU function is simple; it's the slope of whichever line segment the input falls on. It will reliably return a useful gradient, and while the fact that the derivative is 0 for x < 0 may sometimes lead to a 'dead neuron' problem, ReLU has still been shown to be, in general, more powerful than not only curved functions (sigmoid, tanh) but also ReLU variants attempting to solve the dead neuron problem, like Leaky ReLU.
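The difference between the two gradients can be sketched in a couple of lines; the leaky slope of 0.01 below is a common default, chosen here for illustration:

```python
import numpy as np

def relu_grad(x):
    return (x > 0).astype(float)

def leaky_relu_grad(x, alpha=0.01):
    # Same as ReLU for positive inputs, but a small nonzero
    # slope for negative inputs, so the neuron can never "die".
    return np.where(x > 0, 1.0, alpha)

x = np.array([-3.0, -0.5, 0.5, 3.0])
print(relu_grad(x))        # [0. 0. 1. 1.] -- negative inputs get no gradient
print(leaky_relu_grad(x))  # [0.01 0.01 1. 1.] -- a trickle survives
```

A neuron whose inputs are always negative under plain ReLU receives zero gradient forever, which is the dead neuron problem the leaky variant is meant to address.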

ReLU is designed to work in abundance; with heavy volume it approximates well, and with good approximation it performs just as well as any other activation function, without the downsides.

