Classical Neural Networks: What does a Loss Function Landscape look like?


Ever wondered what kind of topology we are optimising our neural networks on? Well, now you know!

Every neural network’s objective/loss function is to be minimized! But what does this loss function really look like? Today, we will be showing the loss function for two different neural networks (N1, N2: fig.1).

fig.1 (rights: own image)

The loss function family we will use when training is MSE (Mean Squared Error). Although other loss families might be interesting, we will stick with this one for the purpose of illustration.
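
As a reminder, MSE is simply the average of the squared differences between predictions and targets. A minimal NumPy version, for illustration only:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: the average of the squared prediction errors."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

# A perfect prediction gives 0; a constant offset of 1 gives 1.
print(mse([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))  # 0.0
print(mse([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```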

For those who are extra curious, we will be training the N2 neural network (the training part is not very interesting since, again, what we want is a landscape illustration) on this distribution (fig.2: and yes, I am too lazy to add noise).

fig.2 (rights: own image)

And the N1 neural network (fig.3):

fig.3 (rights: own image)
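
For concreteness, here is a minimal sketch of how such a setup could look in Keras. The exact target function of fig.2 and the exact architecture of N2 are not spelled out here, so the data, layer sizes and activation below are assumptions chosen purely for illustration:

```python
import numpy as np
from tensorflow import keras

# Noise-free toy data standing in for the distribution of fig.2
# (the target function here is an assumption, not the article's).
rng = np.random.default_rng(0)
x_train = rng.uniform(-2.0, 2.0, size=(2000, 2))
y_train = (np.sin(x_train[:, 0]) * np.cos(x_train[:, 1])).reshape(-1, 1)

# N2: an assumed small regression network with a 2-D input (x, y),
# trained with MSE as discussed above.
n2 = keras.Sequential([
    keras.Input(shape=(2,)),
    keras.layers.Dense(8, activation="tanh"),
    keras.layers.Dense(1),
])
n2.compile(optimizer="adam", loss="mse")
n2.fit(x_train, y_train, epochs=200, verbose=0)
```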

Loss function value as a function of the input (N2)

Let’s simply plot the loss function itself to begin with (fig.4).

fig.4 (rights: own image)
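
A surface like fig.4 can be produced by freezing the trained weights and evaluating the per-point squared error over a grid of inputs. The sketch below continues from the assumed setup above (same hypothetical target function, same n2):

```python
import numpy as np
import matplotlib.pyplot as plt

# Grid over the 2-D input space; the weights of n2 stay fixed throughout.
xs = np.linspace(-2.0, 2.0, 100)
ys = np.linspace(-2.0, 2.0, 100)
gx, gy = np.meshgrid(xs, ys)
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

# Per-point squared error of the trained network against the assumed targets.
targets = np.sin(grid[:, 0]) * np.cos(grid[:, 1])
preds = n2.predict(grid, verbose=0).ravel()
sq_err = (preds - targets) ** 2

plt.contourf(gx, gy, sq_err.reshape(gx.shape), levels=50)
plt.colorbar(label="squared error")
plt.xlabel("input x")
plt.ylabel("input y")
plt.title("Loss as a function of the input (weights fixed)")
plt.show()
```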

Remarks to be made:

  • We see that error values are particularly high around x=-2 and y in [-1,1].
  • Beyond showing what a loss function looks like, such an illustration can be useful for someone who wants to purposefully attack the network! For an adversary, this can be a first exploratory step.

Loss function value as a function of weights (N1)

Being able to see the loss as a function of the input is nice, but it is not exactly what people are most interested in. Seeing the landscape that is actually being optimised over is definitely better for crafting an architecture! As mentioned in my previous article, N1 has 7 scalar weights to optimise. Plotting a 7-dimensional surface would add very little to our understanding, so we will arbitrarily project onto two dimensions. Note that we fix the input, so that the only variables are the two chosen weights. (fig.5)

fig.5 (rights: own image)
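
A slice like fig.5 can be produced by fixing one data point, sweeping two of the seven weights over a grid and recomputing the loss at every grid point. The sketch below uses a self-contained NumPy forward pass for an assumed 1 → 2 → 1 tanh network (which does have exactly 7 scalar weights); the two weight indices and the fixed data point are arbitrary choices:

```python
import numpy as np
import matplotlib.pyplot as plt

# A 1 -> 2 -> 1 tanh network has exactly 7 scalar weights
# (assumed here to play the role of N1 from the previous article).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 2)), rng.normal(size=2)   # hidden layer
W2, b2 = rng.normal(size=(2, 1)), rng.normal(size=1)   # output layer

def predict(x, W1, b1, W2, b2):
    """Forward pass of the tiny network for a scalar input x."""
    h = np.tanh(x * W1[0] + b1)
    return float(h @ W2[:, 0] + b2[0])

# One fixed data point (an arbitrary choice, as in the article).
x0, y0 = 0.5, 0.3

# Sweep two arbitrarily chosen weights, keeping the other five fixed.
w_range = np.linspace(-4.0, 4.0, 120)
loss = np.zeros((len(w_range), len(w_range)))
for i, wa in enumerate(w_range):        # hidden weight W1[0, 0]
    for j, wb in enumerate(w_range):    # output weight W2[0, 0]
        W1v, W2v = W1.copy(), W2.copy()
        W1v[0, 0], W2v[0, 0] = wa, wb
        pred = predict(x0, W1v, b1, W2v, b2)
        loss[j, i] = (pred - y0) ** 2   # MSE on a single point

plt.contourf(w_range, w_range, loss, levels=50)
plt.colorbar(label="MSE on one data point")
plt.xlabel("hidden weight W1[0, 0]")
plt.ylabel("output weight W2[0, 0]")
plt.title("Loss projected onto two weights (fixed input)")
plt.show()
```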

This is the landscape for one data point. Several things are worth noting:

  • If we were to optimise (find the minimum) over this plot, i.e. over the two arbitrarily picked weights, then the loss for this one data point would obviously diminish.
  • To minimize such a function, a simple gradient descent would be enough; not even SGD would be needed, since the surface is strictly convex. However, because it has a plateau, we would want momentum rather than plain gradient descent, so that the steps stay large enough where the gradient is small (see the sketch just after this list).
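
To make the plateau point concrete, here is a tiny 1-D toy, a stand-in for a flat region rather than the actual N1 surface: with the same learning rate and number of steps, plain gradient descent barely moves because the gradient is small everywhere, while momentum accumulates velocity and ends much closer to the minimum:

```python
def grad(w):
    # Gradient of the toy "plateau" f(w) = 0.005 * w**2: shallow everywhere.
    return 0.01 * w

lr, beta, steps = 0.1, 0.9, 500

# Plain gradient descent.
w_gd = 1.0
for _ in range(steps):
    w_gd -= lr * grad(w_gd)

# Gradient descent with heavy-ball momentum.
w_mom, v = 1.0, 0.0
for _ in range(steps):
    v = beta * v + grad(w_mom)
    w_mom -= lr * v

print(f"plain GD ends at w = {w_gd:.3f}")   # roughly 0.6: barely moved
print(f"momentum ends at w = {w_mom:.3f}")  # close to the minimum at 0
```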

Such a plot can be made for multiple data points when using MSE as the loss metric. Picturing the loss of a general neural network is harder because of the number of weights, but this can be a way to avoid randomly trying optimisation algorithms when training, and instead to understand the underlying data model that you want to approximate. I hope again that this helps people understand that Machine Learning is not black magic and truly requires analysis! Hyperparameter search does not have to be random trials.

