Classical Neural Networks: What does a Loss Function Landscape look like?


Ever wondered what kind of topology we are optimising our neural networks on? Well, now you know!

Every neural network’s objective/loss function is to be minimized! But what does this loss function really look like? Today, we will be showing the loss function for two different neural networks (N1 and N2: fig.1).

fig.1 (rights: own image)

The family of loss functions we will use when training is MSE (Mean Squared Error). Although other loss function families might be interesting, we will stick with this one for the purpose of illustration.
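For reference, here is a minimal sketch of the MSE computation (NumPy; the function name is my own, not from the article):

```python
import numpy as np

def mse(y_pred, y_true):
    """Mean Squared Error: the mean of the squared residuals."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    return np.mean((y_pred - y_true) ** 2)

# Example: predictions [1.0, 2.0] against targets [0.5, 2.5]
# give (0.25 + 0.25) / 2 = 0.25
print(mse([1.0, 2.0], [0.5, 2.5]))  # 0.25
```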

For readers who are extra curious, we will be training the N2 neural network (the training part is not very interesting since, again, what we want is a landscape illustration) on this distribution (fig.2; and yes, I am too lazy to add noise):

fig.2 (rights: own image)

And the N1 neural network on this one (fig.3):

fig.3 (rights: own image)

Loss function value as a function of the input (N2)

Let’s simply plot the loss function itself to begin with (fig.4).

fig.4 (rights: own image)

Remarks to be made:

  • We see that error values are particularly high around x = -2 and y in [-1, 1].
  • Besides showing what a loss function looks like, such an illustration can be useful for someone who wants to purposefully attack the network! For an adversary, this can be a first exploratory step.
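To make this concrete, here is a minimal sketch of how such an input-space loss map could be produced. Everything below is an assumption for illustration: N2 is stood in for by a tiny 2-4-1 tanh MLP whose fixed random weights play the role of trained weights, and the target is a hypothetical smooth function of (x, y); the real architecture and data are those of fig.1 and fig.2.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for a trained N2: a 2-4-1 tanh MLP with fixed weights (assumption).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 1)), rng.normal(size=1)

def n2(xy):                       # xy: (n, 2) -> (n,)
    h = np.tanh(xy @ W1 + b1)
    return (h @ W2 + b2).ravel()

def target(xy):                   # hypothetical ground-truth function
    return np.sin(xy[:, 0]) * np.cos(xy[:, 1])

# Evaluate the squared error on a grid of inputs, with the weights held fixed.
xs = np.linspace(-3, 3, 200)
ys = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(xs, ys)
grid = np.column_stack([X.ravel(), Y.ravel()])
loss = ((n2(grid) - target(grid)) ** 2).reshape(X.shape)

plt.contourf(X, Y, loss, levels=30)
plt.colorbar(label="squared error")
plt.xlabel("x"); plt.ylabel("y")
plt.title("Loss as a function of the input (fixed weights)")
plt.show()
```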

Loss function value as a function of weights (N1)

Being able to see the loss function as a function of the input is nice, but not exactly what people would be most interested in. Seeing the landscape over which we optimise is definitely better for crafting an architecture! Now, as mentioned in my previous article, N1 has 7 scalar weights to optimise. Plotting a 7-dimensional surface would be of very little use for our understanding, so we will arbitrarily project onto two dimensions. Note that we fix the input, so that the only variables are the two weights (fig.5).

fig.5 (rights: own image)

This is the landscape for one data point. Several things are worth noting:

  • If we were to optimise (find the minimum) over this plot, i.e. over the two arbitrarily picked weights, then the loss for this one data point would obviously decrease.
  • To minimize such a function, a simple gradient search would be enough; not even SGD would be needed, since the surface is strictly convex. However, because it has a plateau, some momentum-based descent would be preferable to plain gradient descent, in order to keep a large enough step along the gradient direction.
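To make the projection above concrete, here is a minimal sketch of how such a weight-space slice could be plotted. The details are assumptions for illustration: N1 is stood in for by a 1-2-1 tanh MLP (which does have 7 scalar weights), and the single data point and the two swept weights are picked arbitrarily; the real N1 is the one from fig.1.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for N1: a 1-2-1 tanh MLP -> 7 scalar weights (assumption).
rng = np.random.default_rng(1)
w = rng.normal(size=7)              # [W1 (2), b1 (2), W2 (2), b2 (1)]
x0, y0 = 0.5, 0.3                   # hypothetical single training point

def n1(x, w):
    h = np.tanh(x * w[0:2] + w[2:4])    # hidden layer, 2 units
    return h @ w[4:6] + w[6]            # scalar output

def point_loss(w):
    return (n1(x0, w) - y0) ** 2        # squared error on the one point

# Sweep w[0] and w[4]: an arbitrary 2-D projection of the 7-D weight space,
# with all other weights (and the input) held fixed.
a_vals = np.linspace(-4, 4, 150)
b_vals = np.linspace(-4, 4, 150)
A, B = np.meshgrid(a_vals, b_vals)
L = np.empty_like(A)
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        w_ij = w.copy()
        w_ij[0], w_ij[4] = A[i, j], B[i, j]
        L[i, j] = point_loss(w_ij)

plt.contourf(A, B, L, levels=30)
plt.colorbar(label="squared error")
plt.xlabel("weight w[0]"); plt.ylabel("weight w[4]")
plt.title("Loss as a function of two weights (one data point)")
plt.show()
```

The same loop works for any pair of weight indices; only the projection changes.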

Such a plot can be produced for multiple data points when using MSE as the loss metric. Picturing losses in a general neural network is harder because of the number of weights, but this approach can be a way to avoid randomly trying optimisation algorithms when training, and instead understand the underlying data model you want to approximate. I hope this again helps people understand that Machine Learning is not black magic and truly requires analysis! Hyperparameter search does not have to be random trials.
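Since MSE is just the average of per-point squared errors, the landscape over a whole dataset is simply the pointwise mean of the per-point surfaces computed above. A minimal sketch (names and shapes are my own):

```python
import numpy as np

def mse_surface(per_point_surfaces):
    """With MSE, the loss surface over a dataset is the average of the
    per-data-point squared-error surfaces (all on the same grid)."""
    return np.mean(np.stack(per_point_surfaces), axis=0)

# e.g. three hypothetical 150x150 per-point surfaces -> one averaged surface
rng = np.random.default_rng(2)
surfaces = [rng.random((150, 150)) for _ in range(3)]
print(mse_surface(surfaces).shape)  # (150, 150)
```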

