Machine Learning Necessary for Deep Learning II


Machine Learning Necessary for Deep Learning

Generalization, Capacity, Parameters, HyperParameters & Bayesian Statistics

In the last article, we touched a bit on generalization.

This article will introduce the concepts needed to understand and answer the question:

What is the relationship between the generalization error and the training error?

Short refresher

Generalization is a machine learning algorithm's ability to produce good predictions on previously unseen inputs.

(Figure: generalization error and training error)

The red line represents the training error. Whether you read the horizontal axis as training time or as the capacity of the model, the training error gets smaller and smaller as you move to the right.

However this introduces the overfitting problem: the machine learning algorithm learns the training set so well that it doesn't generalize well to new data.

The yellow line represents the test, or generalization, error. You will notice that the gap between the red and yellow curves gets bigger as you move to the right.

I hope this graph helps you build intuition about these ideas when you're deep in the deep learning weeds.

Around that inflection point is where we want to be. The important question is: how do we control the algorithm to get to that area?

We turn the dials on the Capacity of the learning algorithm.

When a model has low capacity, it is not able to capture the true situation: it can't fit the data well enough to reach a low training error, so it tends to under-fit.

When a model has high capacity, it is able to capture the true situation, but it might also see patterns where there are none. Since we always assume some kind of noise in our signal, it might interpret that noise as signal. Here the learning algorithm will overfit, and the generalization error will be too large.
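To build intuition, here is a minimal sketch in plain numpy (the quadratic "true" process, the noise level, and the degrees are all made-up assumptions) that fits polynomials of increasing degree to noisy data. Typically the low-degree model under-fits, the matched one lands near the sweet spot, and the high-degree model pushes the training error down while the test error goes back up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: an assumed quadratic "true" process plus Gaussian noise.
true_fn = lambda x: x ** 2
x_train = np.linspace(-1, 1, 20)
x_test = np.linspace(-1, 1, 200)
y_train = true_fn(x_train) + rng.normal(scale=0.2, size=x_train.shape)
y_test = true_fn(x_test) + rng.normal(scale=0.2, size=x_test.shape)

for degree in (1, 2, 9):                            # low, matched, and high capacity
    coeffs = np.polyfit(x_train, y_train, degree)   # least-squares fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree={degree}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```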

What are some examples of the Capacity of a learning algorithm?

Hypothesis Space is one such metaphorical dial that you can turn to get what you want out of the algorithm. For linear regression, given a set of data points, we try to draw a line of best fit. Here our hypothesis space is of degree 1: all straight lines.

For any kind of curved line of best fit, such as a quadratic polynomial, where we need an extra squared term to explain the output, we would have a hypothesis space of degree 2.

Here it's important to give another way to interpret the hypothesis space: you can think of it as the set of all possible outcomes. Consider the polynomial y = x².

We know that its range is the non-negative reals and its domain is all real numbers, so we literally don't need half of the plane. Even though we know that, since we are using a function that lives in the plane, we still treat our hypothesis space as two-dimensional. Besides, at the time we choose our function we don't know the real shape of the data generating process; even if we did know it was x squared, we wouldn't know its location.

This goes for any function in the plane, ℝ².

Furthermore, this extends to any number of dimensions…

The next capacity type is the Representational Capacity. Staying inside the plane ℝ², you still have the choice to model the relationship with polynomials, trigonometric functions, logarithms, exponentials, and so on. This family of functions that you can choose from is called the representational capacity.

So, in summary, capacity is, loosely speaking, how complex a relationship your algorithm can model. It is roughly measured by the number of parameters the model takes.

In practice, most algorithms don't really try to find the mathematical function that best fits the data; instead, they just minimize the error with respect to their parameters.
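As a concrete illustration, here is a minimal sketch in plain numpy (the linear toy data and the hyperparameters are assumptions made for the example): we pick a simple parametric model and just push the mean squared error down with gradient descent, rather than searching for "the" true function:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data from an assumed linear process y = 2x + 1, plus noise.
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0          # parameters of the model y_hat = w*x + b
lr = 0.1                 # learning rate

for _ in range(500):
    err = w * x + b - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}  (the generating values were 2 and 1)")
```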

Bayes Error

Call it measurement error or noise: there is always some random error when you work with real-life data. Let's say you knew exactly the mathematical function behind the data generating process, or the true probability distribution. If you use this function to process the input data, you produce a set of predicted values ŷ,

and compare them with the real labels y.

There will still be a discrepancy. This error is irreducible, meaning you can never get rid of it. We call it the Bayes error.
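Here is a tiny numerical sketch of that idea (the sine function and the noise level are made-up assumptions): even when we predict with the exact function that generated the data, the noise leaves an error floor roughly equal to the noise variance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assume the true data generating process is y = sin(x) plus Gaussian noise.
noise_std = 0.3
x = rng.uniform(0, 2 * np.pi, size=10_000)
y = np.sin(x) + rng.normal(scale=noise_std, size=x.shape)

# Even the *true* function cannot predict the noise away.
y_hat = np.sin(x)
mse = np.mean((y_hat - y) ** 2)
print(f"MSE of the true function: {mse:.3f}  (noise variance is {noise_std ** 2:.3f})")
```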

Other Generalizations

These are generally true:

  • More training examples generally means a smaller generalization error
  • Even in the golden zone, where there is neither under-fitting nor overfitting, there is still a gap between the training error and the generalization error
  • The training error is typically smaller than the generalization error
  • The golden area is the area of optimal capacity: roughly, the complexity of your model matches the complexity of the real data generating process (see the sketch after this list)
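One common way to look for that golden area in practice is to hold out a validation set and pick the capacity, here the polynomial degree, with the lowest validation error. A minimal sketch with made-up data in plain numpy:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data from an assumed quadratic process; we hold out a validation set.
x = np.linspace(-3, 3, 60)
y = x ** 2 + rng.normal(scale=1.0, size=x.shape)
idx = rng.permutation(len(x))
train_idx, val_idx = idx[:40], idx[40:]

best_degree, best_val = None, np.inf
for degree in range(1, 10):
    coeffs = np.polyfit(x[train_idx], y[train_idx], degree)
    val_mse = np.mean((np.polyval(coeffs, x[val_idx]) - y[val_idx]) ** 2)
    if val_mse < best_val:
        best_degree, best_val = degree, val_mse

print(f"selected degree: {best_degree}  (validation MSE {best_val:.2f})")
```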

Regularization

But first, some theory. The No-Free-Lunch Theorem in Search & Optimization states that, for certain classes of mathematical problems, the performance of any solution method, averaged over all problems in the class, is the same.

Imagine an infinite set of problems, represented by a random scatter of dots. The red and yellow lines represent two different approaches to solving them. If you average the distances between the dots and the yellow line, you get some value; call it alpha.

Now do the same with the red line: find all the distances between the red line and the dots, average them, and call the result beta.

The NFL theorem says that these two values come out roughly the same; at least, that's how I would describe it visually. This is useful because we can think of the two lines as two different machine learning algorithms. Applied to machine learning, it basically says that no single machine learning algorithm is universally better than any other across all problems.

It just depends on the problem.
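Here is a toy sketch of that intuition, not the formal theorem (the inputs and the two prediction rules are invented for illustration): enumerate every possible labeling of a handful of inputs as the "set of all problems", and compare two fixed prediction rules. Averaged over all of the problems, they score exactly the same:

```python
import numpy as np
from itertools import product

# Treat every possible binary labeling of 4 inputs as a separate "problem".
inputs = np.arange(4)
all_problems = list(product([0, 1], repeat=len(inputs)))   # 2^4 = 16 labelings

algo_a = lambda x: 0        # always predict 0
algo_b = lambda x: x % 2    # predict the parity of the input

def accuracy(algo, labels):
    preds = [algo(x) for x in inputs]
    return np.mean([p == y for p, y in zip(preds, labels)])

acc_a = np.mean([accuracy(algo_a, labels) for labels in all_problems])
acc_b = np.mean([accuracy(algo_b, labels) for labels in all_problems])
print(f"average accuracy over all problems: A={acc_a:.2f}, B={acc_b:.2f}")   # both 0.50
```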

To me this means research into machine learning can't be about finding a single best algorithm. Finding the best algorithm for a well-chosen set of problems could be a feasible approach, and the researcher's job would be to pick that set of problems really well.

