Machine Learning Necessary for Deep Learning II


Machine Learning Necessary for Deep Learning

Generalization, Capacity, Parameters, HyperParameters & Bayesian Statistics

In the last article, we touched a bit on generalization.

This article will introduce the concepts you need in order to understand and answer the question:

What is the relationship between the generalization error and the training error?

Short refresher

Generalization is a machine learning algorithm's ability to produce good predictions on previously unseen inputs.
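If it helps to see that in code, here's a minimal sketch in Python (the data is synthetic and made up just for this illustration): fit on one portion of the data, then estimate generalization by measuring the error on a held-out portion the model never saw.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: a noisy linear relationship.
x = rng.uniform(-3, 3, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=200)

# Hold out the last 25% of the points; the model never sees them while fitting.
split = int(0.75 * len(x))
x_train, x_test = x[:split], x[split:]
y_train, y_test = y[:split], y[split:]

# Fit a straight line on the training split only.
w, b = np.polyfit(x_train, y_train, deg=1)

def mse(xs, ys):
    return np.mean((np.polyval([w, b], xs) - ys) ** 2)

print("training error          :", mse(x_train, y_train))
print("generalization estimate :", mse(x_test, y_test))  # error on unseen inputs
```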

[Figure: generalization and training error]

The red line represents the training error. If the horizontal axis is training time or model capacity, depending on how you like to think about it, then as we move along it this training error gets smaller and smaller.

However, this introduces an overfitting problem: the machine learning algorithm learns the training set so well that it doesn't generalize to new data.

The yellow line represents the test, or generalization, error. You will notice that the delta between red and yellow gets bigger the further along you go.

I hope this graph helps you build intuition about these ideas when you're deep in deep learning work.

Around that inflection point is where we want to be. The important question is: how do we control the algorithm to get to that area?

We turn the dials on the Capacity of the learning algorithm.

When a model has low capacity, it is not able to capture the true situation: it can't fit the data, it can't reach a low enough training error, and it tends to under-fit.

When a model has high capacity, it is able to capture the true situation, but it might also see patterns where there are none. Since we always assume some kind of noise in our data, it might interpret that noise as signal. Here the learning algorithm will overfit, and therefore the generalization error will be too large.
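To make the capacity dial concrete, here's a rough sketch in Python (again with synthetic data I invented for illustration) that sweeps the degree of a polynomial fit. The training error should keep shrinking as capacity grows, while the held-out error should eventually turn back up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "true" process: a cubic plus noise.
x = np.sort(rng.uniform(-2, 2, size=60))
y = x**3 - x + rng.normal(scale=0.4, size=x.size)

x_train, y_train = x[::2], y[::2]    # every other point for training
x_test,  y_test  = x[1::2], y[1::2]  # the rest held out

for degree in (1, 3, 9, 15):         # turning the capacity dial
    coeffs = np.polyfit(x_train, y_train, deg=degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err  = np.mean((np.polyval(coeffs, x_test)  - y_test)  ** 2)
    print(f"degree {degree:2d}   train {train_err:.3f}   test {test_err:.3f}")

# Expected pattern: degree 1 under-fits (both errors high), degree 3 sits near
# the golden zone, and the highest degrees over-fit (tiny training error,
# growing held-out error). NumPy may warn about conditioning at high degree,
# which is itself a hint that the capacity is excessive here.
```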

What are some examples of the Capacity of a learning algorithm?

Hypothesis Space is one such metaphorical dial that you can turn to get what you want out of the algorithm. For a linear regression, given a set of data points, we try to draw a line of best fit. Here our hypothesis space has dimension 1.

For any kind of curved line of best fit, such as a polynomial, where we need two coordinates to explain the output, we would have a hypothesis space of dimension 2.
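In code, choosing a hypothesis space mostly means deciding which family of curves the fit is allowed to pick from. A tiny sketch (the data points are made up, nothing here comes from the article):

```python
import numpy as np

# Made-up data points (x, y).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 4.2, 8.8, 17.3])

# Hypothesis space A: straight lines      y = w*x + b           (2 coefficients)
line = np.polyfit(x, y, deg=1)

# Hypothesis space B: quadratic curves    y = a*x**2 + b*x + c  (3 coefficients)
quad = np.polyfit(x, y, deg=2)

print("best line in space A :", line)
print("best curve in space B:", quad)
```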

Here it's important to give another way to interpret the hypothesis space. You can think of the hypothesis space as the set of all possible outcomes. Consider the polynomial y = x².

We know that its range is non-negative and its domain is the set of all real numbers, so we literally don't need half the plane. Still, since we are using a function that lives in the plane, we pretend that our hypothesis space has dimension 2. Besides, at the time of choosing our function we don't know the real space or shape of the real data generating process; even if we did know it was x squared, we wouldn't know its location.

This goes for any function in the plane, ℝ².

Furthermore, this extends to all higher dimensions…

The next capacity type is the Representational Capacity. Inside the plane ℝ², you still have the choice to model the relationship with polynomials, trigonometric functions, logs, exponentials, etc. This family of functions that you can choose from is called the Representational Capacity.

So in summary, capacity is, loosely speaking, how complex a relationship your algorithm can model. It's roughly measured by the number of parameters the model takes in.

In practice, most algorithms don't really try to find the mathematical function that best fits the data; instead, they just minimize the error.
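Here's what "just minimizing the error" tends to look like in practice, as a hedged sketch with plain NumPy and made-up data: gradient descent on the mean squared error of a straight line. There's no search over families of functions, only small nudges to two parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x - 0.5 + rng.normal(scale=0.1, size=100)   # made-up target

w, b = 0.0, 0.0      # start from an arbitrary line
lr = 0.1             # learning rate

for _ in range(500):
    err = (w * x + b) - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print("recovered parameters:", w, b)   # should land near 3.0 and -0.5
```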

Bayes Error

Call it measurement error or noise: there is always some random error when you work with real-life data. Let's say you knew exactly the mathematical function behind the true data generating process, or its probability distribution. If you use this function to process the input data and produce a set of predicted values ŷ, and compare them with the real labels y, there will still be a discrepancy. This error is irreducible, meaning you can never get rid of it. This we call the Bayes Error.
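A tiny sketch of the idea (synthetic data, with a sine function I picked arbitrarily to play the role of the true process): even the perfect model, the very function that generated the data, can't score better than the noise floor:

```python
import numpy as np

rng = np.random.default_rng(3)

def true_f(x):
    return np.sin(x)                  # pretend this is the real generating process

x = rng.uniform(0.0, 2.0 * np.pi, size=100_000)
y = true_f(x) + rng.normal(scale=0.3, size=x.size)   # labels carry noise

y_hat = true_f(x)                     # predictions from the *true* function
mse = np.mean((y_hat - y) ** 2)

print("MSE of the perfect model:", mse)   # about 0.09, i.e. the noise variance 0.3**2
# No learning algorithm can reliably beat this floor: that's the Bayes error.
```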

Other Generalizations

These are generally true:

  • More training examples generally shrink the gap between the training error and the generalization error
  • Even in the golden zone, where there is neither under-fitting nor overfitting, there is still a discrepancy between the training error and the generalization error
  • The training error is, in expectation, smaller than or equal to the generalization error
  • The golden zone is the region of optimal capacity: generally, the complexity of your model matches the complexity of the real data generating process

Regularization

But first some theory. The No-Free-Lunch Theorem in Search & Optimization states that, for certain classes of mathematical problems, the cost of finding a solution, averaged over all problems in the class, is the same for every solution method.

Imagine an infinite set of problems, represented by this random scatter of dots. The red and yellow lines represent two different approaches to problem solving. If you average the distances between the problems and the yellow line, you get some value, let's say alpha.

Now you do the same with the red line: find all the distances between the red line and the dots, and average those values to get beta.

The NFL theorem states that these values are roughly the same, at least that's how I can describe it visually. This is useful because we can think of the lines as two different machine learning algorithms. In the subdomain of machine learning, this basically says that no single machine learning algorithm is universally better than any other across all problems.

It just depends on the problem.
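This isn't the theorem itself (the real statement averages over all possible problems), but here's a toy sketch that captures the flavour: two fixed "algorithms", two made-up problems, and each algorithm wins on one and loses on the other:

```python
import numpy as np

rng = np.random.default_rng(4)

def held_out_mse(x, y, degree):
    """Fit a polynomial of the given degree on half the points, score on the other half."""
    x_tr, y_tr = x[::2], y[::2]
    x_te, y_te = x[1::2], y[1::2]
    coeffs = np.polyfit(x_tr, y_tr, deg=degree)
    return np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)

x = np.sort(rng.uniform(-2, 2, size=60))

# Two made-up problems with very different generating processes.
problems = {
    "straight-line target": 1.5 * x + rng.normal(scale=0.3, size=x.size),
    "wiggly target":        np.sin(3 * x) + rng.normal(scale=0.3, size=x.size),
}

# Two fixed "algorithms": a low-capacity fit and a high-capacity fit.
algorithms = {"degree-1 fit": 1, "degree-9 fit": 9}

for algo_name, degree in algorithms.items():
    scores = [held_out_mse(x, y, degree) for y in problems.values()]
    for problem_name, score in zip(problems, scores):
        print(f"{algo_name} on {problem_name}: held-out MSE {score:.3f}")
    print(f"{algo_name} averaged over both problems: {np.mean(scores):.3f}\n")

# Expected: the line wins on the straight-line target, the degree-9 fit wins on
# the wiggly one, and neither algorithm dominates across both problems.
```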

To me this means that research into machine learning can't be about finding a single best algorithm. Finding the best algorithm for a particular set of problems could be a feasible approach, and the researcher's job would then be to pick that set of problems well.

