Ridge, Lasso and Elastic Net Regularization methods
These Regularization methods are used to overcome the problem of over-fitting in Linear Regression models
Jun 7 · 3 min read
One of the major problems with the Linear Regression algorithm is that it tends to over-fit the training set, so when accuracy is checked on the test set it performs very poorly. To penalize large weights (slopes), Regularization methods are used.
Ridge Regression (L2 Regularization)
Ridge Regression's cost function is given by

J(θ) = MSE(θ) + α Σᵢ θᵢ²
We have already seen the first part of this equation, MSE(θ), as the Cost Function in Linear Regression. In Ridge Regression one more term is added to the Cost Function: the Regularization term.
‘θ’ denotes the weights (slopes) applied to the features in the dataset. As the magnitudes of the weights increase, the model is more likely to over-fit; if we visualize the fitted curve, it turns into a series of sharp peaks and valleys instead of a smooth trend.
The term ‘α’ penalizes weights of high magnitude. This hyper-parameter controls how much you want to regularize the model: the higher the value of α, the flatter the fitted curve becomes, which leads to under-fitting. On the other hand, a lower value of α leaves the model prone to over-fitting. For α = 0, the Cost Function J is simply equal to MSE(θ). Therefore, α must be tuned to get the best out of Ridge Regression.
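The effect of α can be seen in a few lines with scikit-learn (a sketch on synthetic data, not the article's original example; the feature weights and noise level are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic data: 100 samples, 3 features with known true weights.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# alpha is the regularization strength: higher alpha shrinks weights harder.
weak = Ridge(alpha=0.01).fit(X, y)
strong = Ridge(alpha=100.0).fit(X, y)

print("alpha=0.01  ->", weak.coef_)
print("alpha=100.0 ->", strong.coef_)  # noticeably shrunk toward zero
```

The larger α pulls every coefficient toward zero, which is exactly the flattening (and, taken too far, under-fitting) described above.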
Lasso Regression (L1 Regularization)
Lasso (Least Absolute Shrinkage and Selection Operator) Regression's cost function is given by

J(θ) = MSE(θ) + α Σᵢ |θᵢ|
In Lasso, the absolute magnitude of each weight is penalized, unlike the squared weights in Ridge Regression. The hyper-parameter ‘α’ serves the same purpose as in Ridge. An important characteristic of Lasso Regression is that it tends to completely eliminate the weights of the least important features (i.e., set them to zero). In other words, it automatically performs feature selection and outputs a sparse model (i.e., one with few nonzero feature weights).
If Lasso gives slightly lower accuracy, that is often acceptable: whereas Linear and Ridge Regression keep all n features, Lasso effectively uses fewer of them, and trading a very small drop in accuracy for a simpler model is usually a good deal when building a production model.
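Lasso's feature-selection behaviour is easy to demonstrate (again a hedged sketch on synthetic data, where only two of ten features actually matter):

```python
import numpy as np
from sklearn.linear_model import Lasso

# 10 features, but only the first two influence y.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

lasso = Lasso(alpha=0.1).fit(X, y)

print("coefficients:", np.round(lasso.coef_, 3))
print("zeroed-out features:", int(np.sum(lasso.coef_ == 0)))
```

The eight irrelevant features end up with weights of exactly zero, producing the sparse model described above.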
Elastic Net
Elastic Net is a middle ground between Ridge Regression and Lasso Regression. The regularization term is a simple mix of both Ridge's and Lasso's regularization terms, and you can control the mix ratio r:

J(θ) = MSE(θ) + r α Σᵢ |θᵢ| + (1 − r) α Σᵢ θᵢ²

When r = 0, Elastic Net is equivalent to Ridge Regression, and when r = 1, it is equivalent to Lasso Regression.
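In scikit-learn, the mix ratio r is exposed as the `l1_ratio` parameter of `ElasticNet` (the data below is synthetic and the α/r values are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# 5 features; only the 1st and 4th carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ np.array([2.0, 0.0, 0.0, 1.0, 0.0]) + rng.normal(scale=0.1, size=100)

# l1_ratio plays the role of r: 0 -> pure Ridge penalty, 1 -> pure Lasso penalty.
enet = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X, y)

print("coefficients:", np.round(enet.coef_, 3))
```

With an even mix, Elastic Net both shrinks the weights (the Ridge part) and can zero out the useless ones (the Lasso part).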
Conclusion
So what should we choose for our project: plain Linear Regression (i.e., without any Regularization), Ridge, or Lasso?
It is always advisable to use some amount of Regularization to eliminate over-fitting in Linear Regression. So it boils down to choosing between Ridge and Lasso.
Ridge is a good default, but if you suspect that only a few features are actually useful, you should prefer Lasso or Elastic Net since they tend to reduce the useless features’ weights down to zero as we have discussed. In general, Elastic Net is preferred over Lasso since Lasso may behave erratically when the number of features is greater than the number of training instances or when several features are strongly correlated.
Got any questions?
Email: amarmandal2153@gmail.com
Thank youuuu…