Batch vs Stochastic Gradient Descent

Learn the difference between Batch and Stochastic Gradient Descent and choose the best variant for your model.

May 31 · 4 min read

Before diving into Gradient Descent, we'll look at how a Linear Regression model deals with its cost function. The whole point of reaching the global minimum is to minimize the cost function, which is given by:

$$J(\theta_0, \theta_1) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$$

Here, the hypothesis $h_\theta(x) = \theta_0 + \theta_1 x$ represents the linear equation, where $\theta_0$ is the bias (also known as the intercept) and $\theta_1$ is the weight (slope) given to the feature $x$; $m$ is the number of training examples and $y^{(i)}$ is the target value for the $i$-th example.
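
To make this concrete, here is a minimal NumPy sketch of the cost function above; the function name and the toy data are illustrative, not from the original post:

```python
import numpy as np

def cost(theta0, theta1, x, y):
    """Mean squared error cost J(theta0, theta1) for simple linear regression."""
    m = len(x)                          # number of training examples
    predictions = theta0 + theta1 * x   # hypothesis h_theta(x) for every example
    return np.sum((predictions - y) ** 2) / (2 * m)

# Toy data generated from y = 2x, so theta0 = 0, theta1 = 2 gives zero cost.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(cost(0.0, 2.0, x, y))  # 0.0   — perfect fit
print(cost(0.0, 1.0, x, y))  # ~2.33 — predictions underestimate y
```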

Fig. 1: Gradient Descent taking successive steps down the cost curve toward the global minimum.

The weights and intercept are randomly initialized, and the point then takes baby steps toward the minimum. An important parameter in Gradient Descent is the size of those steps, which is determined by the learning-rate hyperparameter. Note that if we set a high learning rate, the point will take large steps and will probably never reach the global minimum, leaving large errors. On the other hand, if we set a small learning rate, the purple point will take a very long time to reach the global minimum. Therefore, an optimal learning rate should be chosen. A minimal sketch of the update rule follows below.
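
Here is a minimal NumPy sketch of Batch Gradient Descent for this linear model, showing how the learning rate scales each step; the data and the learning-rate values are illustrative assumptions, not from the original post:

```python
import numpy as np

def batch_gradient_descent(x, y, lr=0.1, n_iters=1000):
    """Batch Gradient Descent for simple linear regression.

    Computes the gradient of J over the FULL training set at every step;
    `lr` is the learning rate controlling the step size.
    """
    m = len(x)
    theta0, theta1 = 0.0, 0.0            # initialize bias and weight
    for _ in range(n_iters):
        predictions = theta0 + theta1 * x
        error = predictions - y
        # Partial derivatives of J with respect to theta0 and theta1
        grad0 = np.sum(error) / m
        grad1 = np.sum(error * x) / m
        theta0 -= lr * grad0             # step downhill, scaled by lr
        theta1 -= lr * grad1
    return theta0, theta1

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])       # generated from y = 2x + 1
print(batch_gradient_descent(x, y))       # converges to roughly (1.0, 2.0)
```

With this toy data, a too-large learning rate (e.g. `lr=1.5`) makes the updates diverge, while a too-small one (e.g. `lr=1e-4`) crawls toward the minimum, matching the trade-off described above.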

