Backpropagation from scratch on Mini-Batches

Implementation of the backpropagation algorithm on mini-batches, with step-by-step execution of the equations.

Apr 19 · 4 min read

Photo by john foust on Unsplash

You must be thinking: another backprop-from-scratch blog? Well, kinda, yes. But I thought this through and came up with something you can actually tinker with, along with the easy-to-understand equations you usually write down to make sense of the algorithm.

This blog focuses on implementing the backpropagation algorithm step by step on mini-batches of the dataset. There are plenty of tutorials and blogs that demonstrate the algorithm in detail, along with all the calculus and algebra behind it, so I'll skip that part and cut straight to the equations and their implementation in Python (coz why not).

Why from scratch?

It has long been a community question why we should implement an algorithm from scratch when it is readily available in almost every framework. While using high-level frameworks you don't even notice backpropagation doing its magic. To understand it completely, inside and out, you should get your hands dirty with it at least once. Backpropagation is also something you can experiment with while playing around.

Why Mini-Batches?

The reason behind mini-batches is simple: they save memory and processing time by dividing the data into chunks and feeding the algorithm a fraction of the dataset on each iteration of the training loop. Feeding a 10000x10000 matrix at once would not only blow up memory but also take a long time per update. Bringing it down to, say, 50 rows per iteration reduces memory usage, and you can track progress as you go.
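To make that concrete, here is a minimal sketch of the splitting step. The function name, batch size, and toy arrays are my own illustration, not from the original post:

```python
import numpy as np

def make_mini_batches(X, y, batch_size=50, seed=0):
    """Shuffle the rows and split them into mini-batches."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))                  # shuffled row indices
    X, y = X[idx], y[idx]
    return [
        (X[start:start + batch_size], y[start:start + batch_size])
        for start in range(0, len(X), batch_size)
    ]

# 10,000 rows in batches of 50 -> 200 mini-batches per epoch
X = np.random.rand(10000, 5)
y = np.random.randint(0, 2, size=(10000, 1))
print(len(make_mini_batches(X, y)))               # 200
```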

Note: This is different from the stochastic method, where we take a stratified sample from the data for each class and train on that, assuming the model will generalize.
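For contrast, a quick sketch of the stratified sampling that note refers to; `stratified_sample` and its parameters are hypothetical names I'm using for illustration:

```python
import numpy as np

def stratified_sample(X, y, per_class=25, seed=0):
    """Draw an equal number of rows from each class label."""
    rng = np.random.default_rng(seed)
    rows = []
    for cls in np.unique(y):
        cls_idx = np.where(y.ravel() == cls)[0]    # rows belonging to this class
        rows.extend(rng.choice(cls_idx, size=per_class, replace=False))
    rows = np.array(rows)
    return X[rows], y[rows]
```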

Implementation time!

This is the head of the data I’ll be using for this implementation.

The target variable here is Occupancy which is a categorical variable (0/1).

This will be the architecture we’ll be coding.

Algorithm:

for i := 1 to m (one pass per mini-batch):

  1. Perform forward propagation (the forward pass) to calculate the activation values of the neurons in each layer:
z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)}, \qquad a^{(l)} = \sigma(z^{(l)})

where a^{(0)} is the mini-batch input and \sigma is the activation function.

2. Backpropagation step:

  • Calculate the error term at the output layer (MSE or log loss, your choice) using the label in the data:
\delta^{(L)} = \nabla_a C \odot \sigma'(z^{(L)})

where C is the chosen cost function and \odot denotes element-wise multiplication.
  • Error terms in the hidden layers are calculated by propagating the error backwards through the weights:

\delta^{(l)} = \big( (W^{(l+1)})^{T} \delta^{(l+1)} \big) \odot \sigma'(z^{(l)})

  • The gradients \partial C / \partial W^{(l)} = \delta^{(l)} (a^{(l-1)})^{T}, averaged over the mini-batch, are then used to update the weights by gradient descent, as in the sketch below.
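Putting the steps together, here is a minimal runnable sketch of the whole training loop. To be clear, this is my own illustration of the equations above, not the original post's code: the single hidden layer, the layer sizes, the sigmoid activation, the MSE error term, the learning rate, and the toy stand-in for the occupancy data are all assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

rng = np.random.default_rng(0)

# Assumed architecture: n_in inputs -> one hidden layer -> 1 output unit
n_in, n_hidden, n_out = 5, 8, 1
W1 = rng.normal(0.0, 0.1, (n_hidden, n_in)); b1 = np.zeros((n_hidden, 1))
W2 = rng.normal(0.0, 0.1, (n_out, n_hidden)); b2 = np.zeros((n_out, 1))
lr, batch_size = 0.5, 50

# Toy stand-in for the occupancy data (columns are examples, labels are 0/1)
X = rng.random((n_in, 10000))
y = (X.sum(axis=0, keepdims=True) > n_in / 2).astype(float)

for epoch in range(5):
    idx = rng.permutation(X.shape[1])              # shuffle example indices
    for start in range(0, X.shape[1], batch_size):
        cols = idx[start:start + batch_size]       # one mini-batch
        a0, yb = X[:, cols], y[:, cols]
        m = a0.shape[1]

        # 1. Forward pass: z^(l) = W^(l) a^(l-1) + b^(l),  a^(l) = sigma(z^(l))
        z1 = W1 @ a0 + b1; a1 = sigmoid(z1)
        z2 = W2 @ a1 + b2; a2 = sigmoid(z2)

        # 2. Output-layer error for MSE: delta^(L) = (a^(L) - y) * sigma'(z^(L))
        d2 = (a2 - yb) * sigmoid_prime(z2)
        # Hidden-layer error: delta^(l) = (W^(l+1)^T delta^(l+1)) * sigma'(z^(l))
        d1 = (W2.T @ d2) * sigmoid_prime(z1)

        # Gradient-descent update, gradients averaged over the mini-batch
        W2 -= lr * (d2 @ a1.T) / m; b2 -= lr * d2.mean(axis=1, keepdims=True)
        W1 -= lr * (d1 @ a0.T) / m; b1 -= lr * d1.mean(axis=1, keepdims=True)

    loss = 0.5 * np.mean((sigmoid(W2 @ sigmoid(W1 @ X + b1) + b2) - y) ** 2)
    print(f"epoch {epoch}: MSE {loss:.4f}")
```

Swapping in log loss would only change the output-layer error term; with a sigmoid output, the cross-entropy delta simplifies to a2 - yb.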
