Dismantling Neural Networks to Understand the Inner Workings with Math and Pytorch

Simplified math with examples and code to shed light inside black boxes

Jun 5 · 14 min read

Photo by Florian Klauer on Unsplash

Motivation

As a child, you might have dismantled a toy in a moment of frenetic curiosity. Perhaps you were drawn towards the source of the sound it made. Or perhaps it was the tempting colorful light of a diode that called you forth and moved your hands to crack the plastic open.

Sometimes you may have felt deceived: the inside was nowhere close to what the shiny outside led you to imagine. I hope you were lucky enough to open the right toys, the ones filled with enough intricacies to make breaking them open worthwhile. Maybe you found a futuristic-looking DC motor. Or maybe a curious-looking speaker with a strong magnet on its back that you tried on your fridge. I am sure it felt just right when you discovered what made your controller vibrate.

We are going to do exactly the same: dismantle a neural network with math and with Pytorch. It will be worthwhile, and our toy won’t even break. Maybe you feel discouraged, and that’s understandable. There are so many different and complex parts in a neural network that it can be overwhelming. Consider it the rite of passage to a wiser state.

To help ourselves, we will need a reference, some kind of Polaris to ensure we are on the right course. The pre-built functionalities of Pytorch will be our Polaris. They tell us the output we must get, and it falls upon us to find the logic that leads to that output. If differentiation sounds like a forgotten stranger you were once acquainted with, fret not! We will make introductions again, and it will all be mighty jovial.

I hope you enjoy it.

Linearity

The value of a neuron depends on its inputs, weights, and bias. To compute this value for all neurons in a layer, we calculate the dot product of the matrix of inputs with the matrix of weights, and we add the bias vector. We represent this concisely when we write:

$$Z = XW + b$$

The values of all neurons in one layer.
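In Pytorch, this concise form is a single line of code. Here is a minimal sketch of it; the shapes and values below are illustrative assumptions:

```python
import torch

# One layer with 3 inputs and 2 neurons (shapes are illustrative).
X = torch.tensor([[1., 2., 3.]])   # inputs: 1 sample x 3 features
W = torch.randn(3, 2)              # weights: 3 inputs x 2 neurons
b = torch.randn(2)                 # bias: one per neuron

Z = torch.matmul(X, W) + b         # values of all neurons in the layer
print(Z.shape)                     # torch.Size([1, 2])
```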

Conciseness in mathematical equations, however, is achieved by abstracting away the inner workings. The price we pay for conciseness is that the steps involved become harder to understand and mentally visualize. And to code and debug structures as intricate as neural networks, we need both deep understanding and clear mental visualization. To that end, we favor verbosity:

$$z = x_1 w_1 + x_2 w_2 + x_3 w_3 + b$$

The value of one neuron with three inputs, three weights, and a bias.

Now the equation is grounded by the constraints of a specific case: one neuron, three inputs, three weights, and a bias. We have moved from abstraction to something more concrete, something we can easily implement:
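A minimal sketch of such an implementation, with made-up values, checked against Pytorch’s built-in nn.Linear, our Polaris:

```python
import torch
from torch import nn

# Three inputs, three weights, and a bias (values are made up).
x = torch.tensor([0.9, 0.5, 0.3])
w = torch.tensor([0.2, 0.1, 0.4])
b = torch.tensor(0.6)

# The verbose version: multiply each input by its weight, sum, add the bias.
z = x[0]*w[0] + x[1]*w[1] + x[2]*w[2] + b
print(z)  # tensor(0.9500)

# Our Polaris: Pytorch's built-in linear layer must agree.
linear = nn.Linear(in_features=3, out_features=1)
with torch.no_grad():
    linear.weight.copy_(w.unsqueeze(0))  # weight shape: (out_features, in_features)
    linear.bias.copy_(b.unsqueeze(0))
print(linear(x))  # tensor([0.9500], grad_fn=...)
```

If the two outputs agree, our verbose logic matches what Pytorch computes under the hood.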

