Dismantling Neural Networks to Understand the Inner Workings with Math and PyTorch



Simplified math with examples and code to shed light inside black boxes

Jun 5 · 14 min read


Photo by Florian Klauer on Unsplash

Motivation

As a child, you might have dismantled a toy in a moment of frenetic curiosity. You were drawn, perhaps, towards the source of the sound it made. Or perhaps it was the tempting colorful light of a diode that called you forth and moved your hands to crack the plastic open.

Sometimes you may have felt deceived that the inside was nowhere close to what the shiny outside led you to imagine. I hope you were lucky enough to open the right toys, the ones filled with enough intricacies to make breaking them open worthwhile. Maybe you found a futuristic-looking DC motor. Or maybe a curious-looking speaker with a strong magnet on its back that you tried on your fridge. I am sure it felt just right when you discovered what made your controller vibrate.

We are going to do exactly the same. We are dismantling a neural network with math and with PyTorch. It will be worthwhile, and our toy won’t even break. Maybe you feel discouraged. That’s understandable. There are so many different and complex parts in a neural network. It is overwhelming. It is the rite of passage to a wiser state.

So to help ourselves we will need a reference, some kind of Polaris to ensure we are on the right course. The pre-built functionalities of PyTorch will be our Polaris. They will tell us the output we must get, and it will fall upon us to find the logic that leads us to that correct output. If differentiation sounds like a forgotten stranger you might once have been acquainted with, fret not! We will make introductions again, and it will all be mighty jovial.

I hope you enjoy it.

Linearity

The value of a neuron depends on its inputs, weights, and bias. To compute this value for all neurons in a layer, we calculate the dot product of the matrix of inputs with the matrix of weights, and we add the bias vector. We represent this concisely when we write:

$$Z = XW + b$$

The values of all neurons in one layer.
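To see what this concise form computes, here is a minimal sketch in PyTorch. The shapes and values are made up purely for illustration: one sample of three inputs feeding a layer of two neurons. The pre-built torch.nn.Linear serves as our Polaris to confirm the result:

```python
import torch

# Made-up example: one sample of 3 inputs feeding a layer of 2 neurons.
X = torch.tensor([[0.1, 0.2, 0.3]])           # inputs, shape (1, 3)
W = torch.tensor([[0.4, 0.5, 0.6],
                  [0.7, 0.8, 0.9]])           # weights, shape (2, 3)
b = torch.tensor([0.1, 0.2])                  # one bias per neuron

# Dot product of the inputs with the (transposed) weights, plus the bias vector.
Z_manual = X @ W.T + b

# Polaris: PyTorch's pre-built linear layer, loaded with the same parameters.
linear = torch.nn.Linear(in_features=3, out_features=2)
with torch.no_grad():
    linear.weight.copy_(W)
    linear.bias.copy_(b)

print(torch.allclose(Z_manual, linear(X)))    # True
```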

Conciseness in mathematical equations, however, is achieved through abstraction of the inner workings. The price we pay for conciseness is that the steps involved become harder to understand and mentally visualize. And to code and debug structures as intricate as neural networks, we need both deep understanding and clear mental visualization. To that end, we favor verbosity:

$$y = x_1 w_1 + x_2 w_2 + x_3 w_3 + b$$

The value of one neuron with three inputs, three weights, and a bias.

Now the equation is grounded with constraints imposed by a specific case: one neuron, three inputs, three weights, and a bias. We have moved away from abstraction to something more concrete, something we can easily implement:
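Here is a minimal sketch of that implementation in PyTorch. The input, weight, and bias values are arbitrary, chosen only for illustration:

```python
import torch

# Arbitrary values for the three inputs, three weights, and the bias.
x = torch.tensor([0.9, 0.5, 0.3])
w = torch.tensor([0.2, 0.1, 0.4])
b = torch.tensor(0.6)

# The verbose equation, term by term: y = x1*w1 + x2*w2 + x3*w3 + b
y = x[0] * w[0] + x[1] * w[1] + x[2] * w[2] + b
print(y)                                      # tensor(0.9500)

# The same value as a dot product, confirming both forms agree.
print(torch.allclose(y, torch.dot(x, w) + b)) # True
```

Each indexed term mirrors a subscript in the verbose equation, so we can follow the value of the neuron being built up one multiplication at a time.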

