A beginner’s guide to building a simple neural network completely from scratch in Go language
Apr 13 · 5 min read
Introduction
In this tutorial, we’ll build a simple neural network (a single-layer perceptron) in Golang, completely from scratch. We’ll also train it on sample data and make predictions. Creating your own neural network from scratch will help you better understand what happens inside a neural network and how learning algorithms work.
What’s a Perceptron?
The perceptron, invented by Frank Rosenblatt in 1958, is the simplest neural network: it consists of n inputs, a single neuron, and one output, where n is the number of features in our dataset.
Hence, our single-layer perceptron consists of the following components:
- An input layer (x)
- An output layer (ŷ)
- A set of weights (w) and a bias (b) between these two layers
- An activation function (σ) for the output layer. In this tutorial, we’ll be using the sigmoid activation function.
Our neural network is called a single-layer perceptron (SLP) because it has only one layer of neurons. Neural networks with more than one layer of neurons are called multi-layer perceptrons (MLPs).
(Note: an epoch refers to one full cycle through the training dataset.)
Before we start
We’ll build our own functions for the following math operations: vector addition, vector dot product, and scalar-matrix multiplication.
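A minimal sketch of these helpers, applied to vectors stored as float64 slices, might look like the following (the function names dotProduct, vectorAdd, and scalarMultiply are illustrative choices, not necessarily those used in the original code):

```go
package main

// dotProduct returns the dot product of two equal-length vectors.
func dotProduct(a, b []float64) float64 {
	sum := 0.0
	for i := range a {
		sum += a[i] * b[i]
	}
	return sum
}

// vectorAdd returns the element-wise sum of two equal-length vectors.
func vectorAdd(a, b []float64) []float64 {
	out := make([]float64, len(a))
	for i := range a {
		out[i] = a[i] + b[i]
	}
	return out
}

// scalarMultiply scales every element of a vector by s.
func scalarMultiply(s float64, v []float64) []float64 {
	out := make([]float64, len(v))
	for i := range v {
		out[i] = s * v[i]
	}
	return out
}
```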
Initially, the weights of the neural network are set to random float values between 0 and 1, while the bias is set to zero.
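For example, initialization along these lines would match that description (newPerceptron is a hypothetical name; rand.Float64 returns a value in [0, 1)):

```go
import "math/rand"

// newPerceptron returns n weights drawn uniformly from [0, 1)
// and a bias initialized to zero.
func newPerceptron(n int) ([]float64, float64) {
	weights := make([]float64, n)
	for i := range weights {
		weights[i] = rand.Float64()
	}
	return weights, 0.0
}
```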
Forward Propagation
The process of passing data through the neural network is known as forward propagation, or the forward pass. The output of the perceptron is ŷ = σ(w·x + b), where σ is the activation function.
In a nutshell, the dot product of the weight vector (w) and the input vector (x) is added to the bias (b), and the sum is passed through an activation function. The output of the sigmoid activation function lies between 0 and 1.
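Here is one way this forward pass might be written, reusing the dotProduct helper sketched earlier (sigmoid and forwardPass are illustrative names):

```go
import "math"

// sigmoid squashes any real number into the open interval (0, 1).
func sigmoid(x float64) float64 {
	return 1.0 / (1.0 + math.Exp(-x))
}

// forwardPass computes ŷ = σ(w·x + b) for a single input vector.
func forwardPass(x, w []float64, b float64) float64 {
	return sigmoid(dotProduct(x, w) + b)
}
```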
The Learning Algorithm
The learning algorithm consists of two parts: backpropagation and optimization.
Backpropagation, short for backward propagation of errors, refers to the algorithm for computing the gradient of the loss function with respect to the weights. However, the term is often used to refer to the entire learning algorithm.
A loss function is used to estimate how far we are from the desired solution. Generally, mean squared error (MSE) is chosen as the loss function for regression problems and cross-entropy for classification problems. To keep things simple, we’ll use mean squared error as our loss function. Also, we will not calculate the MSE itself but will directly calculate its gradient.
The gradient of the loss function is calculated using the chain rule. The gradients of the loss function with respect to the weights and bias are ∂L/∂w = (ŷ − y) · ŷ · (1 − ŷ) · x and ∂L/∂b = (ŷ − y) · ŷ · (1 − ŷ), where ŷ(1 − ŷ) is the derivative of the sigmoid written in terms of its output (the MSE’s constant factor is folded in).
(For the derivation of these expressions, check my article in which I briefly explain the math behind neural networks.)
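Translated into code, the two gradient expressions could look like this sketch (gradients is an illustrative name, reusing forwardPass and scalarMultiply from the earlier sketches):

```go
// gradients computes ∂L/∂w and ∂L/∂b for one training example using
// the chain rule, with MSE loss and the sigmoid activation.
func gradients(x, w []float64, b, y float64) ([]float64, float64) {
	yHat := forwardPass(x, w, b)
	// ŷ(1−ŷ) is the sigmoid derivative written in terms of its output;
	// the MSE's constant factor is folded in.
	dPred := (yHat - y) * yHat * (1 - yHat)
	dw := scalarMultiply(dPred, x) // ∂L/∂w = ∂L/∂z · x
	db := dPred                    // ∂L/∂b = ∂L/∂z
	return dw, db
}
```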
Optimization is the selection of the best weights and bias for the perceptron to get the desired results. Let’s choose gradient descent as our optimization algorithm. The weights and the bias are updated as follows until convergence: w ← w − α · ∂L/∂w and b ← b − α · ∂L/∂b.
The learning rate (α) is a hyperparameter used to control how much the weights and bias change. However, we will not be using a learning rate in this tutorial, which is equivalent to setting α = 1.
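A training loop implementing these updates might look like the following sketch (train is an illustrative name; it applies plain gradient descent one example at a time, with the learning rate effectively fixed at 1):

```go
// train runs gradient descent for the given number of epochs.
// With no explicit learning rate the update is w ← w − ∂L/∂w (α = 1).
func train(inputs [][]float64, targets []float64, w []float64, b float64, epochs int) ([]float64, float64) {
	for epoch := 0; epoch < epochs; epoch++ {
		for i, x := range inputs {
			dw, db := gradients(x, w, b, targets[i])
			w = vectorAdd(w, scalarMultiply(-1, dw)) // w ← w − dw
			b -= db                                  // b ← b − db
		}
	}
	return w, b
}
```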
Assembling the Pieces
Now let’s train our neural network on the following data and make predictions with it. The data has three inputs and a single output belonging to one of two classes (0 and 1), so it is suitable for our single-layer perceptron.
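The original data table is not reproduced here, so the snippet below shows an illustrative dataset consistent with the description (three inputs per example, a binary label, and Y equal to the first input X1); treat the exact values as an assumption:

```go
// An illustrative dataset: three inputs per example and a binary
// label equal to the first input X1 (an assumption, since the
// article's exact table isn't shown here).
var inputs = [][]float64{
	{0, 0, 1},
	{1, 1, 1},
	{1, 0, 1},
	{0, 1, 1},
}

var targets = []float64{0, 1, 1, 0}
```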
As you can see, the output Y depends only on the input X1. Now we will train our neural network on the above data and check how it performs after 1000 epochs. To make predictions, we just perform a forward pass with the test inputs.
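Assembling the earlier sketches, a hypothetical main could train the perceptron and make a prediction like this:

```go
import "fmt"

func main() {
	// Three features, random weights in [0, 1), zero bias.
	w, b := newPerceptron(3)

	// Train for 1000 epochs on the sample data.
	w, b = train(inputs, targets, w, b, 1000)

	// A prediction is just a forward pass with a test input.
	test := []float64{1, 0, 0} // X1 = 1, so the output should be close to 1
	fmt.Printf("prediction: %.4f\n", forwardPass(test, w, b))
}
```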
As we compare the predicted values with the actual values, we can see that our trained single-layer perceptron has performed well. We’ve successfully created a neural network and trained it to produce desirable results.
What’s Next?
Now you’ve created your own neural network completely from scratch. Here are a few things you can try next.
- Test on your own data
- Try other activation functions besides the sigmoid function
- Calculate the MSE after each epoch (a minimal sketch follows this list)
- Try other error functions besides the MSE
- Try creating a multi-layer perceptron
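For the MSE suggestion above, a minimal sketch might be (meanSquaredError is an illustrative name):

```go
// meanSquaredError returns the average squared difference between
// predictions and targets; handy for tracking the loss per epoch.
func meanSquaredError(preds, targets []float64) float64 {
	sum := 0.0
	for i := range preds {
		d := preds[i] - targets[i]
		sum += d * d
	}
	return sum / float64(len(preds))
}
```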
Final Thoughts
I hope that you’ve learned a lot from creating your own neural network in Golang. I’ll be writing more on topics related to machine learning and deep learning, and I hope to cover the multi-layer perceptron in my next article.