Deep Dive into Competitive Learning of Self-Organizing Maps


The Best Explanation of Competitive Learning in Self-Organising Maps (SOMs) in Unsupervised Artificial Neural Networks on the Internet!


Unsupervised Artificial Neural Networks

The self-organizing map is one of the most popular unsupervised-learning artificial neural networks: the system has no prior knowledge about the features or characteristics of the input data, nor about the class labels of the output data. The network learns to form classes/clusters of sample input patterns according to the similarities among them, so patterns in a cluster have similar features. There is no prior knowledge of which features are important for classification, or of how many classes there are. As the name suggests, the network self-organizes: it adjusts itself to the different classes of inputs. The number of nodes in the weighted layer corresponds to the number of different classes. The whole scheme is based on Competitive Learning.

[Figure: Self-Organizing Map network structure]

What is Competitive Learning?

[Figure: Competitive learning]

In competitive learning, the nodes, each associated with a weight vector, compete with each other to win an input pattern (vector). For each distinct input pattern, the node with the highest response is determined and declared the winner. Only the weights of the winning node are trained, to make them even more similar to the input pattern; the weights of all the other nodes are unchanged. The winner takes all and the losers get nothing, which is why this is called a Winner-Takes-All algorithm.

Strength of a Node = Weighted Sum

For Output Node 1


Y1 = X1W11 + X2W21 + X3W31 + … + XDWD1

Each Node is Associated with a Weight Vector having D Elements

Input Vector X — [X1, X2, X3, …, XD]

Weight Vector of Y1 — [W11, W21, W31, …, WD1]
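
To make the response computation concrete, here is a minimal sketch in Python (NumPy; the sizes and values are illustrative, not from the article):

```python
import numpy as np

# Illustrative sizes: D = 4 input features, 3 output nodes.
rng = np.random.default_rng(0)
X = np.array([0.2, 0.1, 1.4, 0.2])   # input vector [X1, X2, X3, X4]
W = rng.random((4, 3))               # W[d, j]: weight from input d to output node j

Y = X @ W                   # Y[j] = X1*W1j + X2*W2j + ... + XD*WDj
winner = int(np.argmax(Y))  # the node with the highest response wins the input
print(Y, winner)
```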

Training Algorithm

  • Estimate No. of Classes (No. of Output Nodes)
  • Set Weights randomly and normalize
  • Apply the normalized Input Vector X
  • Calculate Strength (i.e. Weighted Sum) of Each Node
  • Determine the Node i with the Highest Response
  • Declare Node i as the ‘Winner’ (i has the Weights most similar to X)
  • Train Weights of Node i to make them even more similar to X
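
A minimal sketch of the whole loop above, assuming NumPy; the function names, learning rate, and epoch count are my own choices. Note this is plain competitive learning; a full SOM would also update the winner's neighbours:

```python
import numpy as np

def normalize(v):
    """Scale a vector to unit length (norm = 1)."""
    return v / np.linalg.norm(v)

def train_winner_takes_all(patterns, n_nodes, lr=0.1, epochs=50, seed=0):
    """Competitive training: only the winning node's weights move toward the input."""
    rng = np.random.default_rng(seed)
    D = patterns.shape[1]
    W = rng.random((D, n_nodes))
    W /= np.linalg.norm(W, axis=0)                   # set weights randomly and normalize
    for _ in range(epochs):
        for x in patterns:
            x = normalize(x)                         # apply the normalized input vector
            winner = np.argmax(x @ W)                # highest weighted sum -> winner
            W[:, winner] += lr * (x - W[:, winner])  # train only the winner's weights
            W[:, winner] = normalize(W[:, winner])   # keep them on the unit sphere
    return W

# Two rough clusters; with n_nodes=2, the weight vectors tend to align with the clusters.
patterns = np.array([[0.2, 0.1, 1.4, 0.2],
                     [0.3, 0.1, 1.3, 0.2],
                     [1.5, 0.1, 0.1, 0.1],
                     [1.4, 0.2, 0.1, 0.1]])
print(train_winner_takes_all(patterns, n_nodes=2))
```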

Training Process

[Figure: Training artificial neural networks]

During Training:

  • The Weight Vector of the Winner is made more similar to the Current Input Vector
  • In other words, the Current Input Vector is ‘Transferred’ to the Winner

After Training:

  • The Winner carries the Input it won (the weight vector of the winning node now retains the input pattern it has been trained on)
  • Any successive Input similar to the previous one selects this Node as the Winner

It is often said that, during training, the weight vector of the winner is made more similar to the current input vector, but the theory behind why pulling the weight vector towards the input vector works is rarely explained. So here I would like to explain this theory with the basic mathematics of neural competition.

Scalars and Vectors

Scalar

  • A Scalar has only Magnitude e.g. length, area, volume, speed, mass, density, pressure, temperature

Vector

  • A Vector has both Magnitude and Direction e.g. displacement, direction, velocity, acceleration, momentum, force, weight

Normalization of Input and Weight Vectors

  • For convenient training, both Input and Weight Vectors are normalized to a unit length
  • Normalization Process is explained below

Consider Vector X = [X1, X2, X3, …, XD]

Norm of X: |X| = √(X1² + X2² + X3² + … + XD²)

Normalized X: X̂ = X / |X|

  • The Norm of a Vector is said to be the ‘Strength’ of the Vector, i.e. its Magnitude
  • The Norm of a Normalized Vector is 1 (unit vector)

e.g. X = [0.2, 0.1, 1.4, 0.2]

|X| = √(0.2² + 0.1² + 1.4² + 0.2²) = √2.05 ≈ 1.4318

Normalized X = [0.1397, 0.0698, 0.9778, 0.1397]

Norm of Normalized X = √(0.1397² + 0.0698² + 0.9778² + 0.1397²) = 1

Normalization

A normalized vector has elements between −1 and 1 (between 0 and 1 when, as here, the inputs are non-negative). When the input features are on different scales, e.g. [1.2, 0.001, 10.6], normalization brings them to a uniform standard. When the weight vectors are also normalized, the training process becomes simple. When all input patterns are normalized to unit length, they can be represented as different radii of a unit sphere (different orientations).
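
The worked example above can be checked in a couple of lines (NumPy assumed):

```python
import numpy as np

x = np.array([0.2, 0.1, 1.4, 0.2])
x_hat = x / np.linalg.norm(x)     # divide each element by the norm |X| ≈ 1.4318
print(x_hat)                      # [0.1397 0.0698 0.9778 0.1397]
print(np.linalg.norm(x_hat))      # 1.0 -> unit length
```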

[Figure: Input patterns normalized to unit length, represented as radii of a unit sphere]

The diagram below shows the normalized weight vectors on a unit sphere, together with an input vector represented in the same sphere.

[Figure: Input vector and weight vectors represented in a unit sphere]

Here the Length of each Vector = 1.

Before and After Normalisation

[Figure: Before and after normalization]

What is required for the net to encode the training set is that the weight vectors become aligned with any clusters present in the set, with each cluster represented by at least one node. Then, when a vector is presented to the net, there will be a node, or a group of nodes, that responds maximally to the input.

The Similarity of Two Vectors

[Figure: Similarity of two vectors]
  • If X1 = [x1, x2, x3, x4] and Y1 = [y1, y2, y3, y4], then X1 = Y1 if and only if x1 = y1, x2 = y2, x3 = y3, and x4 = y4. X1 and Y1 are then said to be ‘identical’.

  • Consider Vectors X and Y

Dot Product: X·Y = |X| |Y| cos θ

|X| — Length of Vector X

θ — Angle between the two Vectors

If |X| = 1 and |Y| = 1, then

X·Y = cos θ, with 0 ≤ cos θ ≤ 1 (the elements are non-negative, so θ lies between 0° and 90°)

If θ → 0 (then cos θ → 1), the two Unit Vectors coincide

i.e. both Vectors (X and Y) are Equal

i.e. X coincides with Y

[Figure: Training process of two vectors to make them equal]
X·Y = |X| |Y| cos θ = 1 · 1 · cos θ = cos θ

When θ → 0, Vector X = Vector Y.

So we reduce the angle θ between the two vectors in order to make the two normalized vectors equal.
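
A quick numerical check of this idea (the vectors and the step size below are mine, chosen for illustration): repeatedly nudging a unit weight vector towards a unit input vector drives their dot product, i.e. cos θ, towards 1:

```python
import numpy as np

x = np.array([0.6, 0.8])      # normalized input vector, |X| = 1
w = np.array([1.0, 0.0])      # normalized weight vector, |W| = 1

for step in range(5):
    cos_theta = x @ w             # for unit vectors, the dot product is cos(theta)
    print(f"step {step}: cos(theta) = {cos_theta:.4f}")
    w = w + 0.5 * (x - w)         # move w towards x ...
    w = w / np.linalg.norm(w)     # ... and re-normalize to unit length
```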

Training of SOMs

So during training, we find the winning node for a given input pattern, the node with the highest response value. Then the weight vector of the winner is made more similar to the current input vector. According to the mathematical explanation above, what we do during training is adjust the angle between the normalized input vector and the normalized weight vector of the winning node until the two vectors coincide with each other, in other words, until the two vectors become equal.

Training makes the weights of a particular node similar to the applied input. In other words, the input vector is ‘transferred’ to the winning node in the form of its weights. When a similar input vector is applied later, the weighted sum of the same winner will be the highest.

[Figure: Training process]

This process is continued for all the input patterns until, for each input pattern, the weight vector of the winning node coincides with the input vector.

Training Equation — Kohonen Learning Rule

[Figure: Kohonen learning rule]

We can verify this mathematical explanation using the Kohonen learning rule, Δwij = η · yj · (xi − wij): the weights are adjusted only for the winning output node, whose output yj is 1; for every other node the adjustment is zero because the output is zero. When the output is 1, the weight vector W of the winning node is pulled towards the input vector X, which is exactly the adjustment of the angle between the two vectors described above. When the two vectors coincide, no further weight adjustment is needed for that pattern. This process is continued for all the input patterns until the artificial neural network is fully trained.
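
Restated as code, the rule is a one-line update (the learning rate η and the starting weights below are illustrative): repeated application drives the winner's weight vector onto the input vector, after which θ = 0 and nothing changes any more:

```python
import numpy as np

eta = 0.5                                        # learning rate (illustrative)
x = np.array([0.1397, 0.0698, 0.9778, 0.1397])   # normalized input from the earlier example
w = np.array([0.5, 0.5, 0.5, 0.5])               # winner's weight vector (already unit length)

for _ in range(20):
    w += eta * 1 * (x - w)        # y = 1 for the winner; for losers y = 0, so no change
    w /= np.linalg.norm(w)        # keep the weight vector on the unit sphere

print(np.allclose(w, x, atol=1e-3))  # True: the weights now coincide with the input
```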

Final Thought


I hope this article helps you understand the actual theory behind the competitive learning of Self-Organising Maps (SOMs) in unsupervised artificial neural networks. It was written with the aim of sharing the knowledge of my experienced lecturer with the rest of the world. All the credit goes to my university senior lecturer Dr. H.L. Premaratne, who specializes in:

  • Neural Networks and Pattern Recognition
  • Image Processing and Computer Vision

It was written at his request, as there was a lack of articles explaining this theory on the internet. I hope you all gain this valuable knowledge from an expert in the area.

Thank you!

