Hierarchical Clustering: Agglomerative and Divisive — Explained


An overview of agglomerative and divisive clustering algorithms and their implementation

Aug 2 · 5 min read


Photo by Lukas Blazek on Unsplash

Hierarchical clustering is a method of cluster analysis that is used to cluster similar data points together. Hierarchical clustering follows either the top-down or bottom-up method of clustering.

What is Clustering?

Clustering is an unsupervised machine learning technique that divides the population into several clusters such that data points in the same cluster are more similar and data points in different clusters are dissimilar.

  • Points in the same cluster are closer to each other.
  • Points in different clusters are far apart.


(Image by Author), Sample 2-dimensional dataset

In the sample 2-dimensional dataset above, it is visible that the data forms 3 clusters that are far apart from each other, while points in the same cluster are close together.
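A toy dataset like the one shown can be generated with scikit-learn's `make_blobs`; the parameter values below are illustrative choices, not taken from the article:

```python
# Generate a 2-dimensional toy dataset with 3 well-separated clusters.
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=42)
print(X.shape)  # (300, 2): 300 points, 2 features
```

A scatter plot of `X` colored by `y` reproduces the kind of picture described above.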

There are several clustering algorithms other than hierarchical clustering, such as k-means clustering, DBSCAN, and many more.

This article covers hierarchical clustering and its two types.

There are two types of hierarchical clustering methods:

  1. Divisive Clustering
  2. Agglomerative Clustering

Divisive Clustering:

The divisive clustering algorithm is a top-down approach: initially, all the points in the dataset belong to one cluster, and splits are performed recursively as one moves down the hierarchy.

Steps of Divisive Clustering:

  1. Initially, all points in the dataset belong to one single cluster.
  2. Partition the cluster into the two least similar clusters.
  3. Proceed recursively to form new clusters until the desired number of clusters is obtained.
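The steps above can be sketched as a bisecting procedure. Using 2-means as the splitting subroutine and SSE to pick which cluster to split next are assumptions of this sketch (the article discusses both choices in the sections that follow), not the only possible implementation:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs


def sse(points):
    """Sum of squared distances of points to their centroid."""
    return ((points - points.mean(axis=0)) ** 2).sum()


def divisive_clustering(X, n_clusters):
    clusters = [X]                       # step 1: one cluster holds everything
    while len(clusters) < n_clusters:    # step 3: repeat until enough clusters
        # Choose the cluster with the largest SSE to split next.
        i = max(range(len(clusters)), key=lambda j: sse(clusters[j]))
        worst = clusters.pop(i)
        # Step 2: partition it into two sub-clusters via 2-means.
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(worst)
        clusters += [worst[labels == 0], worst[labels == 1]]
    return clusters


X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=42)
parts = divisive_clustering(X, 3)
print([len(p) for p in parts])  # three sub-arrays covering all 300 points
```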


(Image by Author), 1st image: all data points belong to one cluster; 2nd image: one cluster is split off from the single cluster; 3rd image: another cluster is split off from the previous set of clusters.

In the above sample dataset, there are 3 clusters that are far apart from each other, so we stop after obtaining 3 clusters.

If we continue splitting even further, the result below is obtained.


(Image by Author), Sample dataset separated into 4 clusters


How to choose which cluster to split?

Compute the sum of squared errors (SSE) of each cluster and choose the one with the largest value. In the 2-dimensional dataset below, the data points are currently separated into 2 clusters; to split further and form a 3rd cluster, compute the SSE of the points in the red cluster and in the blue cluster.


(Image by Author), Sample dataset separated into 2 clusters

The cluster with the largest SSE value is split into 2 clusters, forming a new cluster. In the above image, the red cluster has the larger SSE, so it is split into 2 clusters, giving 3 clusters in total.
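The selection rule is easy to express in code. The two synthetic clusters below are hypothetical stand-ins for the red and blue clusters in the figure; the "red" one is given a larger spread so it has the larger SSE:

```python
import numpy as np


def sse(points):
    """Sum of squared distances of points to their centroid."""
    return ((points - points.mean(axis=0)) ** 2).sum()


rng = np.random.default_rng(0)
red = rng.normal(0.0, 2.0, size=(100, 2))   # larger spread -> larger SSE
blue = rng.normal(5.0, 0.5, size=(100, 2))  # tight cluster -> smaller SSE

clusters = {"red": red, "blue": blue}
to_split = max(clusters, key=lambda name: sse(clusters[name]))
print(to_split)  # red
```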

How to split the above-chosen cluster?

Once we have decided which cluster to split, the question arises of how to split it into 2 clusters. One way is to use Ward's criterion: choose the split that produces the largest reduction in SSE.
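The Ward-style quantity being maximized is the drop in total SSE caused by the split. A minimal sketch, where generating the candidate split with 2-means is an assumption of this example rather than something the article prescribes:

```python
import numpy as np
from sklearn.cluster import KMeans


def sse(points):
    """Sum of squared distances of points to their centroid."""
    return ((points - points.mean(axis=0)) ** 2).sum()


rng = np.random.default_rng(1)
# A cluster that is really two blobs merged together.
cluster = np.vstack([rng.normal(0, 0.5, (50, 2)),
                     rng.normal(4, 0.5, (50, 2))])

# Candidate split into two parts, here produced by 2-means.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cluster)
a, b = cluster[labels == 0], cluster[labels == 1]

# Ward-style criterion: reduction in total SSE achieved by the split.
reduction = sse(cluster) - (sse(a) + sse(b))
print(reduction > 0)  # True: separating the merged blobs lowers SSE
```

Among several candidate splits, the one with the largest `reduction` would be kept.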

How to handle the noise or outlier?

An outlier or noise point can end up forming a new cluster of its own. To handle noise in the dataset, use a threshold in the termination criterion: do not generate clusters that are too small.
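One way to apply such a threshold is to reject any split whose smaller side falls below a minimum size. The `min_size` value here is a hypothetical parameter chosen for illustration:

```python
import numpy as np


def accept_split(part_a, part_b, min_size=5):
    """Reject a split if either side is smaller than min_size --
    such tiny clusters are likely just noise or outliers."""
    return len(part_a) >= min_size and len(part_b) >= min_size


rng = np.random.default_rng(2)
cluster = rng.normal(0, 1, (60, 2))

# A split that isolates a single stray point is rejected:
print(accept_split(cluster[1:], cluster[:1]))    # False
# A balanced split of the same cluster is accepted:
print(accept_split(cluster[:30], cluster[30:]))  # True
```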

