ALL ABOUT UNSUPERVISED LEARNING

Kartikeya Mishra
4 min read · Feb 21, 2022

All About Clustering: Hierarchical Clustering (Agglomerative, Divisive) and Partitioned Clustering (K-means, Fuzzy C-means)

In unsupervised learning the data has no labels. The machine just looks for whatever patterns it can find.

Supervised Learning vs. Unsupervised Learning

Supervised Learning

Deals with labelled data, where the output patterns are known to the system.

Less complex.

Conducts offline analysis.

Comparatively more accurate and reliable results.

Includes classification and regression problems.

Unsupervised Learning

Works with unlabelled data, where the output is based only on the patterns the model itself perceives in the data.

More complex.

Performs real-time analysis.

Moderately accurate but reliable results.

Includes clustering and association rule mining problems.

Clustering:

“Clustering” is the process of grouping similar entities together. The goal of this unsupervised machine learning technique is to find similarities among the data points and group similar data points together.

Need for clustering:

  1. To determine the intrinsic grouping in a set of unlabelled data.
  2. To organise data into clusters showing the internal structure of the data.
  3. To partition the data points.
  4. To understand and extract value from large sets of structured and unstructured data.

Types of Clustering:

  1. Hierarchical clustering: a tree structure that holds a set of nested clusters. It comes in two types: a) agglomerative and b) divisive.
  2. Partitioned clustering: a division of the set of data objects into non-overlapping subsets (clusters) such that every data object is in exactly one subset. It comes in two types: a) K-means and b) Fuzzy C-means.
Hierarchical Clustering

Agglomerative Clustering:

In the agglomerative or bottom-up clustering method, we first assign each observation to its own cluster. Then we compute the similarity (e.g., the distance) between each pair of clusters and join the two most similar clusters, repeating these two steps until only a single cluster is left.
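To make this concrete, here is a minimal sketch using scikit-learn's AgglomerativeClustering; the toy points are invented purely for illustration:

```python
# A minimal agglomerative clustering sketch using scikit-learn.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Six 2-D points forming two loose groups
X = np.array([[1.0, 1.0], [1.5, 1.2], [1.2, 0.8],
              [8.0, 8.0], [8.3, 7.7], [7.8, 8.2]])

# Bottom-up merging stops once two clusters remain
model = AgglomerativeClustering(n_clusters=2, linkage="average")
labels = model.fit_predict(X)
print(labels)  # e.g. [0 0 0 1 1 1]
```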

Divisive Clustering:

In the divisive or top-down clustering method, we assign all of the observations to a single cluster and then partition that cluster into the two least similar clusters, proceeding recursively on each cluster until there is one cluster per observation. There is evidence that divisive algorithms produce more accurate hierarchies than agglomerative algorithms in some circumstances, but they are conceptually more complex.
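Divisive clustering rarely ships as an off-the-shelf routine; one common approximation is to repeatedly bisect a cluster with 2-means. The sketch below assumes scikit-learn and a "split the biggest cluster first" rule, which is one simple choice among several:

```python
# A minimal top-down (divisive) sketch: repeatedly bisect the largest
# remaining cluster with 2-means until the target count is reached.
import numpy as np
from sklearn.cluster import KMeans

def divisive(X, n_clusters):
    clusters = [np.arange(len(X))]   # start with everything in one cluster
    while len(clusters) < n_clusters:
        # pick the biggest cluster and split it in two
        idx = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        members = clusters.pop(idx)
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X[members])
        clusters.append(members[labels == 0])
        clusters.append(members[labels == 1])
    return clusters

X = np.random.rand(100, 2)
for c in divisive(X, 4):
    print(len(c))   # sizes of the four resulting clusters
```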

Working: Hierarchical Clustering

  1. Assign each item to its own cluster, such that if you have N items, you now have N clusters.
  2. Find the closest (most similar) pair of clusters and merge them into a single cluster. You now have one cluster fewer.
  3. Compute the distances (similarities) between the new cluster and each of the old clusters.
  4. Repeat steps two and three until all items are clustered into a single cluster of size N (see the sketch after this list).
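These four steps translate almost line for line into NumPy. The sketch below assumes single linkage (minimum pairwise distance) as the similarity and prints each merge:

```python
# A literal NumPy translation of the four steps: every item starts as
# its own cluster, then the closest pair is merged until one cluster
# of size N is left.
import numpy as np

X = np.array([[0.0], [1.0], [5.0], [6.0], [20.0]])  # toy 1-D data
clusters = [[i] for i in range(len(X))]             # step 1: N singleton clusters

def dist(a, b):
    # single linkage: distance between the closest pair of member points
    return min(np.linalg.norm(X[i] - X[j]) for i in a for j in b)

while len(clusters) > 1:                            # step 4: repeat until one cluster
    # steps 2-3: find and merge the closest pair of clusters
    pairs = [(i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))]
    i, j = min(pairs, key=lambda p: dist(clusters[p[0]], clusters[p[1]]))
    print("merging", clusters[i], "and", clusters[j])
    clusters[i] = clusters[i] + clusters[j]
    del clusters[j]                                 # j > i, so index i is untouched
```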

Distance Measures:

  1. Complete-linkage clustering: find the maximum possible distance between points belonging to two different clusters.
  2. Single-linkage clustering: find the minimum possible distance between points belonging to two different clusters.
  3. Mean-linkage clustering: find all possible pairwise distances for points belonging to two different clusters and then calculate the average.
  4. Centroid-linkage clustering: find the centroid of each cluster and calculate the distance between the centroids (all four measures are compared in the sketch below).
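These four measures map directly onto the `method` argument of SciPy's linkage function. A small comparison sketch, using random toy data (so the exact numbers will vary from run to run):

```python
# Comparing the four linkage methods on the same random data.
import numpy as np
from scipy.cluster.hierarchy import linkage

X = np.random.rand(10, 2)
for method in ("complete", "single", "average", "centroid"):
    Z = linkage(X, method=method)            # (n-1) x 4 merge table
    print(method, "-> final merge distance:", Z[-1, 2])
```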

K-Means Algorithm: An iterative clustering algorithm that refines the cluster centroids in each iteration, converging to a local optimum of the cluster assignment.

Steps:

  1. Specify the desired number of clusters, K.
  2. Randomly assign each data point to a cluster.
  3. Compute the cluster centroids.
  4. Reassign each point to the closest cluster centroid and recompute the centroids, repeating until the assignments stop changing (see the sketch below).
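A minimal NumPy sketch of these four steps, using the random-partition initialisation from step 2 (toy data; for brevity the rare case of an empty cluster is not handled):

```python
# K-means from scratch, following the four steps above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 2))
K = 3                                          # step 1: choose K
labels = rng.integers(0, K, size=len(X))       # step 2: random assignment

for _ in range(20):
    # step 3: centroid of each cluster
    centroids = np.array([X[labels == k].mean(axis=0) for k in range(K)])
    # step 4: reassign each point to the closest centroid
    new_labels = np.argmin(
        np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
    if np.array_equal(new_labels, labels):     # converged
        break
    labels = new_labels

print(np.bincount(labels))                     # points per cluster
```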

Optimal number of clusters:

Choosing the right k is a fundamental issue in k-means clustering.

  1. If you plot k against the SSE, you will see that the error decreases as k increases.
  2. This is because as k grows the clusters get smaller, and hence the distortion within them is also smaller.
  3. The goal of the elbow method is to choose the k at which the SSE decreases abruptly (see the sketch below).
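A short sketch of the elbow method using scikit-learn, where `inertia_` is the SSE of a fitted model (random toy data, so the elbow's location will vary):

```python
# Elbow method: fit k-means for a range of k and plot SSE against k.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

X = np.random.rand(200, 2)
ks = range(1, 11)
sse = [KMeans(n_clusters=k, n_init=10).fit(X).inertia_ for k in ks]

plt.plot(ks, sse, marker="o")
plt.xlabel("number of clusters k")
plt.ylabel("SSE (inertia)")
plt.show()   # pick the k where the curve bends sharply
```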

Logistic Regression:

Logistic regression is used to predict binary outcomes for a given set of independent variables. The dependent variable's outcome is discrete, such that y belongs to {0, 1}. A binary dependent variable can take only two values, such as 0 or 1, win or lose, pass or fail, healthy or sick.


Sigmoid Function Equation: σ(x) = 1 / (1 + e^(-x))

Sigmoid Function – The probability in logistic regression is represented by the sigmoid function (also known as the logistic function or the S-curve).

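A minimal sketch tying the two ideas together: a hand-written sigmoid, plus scikit-learn's LogisticRegression fitted on an invented binary dataset:

```python
# Sigmoid + logistic regression on a toy binary problem.
import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(x):
    # squashes any real number into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0.0))   # 0.5 -- the decision boundary

# One feature; the label is 1 whenever the feature is positive
X = np.linspace(-3, 3, 50).reshape(-1, 1)
y = (X.ravel() > 0).astype(int)

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[2.0]]))   # [P(y=0), P(y=1)]
```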

EXTRA:

Hi everyone,
Hope you all are doing great,
I was going through the internet and found that people were asking for ‘Unsupervised Learning with (Machine Learning) python’, so I created a series on Machine Learning including ‘Unsupervised Learning EXPLANATION and CODING’. If you are into video learning, I am also providing a link to the video below. Hope you like it.

Video link : https://bit.ly/3AeLxY8
